Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
To address the technical problems in the prior art that a fully virtualized game interaction mode leaves the user without a real playing feel and with poor interactivity, the embodiments of the present application provide a solution: the server device allocates virtual content to a template object in the real scene, and the augmented reality device displays a picture in which the template object in the real scene is fused with the virtual content in the virtual scene. During the game, the user can synchronously control the virtual content by manipulating the template object. The user thus obtains a real tactile experience through the template prop and a virtual interactive experience through the virtual content, truly achieving presence and the fusion of reality and virtuality and greatly improving game interactivity.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates an augmented reality interaction method provided in an embodiment of the present application. The method can be applied to a user's augmented reality device, such as AR glasses, a HUD (head-up display), or another device capable of realizing augmented reality functions.
In order to implement augmented reality interaction among multiple users, an augmented reality interaction system may be established in the embodiments of the present application, where each user in the system has at least one augmented reality device. Since the process of implementing augmented reality interaction is the same for every augmented reality device in the augmented reality interaction system, the following embodiments of the present application take a first augmented reality device as an example to describe the process in detail, where the first augmented reality device may be any augmented reality device in the augmented reality interaction system.
As shown in fig. 1, the method includes:
100. identifying identification information contained in a template object in a real scene to which the first augmented reality device belongs according to the interaction instruction;
101. sending the identification information to a server so that the server can distribute virtual content to the template object according to the identification information;
102. receiving the virtual content returned by the server, and displaying the virtual content at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene;
103. and following the interactive action of the template object, performing linkage display on the virtual content.
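For illustration, the following sketch shows how steps 100 to 103 might be orchestrated on the device side. It is a minimal sketch only: the device and server objects and all of their method names are hypothetical, since the application does not prescribe any concrete API.

```python
# Hypothetical sketch of steps 100-103 on the first augmented reality device.
# The `device` and `server` objects and their methods are illustrative only.

def run_interaction(device, server):
    # Step 100: identify the identification information contained in the
    # template object (e.g. decode a two-dimensional code in the camera view).
    identification = device.scan_template_object()

    # Step 101: send the identification to the server, which allocates
    # virtual content to the template object according to it.
    server.report_identification(identification)

    # Step 102: receive the allocated virtual content and display it at the
    # mapping position of the template object in the virtual scene.
    virtual_content = server.fetch_virtual_content(identification)
    position = device.map_real_to_virtual(device.locate_template_object())
    device.display(virtual_content, at=position)

    # Step 103: as the user manipulates the template object, keep the
    # virtual content locked to it (linkage display).
    while device.interaction_active():
        position = device.map_real_to_virtual(device.locate_template_object())
        device.display(virtual_content, at=position)
```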
When a user wears the first augmented reality device to interact with other users, each user has a template object. A template object may appear in the user's line of sight at any time, but it does not need to be identified at all times. For example, in a Fight-the-Landlord card game, before the cards are dealt the server has not yet assigned virtual content to the template objects, so even if the user sees a template object there is no need for the first augmented reality device to recognize it; the first augmented reality device needs to recognize the template object only after the cards are dealt. In this embodiment, when the first augmented reality device receives the interactive instruction, it executes the identification processing for the template object in the real scene, that is, it identifies the identification information contained in the template object in the real scene.
In this embodiment, the template object mainly serves as a carrier of the virtual content and may be any object with a certain information-carrying capability. In addition, to allow identification, each template object has unique identification information, so that different template objects can be distinguished by their identification information.
In some implementations, the template object in the real scene may be a non-electronic information carrier, such as a physical game prop like a playing card, a chess piece, a prop knife, or a prop gun. Correspondingly, the identification information may be a recognizable image, such as a two-dimensional code image or a barcode image, disposed on the template object. For example, these game props may themselves be blank but carry identification information: blank playing cards containing two-dimensional codes at the four corners, or blank chess pieces containing a barcode at the center.
In other implementations, the template object in the real scene may also be an electronic information carrier, for example a micro display screen or an electronic device with a micro display screen, such as a mobile phone or a tablet computer. Correspondingly, the identification information is the MAC address or the IP address of the electronic information carrier.
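For illustration, recognition of a two-dimensional code on a non-electronic template object could proceed as in the following minimal sketch, which uses OpenCV's built-in QR detector; the camera index and the one-shot capture are assumptions, not part of the present application.

```python
# Minimal sketch: decode the identification information carried by a
# two-dimensional code on a template object, using OpenCV.
import cv2

def identify_template_object(frame):
    """Return the identification string encoded in the template object's
    two-dimensional code, or None if no code is visible in the frame."""
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(frame)
    return payload or None

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)      # the device camera (index assumed)
    ok, frame = capture.read()
    if ok:
        print("identification:", identify_template_object(frame))
    capture.release()
```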
After the identification information of the template object is identified, it is sent to the server so that the server can match virtual content to the template object according to it. The identification information thus serves to match virtual content to the template object, and the server can allocate virtual content to each template object according to its identification information.
In this embodiment, augmented reality presentation is performed based on the template object and the virtual content assigned to it by the server. To make the template objects universal, they are set in a unified form, such as playing cards, and each template object contains identification information that uniquely identifies it. In the augmented reality interaction, new content is given to the template object through the virtual content. For example, when the template objects are playing cards, the server distributes virtual content to the cards in one game so that each card obtains a corresponding suit and number; in another game, the server distributes virtual content to the cards again, and each card may obtain a suit and number different from before. With only a limited number of cards, the user can thus present many suits and numbers according to the game content without frequently replacing props.
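For illustration, such an allocation could look like the following minimal server-side sketch; the deck model, the identification strings, and the random shuffle are assumptions, since the application leaves the concrete game rules open.

```python
# Minimal sketch: allocate a (suit, number) pair of virtual content to each
# template-object identification; the same blank cards can be reassigned
# different content in the next game simply by reshuffling.
import itertools
import random

SUITS = ["spades", "hearts", "diamonds", "clubs"]
NUMBERS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]

def allocate_virtual_content(identifications):
    deck = list(itertools.product(SUITS, NUMBERS))
    random.shuffle(deck)
    return {ident: deck[i] for i, ident in enumerate(identifications)}

assignments = allocate_virtual_content(["card-001", "card-002", "card-003"])
print(assignments)   # e.g. {'card-001': ('hearts', 'Q'), ...}
```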
In this embodiment, a camera, for example a depth camera, may be disposed on the first augmented reality device and used to identify the template object and scan the user's interactions. In step 103, the virtual content is tracked and displayed at the mapping position of the template object in the virtual scene according to the position mapping relationship between the real scene and the virtual scene, so that the virtual content is displayed in linkage with the user's interactions on the template object. For example, when the user holds the template object and moves it, the virtual content follows the template object and moves synchronously. For another example, when the user throws the template object out of the field of view, the virtual content disappears from the field of view. Visually, the user perceives the virtual content and the template object as moving as a whole within the field of view.
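For illustration, the position mapping relation between the real scene and the virtual scene could be modeled as a rigid transform, as in the following minimal sketch; the 4x4 homogeneous-matrix representation is an assumption, since the application does not fix a concrete form of the mapping.

```python
# Minimal sketch: map the template object's real-scene position into the
# virtual scene; re-running the mapping on every new camera observation
# keeps the virtual content locked to the object (linkage display).
import numpy as np

def make_mapping(rotation, translation):
    """Build a 4x4 homogeneous transform from real to virtual coordinates."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def map_real_to_virtual(T, p_real):
    """Map a 3D point of the template object into the virtual scene."""
    return (T @ np.append(p_real, 1.0))[:3]

T = make_mapping(np.eye(3), np.array([0.0, 0.0, -0.5]))   # example values
print(map_real_to_virtual(T, np.array([0.1, 0.2, 1.0])))
```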
In this embodiment, the server device allocates virtual content to the template object in the real scene, and the first augmented reality device presents the picture obtained by superimposing the template object in the real scene and the virtual content in the virtual scene. During the game, the user can synchronously control the virtual content by manipulating the template object. The user thus obtains a real tactile experience through the template prop and a virtual interactive experience through the virtual content, truly achieving presence and the fusion of reality and virtuality and greatly improving game interactivity.
In the above or following embodiments, one implementation of step 100 may be:
monitoring an identification permission opening notice sent by the server;
when the identification permission opening notice is monitored, detecting whether the real scene contains the template object;
if the template object is detected, identifying the identification information contained in the template object; if the template object is not detected, detection of whether the real scene contains the template object continues until the template object is detected.
The identification permission opening notification sent by the server may be a game starting notification, an interaction starting notification, a camera opening instruction, and the like. For example, when the first augmented reality device monitors an instruction sent by the server to turn on the AR camera, the camera is turned on, and once the camera is turned on, the first augmented reality device automatically recognizes the identification information of the template object in the field of view.
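For illustration, gating recognition on the server's permission-opening notice could look like the following minimal sketch; the `server` and `device` objects, their method names, and the polling loop are assumptions for illustration only.

```python
# Hypothetical sketch: do not recognize until the identification permission
# is opened, then keep detecting until a template object appears.
import time

def wait_and_identify(server, device, poll_interval=0.1):
    # Idle until the identification permission opening notice arrives,
    # e.g. a game-start notification or a camera-opening instruction.
    while not server.permission_opened():
        time.sleep(poll_interval)
    # Permission opened: detect until the template object is found.
    while True:
        identification = device.decode_identification(device.capture_frame())
        if identification is not None:
            return identification
        time.sleep(poll_interval)
```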
Of course, besides the above implementation, the interaction instruction may also be issued by the user, as an instruction actually issued through voice, a physical key, or touch; for example, the user speaks a "start game" voice command, or the user gazes at a "start" button in the virtual scene.
It should be noted that the implementation of the above-mentioned interactive instruction is merely exemplary, and should not be taken as a specific limitation to the interactive instruction of the present application. According to different practical application situations, the interactive instruction can adopt other implementation modes.
In the above or following embodiments, in order to give the user the sense of face-to-face interaction, the first augmented reality device may acquire scene pictures and/or voice data in the real scene in real time and send them to the server, so that the server synchronizes the scene pictures and/or voice data to the other augmented reality devices interacting with the first augmented reality device. Likewise, the other augmented reality devices can receive the scene pictures and/or voice data fed back by the server and display them synchronously.
In this embodiment, a lens may be disposed on the first augmented reality device to capture the picture of the real scene where the user is located. When the user interacts with other users online, the other party's scene picture can be presented in the virtual scene.
To facilitate online communication between users, in this embodiment the first augmented reality device may further include an audio component that collects voice data in real time; voice synchronization through the server device then enables online communication.
Through the first augmented reality device, the user can see the fused picture of their own template object and virtual content. To give the user the experience of face-to-face interaction with other users, the virtual content corresponding to the other users can also be displayed synchronously in the first augmented reality device, so that multiple users obtain the interactive experience of a virtual-real blended environment.
To this end, in the above or the following embodiments, after step 102, the method further comprises:
displaying an interactive interface, wherein the interactive interface at least comprises an interactive control, a template object in a real scene to which the first augmented reality device belongs, virtual content in the virtual scene, and template objects in real scenes to which other augmented reality devices interacting with the first augmented reality device belong;
and responding to the operation of the user on the interaction control, and sending a virtual content sharing notification to the server so that the server can send the sharable virtual content to other augmented reality equipment interacting with the first augmented reality equipment according to the virtual content sharing notification to be displayed.
In this embodiment, the interactive interface is used to show the user the virtual-real blended picture of their own side and that of the opposite side. The interactive interface also provides an interactive control, so that the user can exercise interactive control by operating it.
In a practical application, the interactive interface can comprise an own-side display area and an opposite-side display area. The own-side display area can display the picture formed by fusing the template object in the real scene to which the user belongs with the virtual object in the virtual environment, and it can also display the interactive control. The opposite-side display area can display the character picture of the opposite user, the template objects in the real scene to which the opposite user belongs, or the sharable virtual content corresponding to the opposite user.
For example, in a Fight-the-Landlord game, the template objects can be paper playing cards. The own-side display area can display the playing cards with their virtual suits and numbers, as well as interactive keys such as "play" and "pass". The opposite-side display area can display the cards that the opposite user has not yet played, as well as the cards the opposite user has played together with their corresponding virtual suits and numbers. When the user sees the cards and corresponding virtual suits and numbers played by the opposite user in the opposite-side display area, the user can decide which cards to play next. After selecting, the user plays the cards by gazing at the "play" key; at this moment, based on the cards just played, a virtual content sharing notification can be sent to the server, and the virtual suits and numbers corresponding to those cards are shared to the other users' augmented reality devices, so that the other users can see them in their opposite-side display areas.
In this embodiment, the sharing permission of the virtual content is controlled according to the user's operation of the interactive control, so that virtual content can be shared among users, multiple users can feel immersed in a combined virtual-real interactive environment, and interactivity is improved.
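For illustration, the sharing notification triggered by the interactive control could look like the following minimal sketch; the JSON message shape, field names, and device identifiers are assumptions, since the application does not specify a wire protocol.

```python
# Hypothetical sketch: build the virtual content sharing notification sent
# to the server when the user operates the 'play' control; the server then
# forwards the now-sharable content to the other interacting devices.
import json

def build_share_notification(device_id, played_identifications):
    return json.dumps({
        "type": "virtual_content_share",
        "device_id": device_id,
        "identifications": played_identifications,
    })

print(build_share_notification("ar-device-1", ["card-017", "card-042"]))
```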
Fig. 2 illustrates an augmented reality interaction method provided in another embodiment of the present application. As shown in fig. 2, the method includes:
200. identifying identification information contained in a template object in a real scene to which the first augmented reality device belongs according to the interaction instruction;
201. sending the identification information to a server so that the server can distribute virtual content to the template object according to the identification information;
202. receiving the virtual content returned by the server;
203. acquiring position information of the template object in the real scene;
204. tracking and displaying the virtual content at the mapping position of the template object in the virtual scene according to the position information of the template object in the real scene and the position mapping relation between the real scene and the virtual scene;
205. and following the interactive action of the template object, performing linkage display on the virtual content.
For the description of steps 200-202, 205, reference is made to the foregoing embodiments, and the description is omitted here.
In this embodiment, in order to display the virtual content more accurately and obtain a better visual effect, the virtual content returned by the server is not displayed at a random position in the virtual scene. Instead, the position information of the template object in the real scene is first acquired, and the mapping position of the template object in the virtual scene is then calculated according to the position mapping relationship between the real scene and the virtual scene, so that the virtual content can be tracked and displayed at the mapping position of the template object in the virtual scene.
The visual effect is best for the user when the virtual content is tracked and displayed at the mapping position of the template object in the virtual scene; for example, for playing cards, displaying the suits and numbers at the four corners best matches the user's visual habits, while for chess pieces the piece name is best displayed at the center. In step 204, the coordinate position of the identification information in the real scene may first be obtained; then, according to that coordinate position and the position mapping relationship between the real scene and the virtual scene, the coordinate position of the identification information in the virtual scene is determined, and the virtual content is tracked and displayed at that coordinate position. The coordinate position of the identification information in the real scene can be calculated from the set position of the identification information on the template object and the position information of the template object in the real scene.
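For illustration, anchoring the virtual content at the identification information's own coordinates could look like the following minimal sketch; the code's offset on the card and the identity mapping transform are illustrative values, not values taken from the application.

```python
# Minimal sketch: compute the identification's real-scene coordinate from
# the template object's position plus the code's set position on the object,
# then map it into the virtual scene for tracking display.
import numpy as np

def identification_position_real(object_position, id_offset_on_object):
    return np.asarray(object_position) + np.asarray(id_offset_on_object)

def identification_position_virtual(T_real_to_virtual, p_real):
    return (T_real_to_virtual @ np.append(p_real, 1.0))[:3]

# Example: a corner two-dimensional code offset 3 cm from the card's center.
T = np.eye(4)                                    # identity mapping, for demo
p_real = identification_position_real([0.2, 0.1, 0.8], [0.03, 0.03, 0.0])
print(identification_position_virtual(T, p_real))
```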
To obtain a better visual experience, the display scale of the virtual content can also be adjusted according to the movement of the template object. For example, when a displacement of the template object in the Z-axis direction is detected, the scaling of the virtual content can be calculated from that displacement. Specifically, the scale of the virtual content can be enlarged when the template object moves in the positive Z direction and reduced when it moves in the negative Z direction, so as to adapt to the visual difference caused by the template object moving nearer to or farther from the user's eyes.
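For illustration, a minimal sketch of such scaling follows; the linear scaling law and the sensitivity parameter are assumptions, as the application only states that the scale grows with positive Z displacement and shrinks with negative Z displacement.

```python
# Hypothetical sketch: enlarge the virtual content when the template object
# moves in the positive Z direction and shrink it when it moves in the
# negative direction.
def display_scale(base_scale, z_start, z_now, sensitivity=1.0):
    displacement = z_now - z_start
    return max(0.0, base_scale * (1.0 + sensitivity * displacement))

print(display_scale(1.0, z_start=0.8, z_now=1.0))  # moved +0.2 -> enlarged
print(display_scale(1.0, z_start=0.8, z_now=0.6))  # moved -0.2 -> reduced
```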
Fig. 3 illustrates an augmented reality interaction method provided in another embodiment of the present application. As shown in fig. 3, the method includes:
300. identifying identification information contained in a template object in a reality scene to which the first augmented reality device belongs according to the interactive instruction;
301. sending the identification information to a server so that the server can distribute virtual content to the template object according to the identification information;
302. collecting the contour features of the template object;
303. sending the outline characteristics of the template object to the server so that the server can generate a virtual object according to the outline characteristics and the virtual content;
304. receiving the virtual object returned by the server, and displaying the virtual object in a tracking manner at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene;
305. and performing linkage display on the virtual object along with the interactive action on the template object.
For the description of steps 300-301, 305, reference is made to the foregoing embodiments, and the description is omitted here.
In this embodiment, in order to enhance the sense of virtuality, the server device constructs a virtual object according to the contour features of the template object and the virtual content. The virtual object may be a 3D model or a two-dimensional image model. The contour features of the template object may be obtained through image recognition by an image recognition component on the first augmented reality device. For example, the first augmented reality device acquires the contour features of a playing card and uploads them to the server device; the server device constructs a 3D playing-card model from the contour features and draws the suit and number on the model according to the identification information of the template object.
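For illustration, combining the contour features with the virtual content into a virtual object could look like the following minimal server-side sketch; the dataclass layout is an assumption, since the application allows either a 3D or a two-dimensional model.

```python
# Hypothetical sketch: a simple virtual object combining the uploaded
# contour (shape), the allocated virtual content, and optional extras.
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    contour: list          # outline points uploaded by the device
    content: dict          # e.g. {"suit": "spades", "number": "A"}
    extras: list = field(default_factory=list)  # other model elements

def generate_virtual_object(contour, virtual_content):
    return VirtualObject(contour=contour, content=virtual_content)

obj = generate_virtual_object(
    contour=[(0, 0), (0.063, 0), (0.063, 0.088), (0, 0.088)],  # card, in m
    virtual_content={"suit": "spades", "number": "A"},
)
print(obj)
```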
In this embodiment, the virtual object may be tracked and displayed at the mapping position of the template object in the virtual scene according to the position information of the template object in the real scene and the position mapping relationship between the real scene and the virtual scene. When the virtual object is tracked and displayed on the template object, the virtual content can be tracked and displayed at the position of the identification information of the template object in the manner provided in the foregoing embodiments, or at other positions on the virtual object as required.
Of course, the virtual object may include other model elements in addition to the virtual content, and these can be rendered for the virtual object according to its attributes. For example, for a virtual object generated from a prop knife, after the basic 3D knife model is constructed from the contour features of the prop knife, a knife-handle accessory or a sheath pattern can be added to the basic 3D knife model according to the prop's in-game attributes, thereby enriching the game picture.
In this embodiment, the virtual object generated from the contour features of the template object and the virtual content is tracked and displayed on the template object. When the user interacts with the template object during the augmented reality interaction, the user visually experiences performing the interaction directly on the virtual object in the virtual environment, which helps enhance the sense of virtuality.
Fig. 4 illustrates an augmented reality interaction method provided in an embodiment of the present application. The method is applicable to a server device and includes:
400. receiving identification information contained in a template object in a real scene sent by augmented reality equipment;
401. distributing virtual content to the template object according to the identification information;
402. and sending the virtual content to the augmented reality equipment so that the augmented reality equipment tracks and displays the virtual content at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene, and performs linkage display on the virtual content along with the interaction action of the template object.
In this embodiment, during interaction between multiple users through augmented reality devices, each user holds a template object. The template object may be a non-electronic carrier as described above, such as playing cards (each user may hold several blank playing cards), or an electronic carrier as described above, such as a micro display screen held in each user's hand. The template object mainly serves as a carrier of the virtual content and may be any object with a certain information-carrying capability. In addition, to allow identification, each template object has unique identification information, so that different template objects can be distinguished. When the server receives the identification information contained in a template object in the real scene from the augmented reality device, it allocates virtual content to the template object. The server device can allocate virtual content according to preset game rules; for example, for a card game, the server device can allocate suits and numbers to the playing cards according to the card-drawing order and the two-dimensional codes on the cards. The server device can also allocate virtual content according to a preset correspondence between identification information and virtual content; for example, for a prop knife, the server can allocate to the prop knife the knife name corresponding to the two-dimensional code on it.
The server device can pre-store a plurality of virtual contents and, after allocating virtual content to a template prop, record the correspondence between the prop's identification information and the virtual content so as to verify subsequent interactive actions. For example, when a user plays a card, the augmented reality device sends the identification information corresponding to that card to the server, and the server can, according to the received identification information, either add a "played" mark to the corresponding suit and number or delete them directly.
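For illustration, this record keeping could look like the following minimal sketch; the in-memory dictionaries stand in for whatever storage the server device actually uses, and all names are hypothetical.

```python
# Hypothetical sketch: server-side ledger recording the correspondence
# between identification information and virtual content, and marking
# played cards when a device reports them.
class ContentLedger:
    def __init__(self):
        self.assignments = {}   # identification -> virtual content
        self.played = set()     # identifications already played

    def assign(self, identification, content):
        self.assignments[identification] = content

    def mark_played(self, identification):
        # Alternatively, the entry could be deleted outright.
        if identification in self.assignments:
            self.played.add(identification)

ledger = ContentLedger()
ledger.assign("card-007", ("clubs", "9"))
ledger.mark_played("card-007")
print(ledger.played)   # {'card-007'}
```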
In this embodiment, the server device allocates virtual content to the template prop according to the identification information of the template prop sent by the augmented reality device, and sends the virtual content to the augmented reality device. The augmented reality device can then perform augmented reality presentation based on the template object in the real scene and the virtual content in the virtual scene, so that the user obtains a visual effect of virtual-real fusion and simultaneously obtains a real sense of touch and a virtual sense of sight during the interaction, which improves the sense of real participation and enriches interactivity.
In the above or following embodiments, before step 400, the method further comprises:
sending an identification permission opening notice to the augmented reality device according to a preset rule, so that when the augmented reality device detects that the real scene to which it belongs contains the template object, it identifies the identification information contained in the template object according to the interactive instruction.
The identification permission opening notification sent by the server can be a game starting notification, an interaction starting notification, a camera opening instruction and the like. For example, when the augmented reality device monitors an instruction sent by the server to turn on the AR camera, the camera is turned on, and once the camera is turned on, the augmented reality device automatically identifies the identification information of the template object in the field of view.
In this embodiment, the server controls the recognition permission of the augmented reality devices according to the preset rule, preventing the augmented reality devices from executing invalid recognition operations. For example, in a Fight-the-Landlord game, according to a preset rule no user can determine which card faces they will obtain before the cards are dealt; only after the server sends a game start notification to each augmented reality device according to the preset rule does the augmented reality device perform identification of the identification information, and before that it does not.
Fig. 5 illustrates an augmented reality interaction method according to another embodiment of the present application. As shown in fig. 5, the method includes:
500. receiving identification information contained in a template object in a real scene to which the augmented reality device belongs, wherein the identification information is sent by the augmented reality device;
501. distributing virtual content to the template object according to the identification information;
502. sending the virtual content to the augmented reality equipment so that the augmented reality equipment tracks and displays the virtual content at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene, and displays the virtual content in a linkage manner along with the interaction action of the template object;
503. receiving a virtual content sharing notification sent by the augmented reality device;
504. and sending the sharable virtual content to other augmented reality equipment interacting with the augmented reality equipment for display according to the virtual content sharing notice.
For the description of steps 500-502, reference may be made to the above embodiments, which are not repeated herein.
In this embodiment, when users communicate with each other, the server may synchronize the scene picture and/or voice data of the real scene sent from one device side to the other augmented reality devices interacting with that device. However, to ensure the privacy of the virtual content, not all virtual content may be presented to the other augmented reality devices. For example, in a Fight-the-Landlord game, the cards that have not been played should be visible only to the user holding them; the server therefore sets the unplayed suits and numbers to be invisible to the other augmented reality devices, so that they are sent only to the first augmented reality device and not to the other users' devices.
When the user operates the interactive control in the augmented reality device, a virtual content sharing notification is sent to the server, and the server sends the sharable virtual content to the other augmented reality devices interacting with the augmented reality device for display according to the received notification.
For example, in a Fight-the-Landlord game, after the user of an augmented reality device plays two cards, the server can send the virtual suits and numbers corresponding to those two cards to the other users' augmented reality devices for display, so that the two cards become visible to all users. In this way, multiple users are simultaneously immersed in an interactive environment combining the virtual and the real, which enhances the interactivity of the game.
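For illustration, this privacy rule could be expressed as in the following minimal sketch; the data model and device identifiers are assumptions for illustration only.

```python
# Hypothetical sketch: unplayed content is visible only to its owner, while
# content named in a sharing notification becomes visible to the other
# interacting devices.
def contents_visible_to(viewer_id, owner_id, assignments, shared_ids):
    if viewer_id == owner_id:
        return set(assignments)              # the owner sees everything
    return {i for i in assignments if i in shared_ids}

assignments = {"card-1": ("hearts", "3"), "card-2": ("spades", "K")}
shared = {"card-2"}                          # card-2 was just played
print(contents_visible_to("dev-B", "dev-A", assignments, shared))  # {'card-2'}
```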
Fig. 6 illustrates an augmented reality interaction method provided in another embodiment of the present application. As shown in fig. 6, the method includes:
600. receiving identification information contained in a template object in a real scene to which the augmented reality device belongs, wherein the identification information is sent by the augmented reality device;
601. distributing virtual content to the template object according to the identification information;
602. receiving the contour feature of the template object sent by the augmented reality equipment;
603. generating a virtual object according to the outline feature and the virtual content;
604. and sending the virtual object to the augmented reality equipment so that the augmented reality equipment tracks and displays the virtual object at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene, and performs linkage display on the virtual object along with the interaction action of the template object.
For the description of steps 600-601, 604, reference may be made to the above embodiments, which are not described herein again.
In this embodiment, in order to enrich the picture content in the virtual environment, the server may not only allocate virtual content to the template object but also generate a virtual object from the received contour features of the template object and the virtual content. The components of the virtual object may be pre-stored in the server device: for example, the corresponding virtual content is determined from the identification information, the corresponding 3D model is determined from the contour features, and the two are combined to generate the virtual object. To meet personalized requirements, the virtual object may instead be constructed by the server device in real time: for example, a 3D model is built from the contour features and the virtual content is added to it to generate the virtual object; other model elements may also be rendered into the 3D model to enrich the visual appearance of the virtual object. Of course, other virtual object generation methods may also be used, and the present application is not specifically limited in this respect.
In this embodiment, the server device generates the virtual object according to the contour features of the template object and the virtual content. When the augmented reality device tracks and displays the virtual object on the template object, the user, while interacting with the template object, visually experiences performing the interactive action directly on the virtual object in the virtual environment, which helps enhance the sense of virtuality.
Fig. 7 illustrates an augmented reality device according to an embodiment of the present application. As shown in fig. 7, the augmented reality device includes a memory 70 and a processor 71.
The memory 70 is used to store computer programs and may be configured to store various other data to support operations on the augmented reality device. Examples of such data include instructions for any application or method operating on the augmented reality device, contact data, phonebook data, messages, pictures, videos, etc.
the memory 70 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 71, coupled to the memory 70, for executing computer programs in the memory for:
identifying identification information contained in a template object in a real scene to which augmented reality equipment belongs according to the interactive instruction;
sending the identification information to a server so that the server can distribute virtual content to the template object according to the identification information;
receiving the virtual content returned by the server, and tracking and displaying the virtual content at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene;
and following the interactive action of the template object, performing linkage display on the virtual content.
In some embodiments, the processor 71, after receiving the virtual content returned by the server and displaying the virtual content at the mapping position of the template object in the virtual scene according to the position mapping relationship between the real scene and the virtual scene, is further configured to:
displaying an interactive interface, wherein the interactive interface at least comprises an interactive control, a template object in a real scene to which the augmented reality equipment belongs, virtual content in the virtual scene, and template objects in real scenes to which other augmented reality equipment interacting with the augmented reality equipment belongs;
and responding to the operation of the user on the interactive control, and sending a virtual content sharing notice to the server so that the server can send the sharable virtual content to other augmented reality equipment interacting with the augmented reality equipment according to the virtual content sharing notice for displaying.
In some embodiments, when identifying, according to the interactive instruction, the identification information included in the template object in the reality scene to which the augmented reality device belongs, the processor 71 is specifically configured to:
monitoring an identification permission opening notice sent by the server;
when the identification permission opening notice is monitored, detecting whether the real scene contains the template object;
if yes, identifying the identification information contained in the template object.
In some embodiments, when displaying the virtual content at the mapping position of the template object in the virtual scene, the processor 71 is specifically configured to:
acquiring position information of the template object in the real scene;
and displaying the virtual content at the mapping position of the template object in the virtual environment according to the position information of the template object in the real scene and the position mapping relation between the real scene and the virtual scene.
In some embodiments, processor 71 executes computer programs in memory 70 for:
acquiring the coordinate position of the identification information in a real scene;
determining the coordinate position of the identification information in the virtual scene according to the coordinate position of the identification information in the real scene and the position mapping relation between the real scene and the virtual scene;
overlaying the virtual content at the coordinate position of the identification information in the virtual scene.
In some embodiments, before receiving the virtual content returned by the server, the processor 71 is further configured to:
collecting the contour features of the template object;
sending the outline characteristics of the template object to the server so that the server can generate a virtual object according to the outline characteristics and the virtual content;
the receiving the virtual content returned by the server comprises: and receiving the virtual object returned by the server.
In some embodiments, processor 71 executes computer programs in memory 70 for:
acquiring scene pictures and/or voice data in the real scene in real time;
and sending the scene picture and/or the voice data to the server so that the server synchronizes the scene picture and/or the voice data to other augmented reality equipment interacting with the augmented reality equipment.
In some embodiments, the template object is a non-electronic information carrier, and the identification information is a two-dimensional code image or a barcode image arranged on the template object; or
The template object is an electronic information carrier, and the identification information is the MAC address or the IP address of the electronic information carrier.
Further, as shown in fig. 7, the augmented reality device further includes: a communication component 72, a power component 73, an audio component 74, a camera 75, and the like. Only some of the components are schematically shown in fig. 7, and it is not meant that the augmented reality device includes only the components shown in fig. 7.
Wherein the communication component 72 is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further comprises a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The power supply unit 73 supplies power to various components of the device in which the power supply unit is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component 74 may be configured to output and/or input audio signals, among other things. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
The camera 75 may be configured to capture a scene picture and identify the identification information included in the template object.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the augmented reality device in the foregoing method embodiments when executed.
Fig. 8 illustrates a server device according to an embodiment of the present application. As shown in fig. 8, the server device includes a memory 80 and a processor 81.
The memory 80 stores computer programs and may be configured to store various other data to support operations on the server device. Examples of such data include instructions for any application or method operating on the server device, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 80 is implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 81 is coupled to the memory 80 for executing the computer program in the memory 80 for:
receiving identification information contained in a template object in a real scene sent by augmented reality equipment;
distributing virtual content to the template object according to the identification information;
and sending the virtual content to the augmented reality equipment so that the augmented reality equipment tracks and displays the virtual content at the mapping position of the template object in the virtual scene according to the position mapping relation between the real scene and the virtual scene, and performs linkage display on the virtual content along with the interaction action of the template object.
In some embodiments, processor 81, after sending the virtual content to the augmented reality device, is further configured to:
receiving a virtual content sharing notification sent by the augmented reality device;
and sending the sharable virtual content to other augmented reality equipment interacting with the augmented reality equipment for display according to the virtual content sharing notice.
In some embodiments, the processor 81, before receiving the identification information included in the template object in the real scene transmitted by the augmented reality device, is further configured to:
sending an identification permission opening notice to the augmented reality device according to a preset rule, so that when the augmented reality device detects that the real scene to which it belongs contains the template object, it identifies the identification information contained in the template object according to the interactive instruction.
In some embodiments, the processor 81 executes computer programs in the memory 80 for:
before the virtual content is sent to the augmented reality equipment, receiving the contour feature of the template object sent by the augmented reality equipment;
generating a virtual object according to the outline feature and the virtual content;
the sending the virtual content to the augmented reality device includes: and sending the virtual object to the augmented reality equipment.
Further, as shown in fig. 8, the server device further includes: a communication component 82, a display 83, a power component 84, and the like. Only some of the components are schematically shown in fig. 8, and it is not meant that the server device includes only the components shown in fig. 8.
Wherein the communication component 82 is configured to facilitate wired or wireless communication between the device in which the communication component is located and other devices. The device in which the communication component is located may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display 83 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP), among others. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply 84 provides power to various components of the device in which the power supply is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps that can be executed by the server device in the foregoing method embodiments when executed.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory, random access memory (RAM), and/or non-volatile memory in a computer-readable medium, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.