CN114647303A - Interaction method, device and computer program product - Google Patents
Interaction method, device and computer program product
- Publication number
- CN114647303A (application number CN202011509842.1A)
- Authority
- CN
- China
- Prior art keywords
- virtual object
- displaying
- interactive
- space
- interaction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
        - G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
          - G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
          - G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
      - G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
        - G06F2203/01—Indexing scheme relating to G06F3/01
          - G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Abstract
The invention discloses an interaction method, an interaction apparatus and a computer program product. The method comprises the following steps: obtaining an augmented reality (AR) display space corresponding to the position of an object based on an image acquired by a visual sensor; acquiring a virtual object associated with an information point corresponding to the position of the object; and displaying the interactive effect of the virtual object in the AR display space based on an interactive operation between the object and the virtual object. The invention solves the technical problem in the related art that visits are recorded simply by geographical position, so that the recording mode is monotonous, lacks interest and playability, and struggles to attract users.
Description
Technical Field
The invention relates to the field of computer technology, and in particular to an interaction method, an interaction apparatus and a computer program product.
Background
When visiting a scenic area, a tourist can record the attractions visited by stamping a purchased souvenir book, or by using a footprint-recording function (i.e., a position-recording function) provided in an application. The inventor found that although both approaches record the places visited, the recording process lacks interest and interactivity; moreover, the recording mode is monotonous and cannot offer tourists a way to interact that reflects the characteristics of each attraction.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide an interaction method, an interaction apparatus and a computer program product, to at least solve the technical problem in the related art that visits are recorded simply by geographical position, making the recording mode monotonous, of low interest and playability, and unattractive to users.
According to one aspect of the embodiments of the present invention, an interaction method is provided, including: obtaining an augmented reality (AR) display space corresponding to the position of an object based on an image acquired by a visual sensor; acquiring a virtual object associated with an information point corresponding to the position of the object; and displaying the interactive effect of the virtual object in the AR display space based on an interactive operation between the object and the virtual object.
According to another aspect of the embodiments of the present invention, another interaction method is provided, including: displaying an augmented reality (AR) display space corresponding to the position of an object, obtained based on an image acquired by a visual sensor; displaying a virtual object to be operated; displaying an interactive operation between the object and the virtual object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
According to another aspect of the embodiments of the present invention, an interaction apparatus is provided, including: a first acquisition module, configured to obtain an augmented reality (AR) display space corresponding to the position of an object based on an image acquired by a visual sensor; a second acquisition module, configured to acquire a virtual object associated with an information point corresponding to the position of the object; and a first display module, configured to display the interactive effect of the virtual object in the AR display space based on an interactive operation between the object and the virtual object.
According to another aspect of the embodiments of the present invention, another interaction apparatus is provided, including: a second display module, configured to display an augmented reality (AR) display space corresponding to the position of an object, obtained based on an image acquired by a visual sensor; a third display module, configured to display a virtual object to be operated; a fourth display module, configured to display an interactive operation between the object and the virtual object; and a fifth display module, configured to display the interactive effect of the virtual object in the AR display space based on the interactive operation.
According to still another aspect of embodiments of the present invention, there is provided a computer apparatus including: a memory and a processor, the memory storing a computer program; the processor is configured to execute the computer program stored in the memory, and when the computer program runs, the processor is enabled to execute any one of the above interaction methods.
According to a further aspect of the embodiments of the present invention, there is provided a storage medium, including a stored program, where when the program runs, a device on which the storage medium is located is controlled to execute the interaction method described in any one of the above.
According to a further aspect of embodiments of the present invention, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the interactive method according to any one of the above.
In the embodiments of the invention, an augmented reality (AR) display space corresponding to the position of an object is obtained based on an image acquired by a visual sensor, a virtual object associated with an information point corresponding to the position of the object is acquired, and the interactive effect of the virtual object in the AR display space is displayed based on an interactive operation between the object and the virtual object. This achieves the purpose of interacting with the information point by operating the virtual object, realizes AR-based interaction with the information point with high interest and playability, and improves the user experience, thereby solving the technical problem in the related art that visits are recorded simply by geographical position, making the recording mode monotonous, of low interest and playability, and unattractive to users.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 shows a hardware configuration block diagram of a computer terminal for implementing the interactive method;
FIG. 2 is a flowchart of a first interaction method according to embodiment 1 of the present invention;
FIG. 3 is a flowchart of a second interaction method according to embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of an imperial seal check-in process according to an alternative embodiment of the invention;
FIG. 5 is a schematic diagram of gesture actions according to an alternative embodiment of the present invention;
FIG. 6 is a schematic view of hanging a virtual amulet according to an alternative embodiment of the invention;
FIG. 7 is a schematic diagram of a multi-device interaction in accordance with an alternative embodiment of the present invention;
fig. 8 is a block diagram of a first interaction device according to embodiment 2 of the present invention;
fig. 9 is a block diagram of a second interaction device according to embodiment 3 of the present invention;
fig. 10 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Moreover, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Augmented Reality (AR) is a technology that fuses virtual information with the real world: computer-generated virtual information such as text, images, three-dimensional models, music and video is simulated and then applied to the real world, so that the two kinds of information complement each other and the real world is enhanced. AR check-in means that after a user arrives at a place, a virtual mark is left at that place in an AR manner, representing a sign that the user has visited the place.
Example 1
There is also provided, in accordance with an embodiment of the present invention, an interactive method embodiment, it being noted that the steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
The method embodiment provided by embodiment 1 of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a hardware structure block diagram of a computer terminal (or mobile device) for implementing the interaction method. As shown in fig. 1, the computer terminal 10 (or mobile device) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), a memory 104 for storing data, and a transmission device for communication functions. It may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the bus), a network interface, a power source, and/or a camera. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as the program instructions/data storage devices corresponding to the interaction method in the embodiment of the present invention. The processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, thereby implementing the interaction method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that, in some embodiments, the computer device (or mobile device) shown in fig. 1 has a touch display (also referred to as a "touch screen" or "touch display screen"). In some embodiments, the computer device (or mobile device) shown in fig. 1 has a graphical user interface (GUI), and the user can interact with the GUI through finger contacts and/or gestures on a touch-sensitive surface. The human-computer interaction functions optionally include web browsing, drawing, word processing, electronic document creation, games, video conferencing, instant messaging, e-mail, calls, digital video playback, digital music playback and the like; the executable instructions for performing these human-computer interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
Under the above operating environment, the present application provides an interaction method as shown in fig. 2. Fig. 2 is a flowchart of an interaction method according to embodiment 1 of the present invention. As shown in fig. 2, the process includes the following steps:
step S202, obtaining an augmented reality AR display space corresponding to the position of an object based on an image acquired by a visual sensor;
step S204, acquiring a virtual object associated with the information point corresponding to the position of the object;
step S206, displaying the interactive effect of the virtual object in the AR display space based on the interactive operation between the object and the virtual object.
Through the above steps, an augmented reality (AR) display space corresponding to the position of the object is obtained from the image collected by the visual sensor, the virtual object associated with the information point corresponding to the position of the object is acquired, and the interactive effect of the virtual object in the AR display space is displayed based on an interactive operation between the object and the virtual object. This achieves the purpose of interacting with the information point by operating the virtual object, realizes AR-based interaction with the information point with high interest and playability, and improves the user experience, thereby solving the technical problem in the related art that visits are recorded simply by geographical position, making the recording mode monotonous, of low interest and playability, and unattractive to users.
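To make the flow of steps S202-S206 concrete, the following is a minimal sketch; it is purely illustrative, and every type and name in it (ARDisplaySpace, poiObjects, and so on) is an assumption of this sketch, not part of the patent or of any real AR SDK.

```typescript
// Illustrative sketch of steps S202-S206; every name here is hypothetical.
type Pose = { x: number; y: number; z: number };
type VirtualObject = { id: string; name: string };

class ARDisplaySpace {
  private placed = new Map<string, Pose>();
  // S202 stand-in: a real system would reconstruct this space from
  // visual-sensor images; here only the resulting space is modeled.
  constructor(public readonly locationId: string) {}
  place(obj: VirtualObject, pose: Pose): void {
    // S206: the interactive effect is the object shown at the chosen pose.
    this.placed.set(obj.id, pose);
  }
  describe(): string[] {
    return Array.from(this.placed.keys()).map((id) => `${id} shown in ${this.locationId}`);
  }
}

// S204 stand-in: each information point (POI) maps to an associated virtual object.
const poiObjects: Record<string, VirtualObject> = {
  "palace-museum": { id: "imperial-seal", name: "virtual imperial seal" },
};

const space = new ARDisplaySpace("palace-museum");   // S202
const obj = poiObjects["palace-museum"];             // S204
space.place(obj, { x: 0, y: 0, z: -1 });             // S206
console.log(space.describe());
```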
As an alternative embodiment, the visual sensor may be an instrument that acquires image information of the external environment through an optical element and an imaging device, and its performance can be described by image resolution. The accuracy of a visual sensor is related not only to its resolution but also to the detection distance of the measured object: the farther away the measured object is, the poorer its absolute position accuracy. In an optional embodiment of the invention, the visual sensor may be used to obtain the AR display space where the object is located, i.e., a three-dimensional display space.
As an alternative embodiment, the object may be the target for which the record is made, for example a person, an animal, or an intelligent machine. The information point (POI) corresponding to the position of the object may be of many kinds, for example a scenic spot or a representative building (e.g., a landmark building). There may likewise be many virtual objects associated with the information point, the virtual object being a virtual mark identifying the information point. For example, where the information point is an attraction, the virtual object may be a virtual mark associated with the attraction space, such as an object with a cultural or historical connection to the attraction; the virtual object may also be a universal virtual mark that need not belong to a particular environment.
As an optional embodiment, the interactive operation between the object and the virtual object may be implemented in various ways, for example by directly operating the display screen of the device, or, to increase the user's sense of real experience, as follows. Displaying the interactive effect of the virtual object in the AR display space based on the interactive operation between the object and the virtual object includes: determining that the virtual object is to be moved to a target position in the AR display space; detecting a gesture of the object; and showing the virtual object moving to the target position based on the gesture. Detecting the gesture may also take various forms. The gesture may be recognized through a touch screen containing touch-sensitive elements, specifically by sliding, clicking, or combined finger contacts on the screen. The gesture may also be sensed through a photosensitive sensor, for example a camera-and-computation module that captures and processes video of the gesture, or photosensitive elements that react to gestures affecting their illumination. The gestures for operating the virtual object can be quite rich, including moving, selecting, adding, deleting, enlarging, shrinking, three-dimensional rotation, combining or splitting, with the virtual object changed correspondingly for each gesture. After the operation corresponding to the gesture is recognized in this optional manner, the virtual object is moved to the corresponding position according to the operation and shown moving to that position in the AR scenic-spot space. This satisfies the user's personalized requirements for the AR scene, further improves the playability of interacting with the scenic spot through AR, and improves the user experience. A sketch of this gesture-to-placement mapping follows.
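The following sketch shows one plausible way to map recognized gestures onto the placement of the virtual object; the gesture set and the Placement shape are assumptions for illustration only, not the patent's API.

```typescript
// Hypothetical gesture set; names and fields are assumptions.
type Gesture =
  | { kind: "drag"; dx: number; dy: number }
  | { kind: "pinch"; scale: number }
  | { kind: "rotate"; radians: number };

type Placement = { x: number; y: number; scale: number; rotation: number };

// Apply one recognized gesture to the virtual object's placement, moving it
// toward the target position in the AR display space.
function applyGesture(p: Placement, g: Gesture): Placement {
  switch (g.kind) {
    case "drag":   return { ...p, x: p.x + g.dx, y: p.y + g.dy };
    case "pinch":  return { ...p, scale: p.scale * g.scale };
    case "rotate": return { ...p, rotation: p.rotation + g.radians };
  }
}

let placement: Placement = { x: 0, y: 0, scale: 1, rotation: 0 };
placement = applyGesture(placement, { kind: "drag", dx: 0.2, dy: -0.1 });
placement = applyGesture(placement, { kind: "rotate", radians: Math.PI / 4 });
console.log(placement);
```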
As an optional embodiment, when the interaction of the virtual object in the AR display space is displayed based on the operation on the virtual object, multiple devices may jointly complete the whole interaction, improving the effect of cooperative interaction. For example: connections are established between the devices of different objects; displaying the interactive effect of the virtual object in the AR display space based on the interactive operation between the object and the virtual object then includes: based on the interactive operations between the virtual object and the different objects whose devices are connected, the interactive effect of the virtual object in the corresponding AR display space is displayed on each of those devices. The connection between the devices of different objects can be implemented in various ways, for example through a short-distance communication technology or within a local area network. Through such connections, multiple objects (e.g., multiple users) can operate on the same virtual object, and the same virtual object can be presented on multiple devices. For example, two or more devices can be recognized using a cloud-collaboration capability, realizing interaction among multiple people and scenes, achieving cooperative interaction, and improving playability. Taking a scenic spot as the information point, one device can scan the scenic-spot environment to establish the AR scenic-spot space and operate the virtual article; another device then scans the same environment and performs its corresponding operation on the virtual article, together realizing the placement or change of the virtual article. A sketch of such a shared session appears below.
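A minimal sketch of the multi-device idea, assuming a shared session keyed by the scanned scene in which each device's operation is broadcast to every connected device; the session API is invented for illustration.

```typescript
// Hypothetical shared AR session; all names are assumptions.
type Operation = { deviceId: string; objectId: string; action: string };

class SharedARSession {
  private devices = new Set<string>();
  private log: Operation[] = [];
  constructor(public readonly sceneId: string) {}
  join(deviceId: string): void {
    this.devices.add(deviceId);
  }
  // An operation from one device is rendered on every connected device,
  // so all participants see the same interactive effect.
  apply(op: Operation): void {
    this.log.push(op);
    for (const d of this.devices) this.renderOn(d, op);
  }
  private renderOn(deviceId: string, op: Operation): void {
    console.log(`device ${deviceId}: ${op.objectId} ${op.action} in ${this.sceneId}`);
  }
}

const session = new SharedARSession("temple-scene");
session.join("phone-A");
session.join("phone-B");
session.apply({ deviceId: "phone-A", objectId: "amulet-1", action: "hang" });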
As an optional embodiment, the interaction result of the virtual object in the AR display space may be stored and shared. Storing the interaction result lets the user recall it later and share it with others. For example, when the information point is an attraction and the user places a virtual object at a specific position in the AR attraction space, a virtual mark is formed at that position. When the user opens the AR attraction space again, the virtual mark can still be seen as the user's interaction result and can be shared with others through a sharing function. The recipient of the share may view a 3D model of the virtual object, or, upon arriving at the attraction, may use the AR scanning function to view the shared virtual object or virtual mark left by the sharer at that position in the AR attraction space. A sketch of this persistence and sharing follows.
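The following sketch illustrates one way such interaction results could be persisted and shared; the record shape and the in-memory store are assumptions, not a prescribed format.

```typescript
// Hypothetical persistence of interaction results; all names are assumptions.
type InteractionResult = {
  userId: string;
  poiId: string;        // the information point (e.g., the attraction)
  objectId: string;     // the virtual object left behind
  pose: { x: number; y: number; z: number };
  createdAt: number;
};

const resultsByPoi = new Map<string, InteractionResult[]>();

// Persist the result so the virtual mark reappears on the next visit.
function saveResult(r: InteractionResult): void {
  const list = resultsByPoi.get(r.poiId) ?? [];
  list.push(r);
  resultsByPoi.set(r.poiId, list);
}

// Share: the payload carries enough data for the recipient to view the 3D
// model remotely, or to re-localize the mark when at the same POI.
function shareResult(r: InteractionResult, recipientId: string): string {
  return JSON.stringify({ to: recipientId, result: r });
}

saveResult({ userId: "u1", poiId: "temple", objectId: "amulet-1",
             pose: { x: 1, y: 2, z: 0 }, createdAt: Date.now() });
console.log(shareResult(resultsByPoi.get("temple")![0], "u2"));
```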
As an alternative embodiment, a message uploaded for the interaction result is received. That is, by uploading a message attached to the interaction result, the user can express the mood and feelings of the visit, enjoy the pleasure of sharing with others, and so gain a better experience.
As an optional embodiment, the message may be left in at least one of the following formats: speech, text, video, or photos. This optional embodiment supports messages in various data formats, giving users more choices.
As an alternative embodiment, the virtual object may include an article matched with the information point corresponding to the position of the object. For example, the virtual object may be an object with landmark significance in the attraction environment, which carries commemorative meaning and cultural value. For instance, when the attraction is the Palace Museum (the Forbidden City), a famous cultural site with a rich history, the virtual object can be an object deeply connected with it, increasing both interest and cultural value. For example, the virtual articles for visiting the Palace Museum can be set as the virtual imperial seals of the various dynasties, combining the collection of virtual imperial seals with the visit itself: when a tourist reaches the attraction position corresponding to a specific virtual imperial seal, an activity is triggered, and the tourist can collect the virtual imperial seal in the AR space of that attraction, or use it to interact with that AR space.
As an alternative embodiment, the article includes at least one of: a seal, an amulet, and an imperial jade seal. Stamping is a universal way of leaving a mark worldwide and has a long tradition in China; similar marking methods include a signature or a pattern designed by the user. An amulet carries the meaning of a protective charm and a symbol of safety, representing a good wish; using the environment-recognition capability of AR, a virtual amulet can be hung in the space together with a message. The imperial jade seal, as a special kind of seal, links well with Chinese cultural heritage.
As an alternative embodiment, the interaction between the object and the article may differ by article. In the case that the article includes a seal, the interactive operation between the object and the seal includes: controlling the seal to stamp in the AR display space. In the case that the article includes an amulet, the interactive operation between the object and the amulet includes: controlling the amulet to be hung in the AR display space. In the case that the article includes an imperial jade seal, the interactive operation between the object and the imperial jade seal includes: controlling the imperial jade seal to be displayed in the AR display space. It should be noted that the above articles (the seal, the amulet and the imperial jade seal) are only examples and not exhaustive. A dispatch over these article kinds is sketched below.
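A minimal sketch of dispatching on the article kind, using the three kinds named above; the function shape is an assumption for illustration.

```typescript
// The three article kinds from the text; the dispatch picks the interaction.
type ArticleKind = "seal" | "amulet" | "imperial-seal";

function interact(kind: ArticleKind, spaceId: string): string {
  switch (kind) {
    case "seal":          return `stamping the seal in ${spaceId}`;
    case "amulet":        return `hanging the amulet in ${spaceId}`;
    case "imperial-seal": return `displaying the imperial jade seal in ${spaceId}`;
  }
}

console.log(interact("amulet", "temple-AR-space"));
```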
An embodiment of the present invention further provides another interaction method. Fig. 3 is a flowchart of the second interaction method according to embodiment 1 of the present invention; as shown in fig. 3, the flow includes the following steps:
s302, displaying an augmented reality AR display space corresponding to the position of an object obtained based on an image acquired by a visual sensor;
s304, displaying a virtual object to be operated;
s306, displaying interactive operation of the object and the virtual object;
s308, displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
Through the above steps, by displaying the augmented reality (AR) display space corresponding to the position of the object, displaying the virtual object to be operated, displaying the interactive operation between the object and the virtual object, and displaying the interactive effect of the virtual object in the AR display space based on that interactive operation, the purpose of showing how operating the virtual object interacts with the information point is achieved. This realizes AR-based interaction with the information point with high playability, and improves the user experience, thereby solving the technical problem in the related art that visits are recorded simply by geographical position, making the recording mode monotonous, of low interest and playability, and unattractive to users. A sketch of this display-side flow follows.
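The display-side steps S302-S308 can be read as a fixed presentation pipeline; the sketch below is illustrative only, and the Display interface is an assumption.

```typescript
// Hypothetical display pipeline for steps S302-S308.
interface Display {
  show(content: string): void;
}

function runDisplayFlow(d: Display): void {
  d.show("AR display space for the current location");          // S302
  d.show("virtual object awaiting operation");                  // S304
  d.show("prompt: move the object to the outlined target");     // first prompt information
  d.show("detected gesture of the user");                       // S306
  d.show("virtual object moved to target: interactive effect"); // S308
}

runDisplayFlow({ show: (content) => console.log(content) });
```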
As an alternative embodiment, displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying first prompt information, where the first prompt information prompts that the virtual object is to be moved to a target position in the AR display space; displaying the detected gesture of the object; and showing the virtual object moving to the target position based on the gesture. Because the virtual object is moved to the target position in the AR display space based on the gesture, the user can freely choose how to interact with the AR display space: different users can leave different virtual objects at the same position, and the same virtual object can be left at different positions.
As an alternative embodiment, when the first prompt information is displayed, it may prompt that the virtual object is to be moved to the target position in the AR display space. The prompt information may be text, or other information such as an image, a contour, or voice. For example, fig. 4 is a schematic diagram of an imperial seal check-in process according to an alternative embodiment of the present invention. As shown in fig. 4, before the imperial seal 3D model is moved, the screen displays a position prompt for moving the model to the ground, so that the user knows the check-in effect in advance and can adjust the operation.
As an alternative embodiment, when displaying the detected gesture of the object and showing the virtual object moving to the target position based on that gesture, the gesture may take many forms. Fig. 5 is a schematic diagram of gestures according to an alternative embodiment of the present invention; as shown in fig. 5, the gesture and its operation on the virtual object can be displayed visually on the screen.
As an alternative embodiment, before displaying the detected gesture of the object, the method further includes: displaying second prompt information, where the second prompt information prompts the placement position of the gesture and/or a description of the gesture. As shown in fig. 5, the second prompt information may include various types such as text, graphics, contours, or voice.
As an optional embodiment, displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying a partial space of the AR display space, where the AR display space is displayed jointly by the connected devices of different objects; displaying a partial interactive operation on the partial space; and displaying the partial interactive effect of the virtual object in the partial space based on the partial interactive operation, where the partial interactive effects in the partial spaces displayed by the devices of the different objects combine into the interactive effect of the virtual object in the AR display space. Through this alternative embodiment, interaction between multiple objects in the AR display space can be achieved, for example between the devices of multiple users, whether two devices or more. The partial interaction may work in two ways: the devices each perform a partial operation on the same virtual object, and the virtual object reacts only once all partial operations are complete; or the virtual object responds with a partial interaction each time a partial operation is performed. A sketch of the first variant follows.
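A minimal sketch of combining partial operations from several devices into one full interactive effect, in the spirit of the two half-heart patterns completing a lock described later; the shapes are assumptions.

```typescript
// Hypothetical combination of partial operations from connected devices.
type PartialOp = { deviceId: string; part: string };

class CombinedInteraction {
  private parts = new Map<string, string>();
  constructor(private readonly required: string[]) {}
  // Returns true once every required part has been submitted, i.e.,
  // the full interactive effect can now be shown on all devices.
  submit(op: PartialOp): boolean {
    this.parts.set(op.part, op.deviceId);
    return this.required.every((p) => this.parts.has(p));
  }
}

const lock = new CombinedInteraction(["left-half-heart", "right-half-heart"]);
lock.submit({ deviceId: "phone-A", part: "left-half-heart" });
const closed = lock.submit({ deviceId: "phone-B", part: "right-half-heart" });
console.log(closed ? "lock closes on both devices" : "still waiting");
```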
As an optional embodiment, the interaction result of the virtual object in the AR display space may also be displayed, and the interaction result to be shared may be previewed. Through the preview, the user knows in advance what the recipient will see or hear, and can conveniently adjust the content before sharing, achieving a satisfying share.
In the following, an alternative embodiment of the present invention is described taking the case where the information point is a scenic spot and the AR display space is an AR scenic-spot space.
The imperial seal check-in process provided by this alternative embodiment is shown in fig. 4; checking in at a scenic area may include the following steps:
Step 1, when the scenic area is reached, a check-in prompt appears; step 2, the check-in route is viewed, including the tour route, the specific scenic spots on it, and brief introductions to those spots; step 3, when a specific scenic-spot position is reached, for example a particular spot in the Palace Museum, the AR function can be opened to start checking in; step 4, a virtual object and prompt information, such as a 3D model of the imperial seal and a ground position contour, appear in the AR scenic-spot space; step 5, the virtual object is operated and previewed, for example by rotating the imperial seal 3D model and choosing the stamping position, and the check-in starts. The imperial seal is unlocked after a successful check-in, i.e., the interaction between the virtual object and the AR scenic-spot space is completed; after sharing with friends, the friends can see the imperial seal model.
As shown in fig. 5, the device may detect and recognize gestures in various ways, for example conventional gestures such as rotating and dragging on the screen, or natural gestures such as picking up a virtual object with the hand.
Specifically, the interaction mode of the conventional gesture includes the following steps:
Step 1, the screen is clicked to start the interaction; step 2, the device detects a ground plane and opens the AR scenic-spot space through scanning; step 3, the orientation and angle of the virtual object are adjusted by dragging, sliding, clicking and other finger operations on the screen; step 4, the virtual object is dragged down, and this action completes the interaction between the virtual object and the AR scenic-spot space; step 5, the virtual object is placed, and the virtual mark left in the AR scenic-spot space by the interaction is displayed.
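A sketch of the conventional-gesture path just described: finger operations on the screen adjust the object's orientation and position, and dragging it down commits the placement. Event and field names are assumptions.

```typescript
// Hypothetical touch handling for the conventional-gesture interaction.
type ScreenEvent =
  | { type: "drag"; dx: number; dy: number }     // move the object
  | { type: "swipe"; dx: number }                // rotate the object
  | { type: "dragDown" };                        // commit the placement

type ObjectState = { x: number; y: number; yaw: number; committed: boolean };

function handleScreenEvent(s: ObjectState, e: ScreenEvent): ObjectState {
  switch (e.type) {
    case "drag":     return { ...s, x: s.x + e.dx, y: s.y + e.dy };
    case "swipe":    return { ...s, yaw: s.yaw + e.dx * 0.01 };
    // The drag-down completes the interaction, leaving the virtual mark.
    case "dragDown": return { ...s, committed: true };
  }
}

let state: ObjectState = { x: 0, y: 0, yaw: 0, committed: false };
state = handleScreenEvent(state, { type: "drag", dx: 10, dy: -5 });
state = handleScreenEvent(state, { type: "dragDown" });
console.log(state.committed ? "virtual mark left in the AR space" : "editing");
```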
For example, the interaction of natural gestures includes the following steps:
Step 1, the screen is clicked to start the interaction; step 2, the device detects a ground plane, opens the AR scenic-spot space through scanning, and detects the hand through the camera module, for example requiring the hand to be placed inside a virtual frame shown as prompt information; step 3, the virtual object in the AR scenic-spot space is operated by recognizing the hand movement, for example picked up according to the recognized gesture; step 4, the ground is detected and prompt information indicating the virtual object's target position is shown; and step 5, the virtual object is placed in the designated area according to the gesture and the prompt information.
Fig. 6 is a schematic view of hanging a virtual amulet according to an alternative embodiment of the invention. As shown in fig. 6, taking as an example a popular virtual amulet hung at a temple in Hangzhou, hanging the virtual amulet may include the following steps:
Step 1, a check-in prompt appears, and the check-in operation is started according to the prompt information; step 2, an amulet is selected, for example a virtual love amulet or a virtual study amulet, as the virtual object to interact with the scene; step 3, the environment is recognized, and the AR scenic-spot space is generated by scanning the scenic-spot environment; step 4, the amulet is hung, i.e., the virtual amulet is moved into the AR scenic-spot space through gestures, for example the love amulet is hung on a tree; and step 5, a message is left: after interacting with the virtual object, a message can be uploaded to the interaction result, in formats such as voice, text, video, or photos.
FIG. 7 is a diagram illustrating multi-device interaction in accordance with an alternative embodiment of the present invention. As shown in fig. 7, the multi-device interaction method may include the following steps:
Step 1, a check-in prompt appears, and the check-in operation is started according to the prompt information; step 2, the environment is scanned to find the corresponding position, with multiple devices scanning the scenic spot simultaneously to obtain their respective partial AR scenic-spot spaces; step 3, device interaction: partial operations are performed on parts of the virtual object on the respective devices to complete the interaction between the virtual object and the AR scenic-spot space. For example, an unclosed lock bearing a half-heart pattern is displayed on the first device, and the other half-heart pattern is displayed on the second device; after the half-heart pattern on the second device is operated, according to the prompt information, to join the half-heart pattern on the first device into a complete red heart pattern, the lock on the first device closes and the device interaction is completed; step 4, after the virtual object is placed, a message uploaded to the interaction result can be received, in formats such as voice, text, video, or photos.
Example 2
According to an embodiment of the present invention, there is further provided a first interaction device for implementing the first interaction method, and fig. 8 is a block diagram of a first interaction device according to an embodiment 2 of the present invention, as shown in fig. 8, the first interaction device 80 includes: a first acquisition module 82, a second acquisition module 84, and a first presentation module 86, which are described below with respect to the interaction apparatus 80.
The first acquisition module 82 is configured to obtain an augmented reality AR display space corresponding to a position where an object is located based on an image acquired by a visual sensor;
a second obtaining module 84, connected to the first obtaining module 82, for obtaining a virtual object associated with an information point corresponding to a position where the object is located;
the first display module 86 is connected to the second obtaining module 84, and configured to display an interaction effect of the virtual object in the AR display space based on operations of the object and the virtual object.
It should be noted that the first obtaining module 82, the second obtaining module 84 and the first displaying module 86 correspond to steps S202 to S206 in embodiment 1, and the three modules are the same as the corresponding steps in the implementation example and the application scenario, but are not limited to the disclosure in embodiment 1. It should be noted that the above modules may be operated in the computer terminal 10 provided in embodiment 1 as a part of the apparatus.
Example 3
According to an embodiment of the present invention, a second interaction device for implementing the second interaction method is further provided, and fig. 9 is a block diagram of a second interaction device according to embodiment 3 of the present invention, and as shown in fig. 9, the second interaction device 90 includes: a second display module 92, a third display module 94, a fourth display module 96 and a fifth display module 98, and the second interaction device 90 is described below.
A second display module 92, configured to display an augmented reality AR display space corresponding to a position of an object obtained based on an image acquired by a visual sensor;
a third display module 94, connected to the second display module 92, for displaying the virtual object to be operated;
a fourth display module 96, connected to the third display module 94, for displaying the interactive operation between the object and the virtual object;
and a fifth display module 98, connected to the fourth display module 96, for displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
It should be noted that the second display module 92, the third display module 94, the fourth display module 96 and the fifth display module 98 correspond to steps S302 to S308 in embodiment 1, and these modules are the same as the corresponding steps in the implementation examples and application scenarios, but are not limited to the disclosure in embodiment 1. The above modules, as parts of the apparatus, may be run in the computer terminal 10 provided in embodiment 1.
Example 4
The embodiment of the invention can provide a computer terminal which can be any computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps in the application program interaction method: obtaining an augmented reality AR display space corresponding to the position of an object based on an image acquired by a visual sensor; acquiring a virtual object associated with an information point corresponding to the position of an object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object and the virtual object.
Alternatively, fig. 10 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 10, the computer terminal may include: one or more (only one shown) processors 102, memory 104, and the like.
The memory may be configured to store software programs and modules, such as program instructions/modules corresponding to the interaction method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by operating the software programs and modules stored in the memory, so as to implement the interaction method. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory remotely located from the processor, which may be connected to the computer terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: obtaining an augmented reality AR display space corresponding to the position of an object based on an image acquired by a visual sensor; acquiring a virtual object associated with an information point corresponding to the position of an object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object and the virtual object.
Optionally, the processor may further execute the program code of the following steps: based on the interactive operation of the object and the virtual object, the interactive effect of the virtual object in the AR display space is displayed, which comprises the following steps: determining that the virtual object is to be moved to a target location in the AR presentation space; detecting a gesture action of an object; based on the gesture motion, the virtual object is shown moving to the target location.
Optionally, the processor may further execute the program code of the following steps: establishing connections between devices of different objects; displaying the interactive effect of the virtual object in the AR display space based on the interactive operation between the object and the virtual object includes: based on the interactive operations between the virtual object and the different objects whose devices are connected, displaying the interactive effect of the virtual object in the corresponding AR display space on each of those devices.
Optionally, the processor may further execute the program code of the following steps: and storing the interactive result of the virtual object in the AR display space, and sharing the interactive result.
Optionally, the processor may further execute the program code of the following steps: the virtual object includes: an article matched with the information point corresponding to the position of the object.
Optionally, the processor may further execute the program code of the following steps: the article includes at least one of: a seal, an amulet, and an imperial jade seal, wherein, in the case that the article includes a seal, the interactive operation between the object and the seal includes: controlling the seal to stamp in the AR display space; in the case that the article includes an amulet, the interactive operation between the object and the amulet includes: controlling the amulet to be hung in the AR display space; and in the case that the article includes an imperial jade seal, the interactive operation between the object and the imperial jade seal includes: controlling the imperial jade seal to be displayed in the AR display space.
The processor may call the information stored in the memory and the application program through the transmission device to execute the program code of the following steps: displaying an augmented reality (AR) display space corresponding to the position of an object, obtained based on an image acquired by a visual sensor; displaying a virtual object to be operated; displaying an interactive operation between the object and the virtual object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
Optionally, the processor may further execute the program code of the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying first prompt information, where the first prompt information prompts that the virtual object is to be moved to a target position in the AR display space; displaying the detected gesture of the object; and showing the virtual object moving to the target position based on the gesture.
Optionally, the processor may further execute the program code of the following steps: before displaying the gesture motion of the detected object, the method further comprises the following steps: and displaying second prompt information, wherein the second prompt information is used for prompting the placement position of the gesture action and/or the description of the gesture action.
Optionally, the processor may further execute the program code of the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying a partial space of the AR display space, where the AR display space is displayed jointly by the connected devices of different objects; displaying a partial interactive operation on the partial space; and displaying the partial interactive effect of the virtual object in the partial space based on the partial interactive operation, where the partial interactive effects in the partial spaces displayed by the devices of the different objects combine into the interactive effect of the virtual object in the AR display space.
Optionally, the processor may further execute the program code of the following steps: displaying an interactive result of the virtual object in the AR display space; and previewing the interaction result to be shared.
It can be understood by those skilled in the art that the structure shown in fig. 10 is only an illustration; the computer terminal may also be a terminal device such as a smartphone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. Fig. 10 does not limit the structure of the electronic device. For example, the computer terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 10, or have a different configuration than shown in fig. 10.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 5
The embodiment of the invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store the program code executed by the interaction method provided in embodiment 1.
Optionally, in this embodiment, the storage medium may be located in any one of computer terminals in a computer terminal group in a computer network, or in any one of mobile terminals in a mobile terminal group.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: obtaining an augmented reality AR display space corresponding to the position of an object based on an image acquired by a visual sensor; acquiring a virtual object associated with an information point corresponding to the position of an object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object and the virtual object.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: based on the interactive operation of the object and the virtual object, the interactive effect of the virtual object in the AR display space is displayed, which comprises the following steps: determining that the virtual object is to be moved to a target location in the AR presentation space; detecting a gesture action of an object; and displaying the virtual object to move to the target position based on the gesture action.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: establishing a connection between devices of different objects; displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object includes: displaying, on the devices of the different objects respectively, the interactive effect of the virtual object in the corresponding AR display space based on the interactive operations performed with the virtual object on the devices by the different objects whose devices are connected.
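The multi-device behavior can be pictured as a session that broadcasts each participant's interactive operation to every connected device, so all devices render the same interactive effect. The Session and Device classes below are illustrative assumptions, not the patent's mechanism:

```python
# Illustrative multi-device session: an interactive operation from any
# connected device is broadcast, so every device displays the effect.
class Device:
    def __init__(self, owner: str):
        self.owner = owner
        self.effects = []           # what this device currently displays

    def display(self, effect: str):
        self.effects.append(effect)

class Session:
    def __init__(self):
        self.devices = []

    def connect(self, device: Device):
        self.devices.append(device)

    def interact(self, actor: str, operation: str):
        effect = f"{actor}: {operation}"
        for device in self.devices:  # each device shows the same effect
            device.display(effect)

session = Session()
device_a, device_b = Device("object_a"), Device("object_b")
session.connect(device_a)
session.connect(device_b)
session.interact("object_a", "stamp the seal")
assert device_a.effects == device_b.effects
print(device_b.effects)
```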
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: storing the interaction result of the virtual object in the AR display space, and sharing the interaction result.
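Storing and sharing an interaction result might look like the following sketch; the JSON file layout and the share URL scheme are invented for illustration:

```python
import json
from pathlib import Path

# Illustrative only: persist the interaction result locally, then mint a
# shareable handle. The file layout and URL scheme are assumptions.
def store_result(result: dict, path: Path) -> Path:
    path.write_text(json.dumps(result))
    return path

def share_result(path: Path) -> str:
    # A real application would upload the file to a sharing service;
    # here a placeholder URL is returned instead.
    return f"https://example.com/share/{path.stem}"

result = {"object": "seal", "action": "stamped", "space": "AR-1"}
saved = store_result(result, Path("interaction_result.json"))
print("share link:", share_result(saved))
```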
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the virtual object includes: an article matched with the information point corresponding to the position of the object.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: the article includes at least one of: a seal, an amulet, and an imperial jade seal, wherein, in a case where the article includes the seal, the interactive operation of the object with the seal includes: controlling the seal to stamp in the AR display space; in a case where the article includes the amulet, the interactive operation of the object with the amulet includes: controlling the amulet to hang in the AR display space; and in a case where the article includes the imperial jade seal, the interactive operation of the object with the imperial jade seal includes: controlling the imperial jade seal to be displayed in the AR display space.
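The three article-specific interactions reduce to a dispatch from article type to its action in the AR display space, sketched below with placeholder action strings:

```python
# Sketch: dispatch from article type to its interactive action in the
# AR display space; the action strings are placeholders for rendering.
ARTICLE_ACTIONS = {
    "seal": "stamp in the AR display space",
    "amulet": "hang in the AR display space",
    "imperial jade seal": "be displayed in the AR display space",
}

def interact_with(article: str) -> str:
    try:
        return ARTICLE_ACTIONS[article]
    except KeyError:
        raise ValueError(f"no interaction defined for {article!r}")

for article in ("seal", "amulet", "imperial jade seal"):
    print(f"control the {article} to {interact_with(article)}")
```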
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps: displaying an augmented reality (AR) display space that corresponds to the position of an object and is obtained based on an image acquired by a visual sensor; displaying a virtual object to be operated; displaying the interactive operation of the object with the virtual object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying first prompt information, wherein the first prompt information is used for prompting that the virtual object is to be moved to a target position in the AR display space; displaying the detected gesture action of the object; and displaying the virtual object moving to the target position based on the gesture action.
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: before displaying the detected gesture action of the object, the method further includes: displaying second prompt information, wherein the second prompt information is used for prompting the placement position of the gesture action and/or a description of the gesture action.
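The two prompts can be read as a fixed sequence shown before gesture detection begins; the message templates below are illustrative only:

```python
# Illustrative prompt sequence shown before gesture detection begins.
def first_prompt(target: tuple) -> str:
    # Prompts that the virtual object is to be moved to `target`.
    return f"Move the virtual object to position {target}."

def second_prompt(placement: str, description: str) -> str:
    # Prompts the placement position and/or a description of the gesture.
    return f"Place your hand {placement} and {description}."

print(first_prompt((1.0, 0.5, 0.0)))
print(second_prompt("in front of the camera", "make a pinch gesture"))
```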
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying a partial space of the AR display space, wherein the AR display space is displayed jointly by the connected devices of different objects; displaying a partial interactive operation in the partial space; and displaying a partial interactive effect of the virtual object in the partial space based on the partial interactive operation, wherein the partial interactive effects displayed in the respective partial spaces by the devices of the different objects are combined into the interactive effect of the virtual object in the AR display space.
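The partial-space combination can be sketched as each connected device owning one sub-region of the shared space, with the union of the per-device partial effects forming the overall interactive effect; the left/right split below is an assumption:

```python
# Sketch: each connected device displays one partial space; the union of
# the partial interactive effects is the effect in the full AR space.
partial_spaces = {
    "device_a": {"region": "left half", "effects": set()},
    "device_b": {"region": "right half", "effects": set()},
}

def apply_partial_operation(device_id: str, effect: str) -> None:
    # Record a partial interactive effect in that device's partial space.
    partial_spaces[device_id]["effects"].add(effect)

apply_partial_operation("device_a", "seal stamped in left half")
apply_partial_operation("device_b", "amulet hung in right half")

# The combined effect of the virtual object in the AR display space:
combined = set().union(*(p["effects"] for p in partial_spaces.values()))
print(sorted(combined))
```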
Optionally, in this embodiment, the storage medium is further configured to store program code for performing the following steps: displaying an interaction result of the virtual object in the AR display space; and previewing the interaction result to be shared.
Example 6
An embodiment of the present invention further provides a computer program product. Optionally, in this embodiment, the computer program product may include computer programs/instructions which, when executed by a processor, implement the steps of the interaction method provided in Embodiment 1.
Optionally, in this embodiment, the computer program product may be executed by a processor of any computer terminal in a computer terminal group in a computer network, or by a processor of any mobile terminal in a mobile terminal group.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, implement the following steps: obtaining an augmented reality (AR) display space corresponding to the position of an object based on an image acquired by a visual sensor; acquiring a virtual object associated with an information point corresponding to the position of the object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object includes: determining that the virtual object is to be moved to a target position in the AR display space; detecting a gesture action of the object; and displaying the virtual object moving to the target position based on the gesture action.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: establishing a connection between devices of different objects; displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object includes: displaying, on the devices of the different objects respectively, the interactive effect of the virtual object in the corresponding AR display space based on the interactive operations performed with the virtual object on the devices by the different objects whose devices are connected.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: storing the interaction result of the virtual object in the AR display space, and sharing the interaction result.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: the virtual object includes: an article matched with the information point corresponding to the position of the object.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: the article includes at least one of: a seal, an amulet, and an imperial jade seal, wherein, in a case where the article includes the seal, the interactive operation of the object with the seal includes: controlling the seal to stamp in the AR display space; in a case where the article includes the amulet, the interactive operation of the object with the amulet includes: controlling the amulet to hang in the AR display space; and in a case where the article includes the imperial jade seal, the interactive operation of the object with the imperial jade seal includes: controlling the imperial jade seal to be displayed in the AR display space.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, implement the following steps: displaying an augmented reality (AR) display space that corresponds to the position of an object and is obtained based on an image acquired by a visual sensor; displaying a virtual object to be operated; displaying the interactive operation of the object with the virtual object; and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying first prompt information, wherein the first prompt information is used for prompting that the virtual object is to be moved to a target position in the AR display space; displaying the detected gesture action of the object; and displaying the virtual object moving to the target position based on the gesture action.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: before displaying the detected gesture action of the object, the method further includes: displaying second prompt information, wherein the second prompt information is used for prompting the placement position of the gesture action and/or a description of the gesture action.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: displaying the interactive effect of the virtual object in the AR display space based on the interactive operation includes: displaying a partial space of the AR display space, wherein the AR display space is displayed jointly by the connected devices of different objects; displaying a partial interactive operation in the partial space; and displaying a partial interactive effect of the virtual object in the partial space based on the partial interactive operation, wherein the partial interactive effects displayed in the respective partial spaces by the devices of the different objects are combined into the interactive effect of the virtual object in the AR display space.
Optionally, in this embodiment, the computer program/instructions included in the computer program product, when executed by the processor, further implement the following steps: displaying an interaction result of the virtual object in the AR display space; and previewing the interaction result to be shared.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations; however, those skilled in the art will recognize that the present invention is not limited by the order of the actions described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Furthermore, those skilled in the art will also recognize that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
The serial numbers of the above embodiments of the present invention are for description only and do not imply any ranking of the embodiments.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and refinements without departing from the principle of the present invention, and such modifications and refinements shall also fall within the protection scope of the present invention.
Claims (14)
1. An interaction method, comprising:
obtaining an augmented reality (AR) display space corresponding to the position of an object based on an image acquired by a visual sensor;
acquiring a virtual object associated with an information point corresponding to the position of the object;
and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object.
2. The method of claim 1, wherein displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object comprises:
determining that the virtual object is to be moved to a target position in the AR display space;
detecting a gesture action of the object;
and displaying, based on the gesture action, the virtual object moving to the target position.
3. The method of claim 1, wherein the method further comprises:
establishing a connection between devices of different objects;
wherein displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object comprises:
displaying, on the devices of the different objects respectively, the interactive effect of the virtual object in the corresponding AR display space based on the interactive operations performed with the virtual object on the devices by the different objects whose devices are connected.
4. The method of claim 1, wherein the method further comprises:
and storing an interaction result of the virtual object in the AR display space, and sharing the interaction result.
5. The method of any of claims 1 to 4, wherein the virtual object comprises: an article matched with the information point corresponding to the position of the object.
6. The method of claim 5, wherein the article comprises at least one of: a seal, an amulet, and an imperial jade seal, wherein,
in a case where the article comprises the seal, the interactive operation of the object with the seal comprises: controlling the seal to stamp in the AR display space;
in a case where the article comprises the amulet, the interactive operation of the object with the amulet comprises: controlling the amulet to hang in the AR display space;
and in a case where the article comprises the imperial jade seal, the interactive operation of the object with the imperial jade seal comprises: controlling the imperial jade seal to be displayed in the AR display space.
7. An interaction method, comprising:
displaying an augmented reality (AR) display space that corresponds to the position of an object and is obtained based on an image acquired by a visual sensor;
displaying a virtual object to be operated;
displaying the interactive operation of the object with the virtual object;
and displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
8. The method of claim 7, wherein displaying the interactive effect of the virtual object in the AR display space based on the interactive operation comprises:
displaying first prompt information, wherein the first prompt information is used for prompting that the virtual object is to be moved to a target position in the AR display space;
displaying the detected gesture action of the object;
and displaying, based on the gesture action, the virtual object moving to the target position.
9. The method of claim 8, wherein prior to displaying the detected gesture action of the object, the method further comprises:
and displaying second prompt information, wherein the second prompt information is used for prompting the placement position of the gesture action and/or the description of the gesture action.
10. The method of claim 7, wherein displaying the interactive effect of the virtual object in the AR display space based on the interactive operation comprises:
displaying a partial space of the AR display space, wherein the AR display space is displayed jointly by the connected devices of different objects;
displaying a partial interactive operation in the partial space;
and displaying a partial interactive effect of the virtual object in the partial space based on the partial interactive operation, wherein the partial interactive effects displayed in the respective partial spaces by the devices of the different objects are combined into the interactive effect of the virtual object in the AR display space.
11. The method of any of claims 7 to 10, wherein the method further comprises:
displaying an interaction result of the virtual object interacting in the AR display space;
and previewing the interaction result to be shared.
12. An interactive device, comprising:
the first acquisition module is used for acquiring an augmented reality (AR) display space corresponding to the position of an object based on an image acquired by a visual sensor;
the second acquisition module is used for acquiring a virtual object associated with the information point corresponding to the position of the object;
and the first display module is used for displaying the interactive effect of the virtual object in the AR display space based on the interactive operation of the object with the virtual object.
13. An interactive device, comprising:
the second display module is used for displaying an augmented reality (AR) display space that corresponds to the position of an object and is obtained based on an image acquired by the visual sensor;
the third display module is used for displaying the virtual object to be operated;
a fourth display module for displaying the interactive operation of the object and the virtual object;
and the fifth display module is used for displaying the interactive effect of the virtual object in the AR display space based on the interactive operation.
14. A computer program product comprising computer programs/instructions which, when executed by a processor, carry out the steps of the interaction method of any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011509842.1A CN114647303A (en) | 2020-12-18 | 2020-12-18 | Interaction method, device and computer program product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011509842.1A CN114647303A (en) | 2020-12-18 | 2020-12-18 | Interaction method, device and computer program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114647303A (en) | 2022-06-21 |
Family
ID=81990503
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011509842.1A Pending CN114647303A (en) | 2020-12-18 | 2020-12-18 | Interaction method, device and computer program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114647303A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120113223A1 (en) * | 2010-11-05 | 2012-05-10 | Microsoft Corporation | User Interaction in Augmented Reality |
US9767613B1 (en) * | 2015-01-23 | 2017-09-19 | Leap Motion, Inc. | Systems and method of interacting with a virtual object |
US20180113505A1 (en) * | 2016-10-26 | 2018-04-26 | Htc Corporation | Virtual reality interaction method, apparatus and system |
US20190196690A1 (en) * | 2017-06-23 | 2019-06-27 | Zyetric Virtual Reality Limited | First-person role playing interactive augmented reality |
CN110892364A (en) * | 2017-07-20 | 2020-03-17 | 高通股份有限公司 | Augmented reality virtual assistant |
US20200098179A1 (en) * | 2018-09-25 | 2020-03-26 | Disney Enterprises, Inc. | Systems and methods to provide a shared interactive experience across multiple presentation devices |
US20200117335A1 (en) * | 2018-10-15 | 2020-04-16 | Midea Group Co., Ltd. | System and method for providing real-time product interaction assistance |
CN111696215A (en) * | 2020-06-12 | 2020-09-22 | 上海商汤智能科技有限公司 | Image processing method, device and equipment |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115049441A (en) * | 2022-07-26 | 2022-09-13 | 星河视效科技(北京)有限公司 | Method and device for acquiring virtual article based on interactive terminal and electronic equipment |
CN115442658A (en) * | 2022-08-04 | 2022-12-06 | 珠海普罗米修斯视觉技术有限公司 | Live broadcast method and device, storage medium, electronic equipment and product |
CN115442658B (en) * | 2022-08-04 | 2024-02-09 | 珠海普罗米修斯视觉技术有限公司 | Live broadcast method, live broadcast device, storage medium, electronic equipment and product |
Similar Documents
Publication | Title |
---|---|
US10984575B2 | Body pose estimation |
US10839605B2 | Sharing links in an augmented reality environment |
CN104461318B | Reading method based on augmented reality and system |
US20210248373A1 | Skeletal tracking using previous frames |
CN110716645A | Augmented reality data presentation method and device, electronic equipment and storage medium |
CN110908504B | Augmented reality museum collaborative interaction method and system |
CN110865708B | Interaction method, medium, device and computing equipment of virtual content carrier |
CN105468142A | Interaction method and system based on augmented reality technique, and terminal |
CN111651047B | Virtual object display method and device, electronic equipment and storage medium |
KR20240090542A | Mirror-based augmented reality experience |
CN107077749A | Optimize the visual display of media |
CN110473293A | Virtual objects processing method and processing device, storage medium and electronic equipment |
CN106982240A | The display methods and device of information |
CN114647303A | Interaction method, device and computer program product |
CN112933606A | Game scene conversion method and device, storage medium and computer equipment |
WO2022247181A1 | Game scene processing method and apparatus, storage medium, and electronic device |
US11656835B1 | Systems and methods for spatial conversion and synchronization between geolocal augmented reality and virtual reality modalities associated with real-world physical locations |
CN106156237A | Information processing method, information processor and subscriber equipment |
CN111899349A | Model presentation method and device, electronic equipment and computer storage medium |
CN111640190A | AR effect presentation method and apparatus, electronic device and storage medium |
CN108092950B | AR or MR social method based on position |
KR102369019B1 | Syetem and method for sharing augmented reality contents |
Raposo et al. | Revisiting the city, augmented with digital technologies: the SeeARch tool |
CN109510752A | Information displaying method and device |
US12136158B2 | Body pose estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |