CN118466741A - Virtual scene interaction method and related device - Google Patents

Virtual scene interaction method and related device

Info

Publication number
CN118466741A
CN118466741A (application CN202310152989.7A)
Authority
CN
China
Prior art keywords
virtual
virtual object
view field
reality scene
virtual reality
Prior art date
Legal status
Pending
Application number
CN202310152989.7A
Other languages
Chinese (zh)
Inventor
邬文捷
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202310152989.7A
Publication of CN118466741A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25: Output arrangements for video game devices
    • A63F13/28: Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual scene interaction method and a related device, which can be applied to scenarios such as digital humans, virtual humans, games, virtual reality and augmented reality. A first view field picture of a virtual object is displayed, the first view field picture including the content that the virtual object observes in a first view field in a virtual reality scene. When the user needs to move in the virtual reality scene, the user inputs a touch gesture operation through an interaction controller. If a touch gesture operation for the interaction controller is acquired, the virtual object is controlled, in response to the touch gesture operation, to move in the virtual reality scene, and a second view field picture of the virtual object is displayed, the second view field picture including the content that the virtual object observes in a second view field in the virtual reality scene, the second view field being the view field obtained after the virtual object moves. Gestures therefore do not need to be recognized by a visual algorithm, the response is faster than gesture recognition, the efficient execution of motion control instructions is effectively ensured, and the fluency of the experience is improved.

Description

Virtual scene interaction method and related device
Technical Field
The application relates to the technical field of virtual reality, in particular to an interaction method and a related device of a virtual scene.
Background
Extended Reality (XR) refers to a virtual-real combined, human-computer interactive environment created by computer technology and wearable devices. XR is a generic term for Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR); by fusing the visual interaction technologies of the three, it brings the user an "immersive" experience with seamless transitions between the virtual world and the real world. With the advent of the information age in recent years, XR technology has been widely used in fields such as games, education, live broadcasting, medical treatment, video and industrial manufacturing, bringing great convenience and enjoyment to people's lives.
In an XR scene, motion control requirements such as movement or steering occur frequently. At present, motion control is mainly performed through gestures. For example, in a VR scene included in XR, sequence images of bare-hand gestures and actions can be captured by the built-in camera of a VR head display, and the sequence images are analyzed and judged, so that after a predefined movement/steering gesture is recognized, the VR head display executes the corresponding motion control instruction to complete the movement/steering.
However, gestures need to be recognized by a visual algorithm, and a multi-step process is required to complete gesture recognition, so a gesture-based motion control instruction has a time delay, which affects the response speed of the motion control instruction and the smoothness of the experience.
Disclosure of Invention
To solve the above technical problems, the application provides a virtual scene interaction method and a related device that do not need to recognize gestures by a visual algorithm, which greatly simplifies the response flow of motion control, responds faster than gesture recognition, effectively ensures the efficient execution of motion control instructions and improves the fluency of the experience.
The embodiment of the application discloses the following technical scheme:
In one aspect, an embodiment of the present application provides a method for interaction of virtual scenes, where the method includes:
Displaying a first view field picture of a virtual object, wherein the first view field picture comprises content observed by the virtual object in a first view field in a virtual reality scene;
If a touch gesture operation for the interaction controller is acquired, controlling the virtual object, in response to the touch gesture operation, to move in the virtual reality scene, and displaying a second view field picture of the virtual object, wherein the second view field picture comprises content observed by the virtual object in the second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
In one aspect, an embodiment of the present application provides an interaction system for a virtual scene, where the system includes an interaction device for the virtual scene and an interaction controller, and the interaction device for the virtual scene and the interaction controller are connected through a network:
the interaction device of the virtual scene is used for displaying a first view field picture of the virtual object, wherein the first view field picture comprises content observed by the virtual object in a first view field in the virtual reality scene;
The interaction controller is used for inputting touch gesture operations;
The interaction device of the virtual scene is further configured to, if a touch gesture operation for the interaction controller is acquired, respond to the touch gesture operation, control the virtual object to move in the virtual reality scene, and display a second view field picture of the virtual object, where the second view field picture includes content observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
In one aspect, an embodiment of the present application provides an interaction device for a virtual scene, where the device includes a display unit and a control unit:
the display unit is used for displaying a first view field picture of the virtual object, wherein the first view field picture comprises contents observed by the virtual object in a first view field in a virtual reality scene;
The control unit is used for, if a touch gesture operation for the interaction controller is acquired, responding to the touch gesture operation and controlling the virtual object to move in the virtual reality scene;
The display unit is further configured to display a second view field picture of the virtual object, where the second view field picture includes content observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
In one aspect, an embodiment of the present application provides an interaction device for a virtual scene, where the interaction device for a virtual scene includes a processor and a memory:
The memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of any of the preceding aspects according to instructions in the program code.
In one aspect, embodiments of the present application provide a computer readable storage medium for storing program code which, when executed by a processor, causes the processor to perform the method of any one of the preceding aspects.
In one aspect, embodiments of the present application provide a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the preceding aspects.
According to the above technical solution, a first view field picture of the virtual object can be displayed, the first view field picture including the content that the virtual object observes in a first view field in the virtual reality scene. When the user needs to control the virtual object to move in the virtual reality scene so that the user immersively feels that he or she is moving, the user can input a touch gesture operation through the interaction controller. Since the touch gesture operation only requires the user to touch the interaction controller, the user hardly needs to exert any force, which saves effort and greatly reduces the user's physical consumption. If a touch gesture operation for the interaction controller is acquired, the virtual object is controlled, in response to the touch gesture operation, to move in the virtual reality scene, and a second view field picture of the virtual object is displayed, the second view field picture including the content that the virtual object observes in a second view field in the virtual reality scene, the second view field being the view field obtained after the virtual object moves. In the application, the interaction controller serves as the input device for the touch gesture operation; it can directly receive the touch gesture operation and then trigger the motion control instruction that controls the virtual object to move, so gestures do not need to be recognized by a visual algorithm, the response flow of motion control is greatly simplified, the response is faster than gesture recognition, the efficient execution of motion control instructions is effectively ensured, and the fluency of the experience is improved. In addition, since gestures do not need to be recognized by a visual algorithm, computing resources and power consumption can be saved, and the discomfort caused to the user by the heat dissipation of the interaction device of the virtual scene can be effectively reduced.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings required in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the application, and that a person skilled in the art may obtain other drawings from these drawings without inventive effort.
FIG. 1 is an exemplary diagram of gesture-based motion control provided by an embodiment of the present application;
FIG. 2 is an application scene architecture diagram of a virtual scene interaction method provided by an embodiment of the present application;
FIG. 3 is a flowchart of a virtual scene interaction method provided by an embodiment of the present application;
FIG. 4 is an exemplary diagram of displaying a first view field picture based on a head display device provided by an embodiment of the present application;
FIG. 5 is an exemplary diagram of an interaction controller in the form of a handle provided by an embodiment of the present application;
FIG. 6 is an exemplary diagram of an interaction controller in the form of a joystick provided by an embodiment of the present application;
FIG. 7 is an exemplary diagram of a user performing a touch gesture operation based on an interaction controller provided by an embodiment of the present application;
FIG. 8 is an exemplary diagram of moving from a first position to a second position provided by an embodiment of the present application;
FIG. 9 is an exemplary diagram of emitting an arc-shaped transmission line provided by an embodiment of the present application;
FIG. 10a is an exemplary diagram of a user performing a first sliding operation of sliding up with the thumb provided by an embodiment of the present application;
FIG. 10b is an exemplary diagram of the ring vibrating when the arc-shaped transmission line intersects the ground provided by an embodiment of the present application;
FIG. 11 is an exemplary diagram of the end of a touch operation provided by an embodiment of the present application;
FIG. 12 is an exemplary diagram of a user performing a second sliding operation of sliding left with the thumb provided by an embodiment of the present application;
FIG. 13 is an exemplary diagram of controlling a virtual object to rotate to the left by a target angle provided by an embodiment of the present application;
FIG. 14 is an exemplary diagram of the ring vibrating when a rotation is completed provided by an embodiment of the present application;
FIG. 15 is an exemplary diagram of the attention ordering of different operations provided by an embodiment of the present application;
FIG. 16 is a flowchart of various hardware cooperating to complete a virtual scene interaction method provided by an embodiment of the present application;
FIG. 17 is an exemplary diagram of an implementation flow of instantaneous movement provided by an embodiment of the present application;
FIG. 18 is an exemplary diagram of an implementation flow of steering provided by an embodiment of the present application;
FIG. 19 is a block diagram of an interaction device for a virtual scene provided by an embodiment of the present application;
FIG. 20 is a block diagram of a terminal provided by an embodiment of the present application;
FIG. 21 is a block diagram of a server provided by an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
With the advent of the information age, XR technology has been widely used in fields such as games, education, live broadcasting, medical treatment, video and industrial manufacturing, bringing great convenience and enjoyment to people's lives.
XR is an emerging concept that refers to a virtual-real combined, human-computer interactive environment created by computer technology and wearable devices. XR is a generic term for VR, AR and MR; by fusing the visual interaction technologies of the three, it brings the user an "immersive" experience with seamless transitions between the virtual world and the real world.
VR uses devices to simulate and create a fully virtual world; by wearing a VR device the user enters the virtual world and obtains an immersive experience.
AR superimposes virtual information on the real world, combining real-world scenes with virtual-world scenes, and can be experienced through electronic devices such as smart phones and tablet computers. For example, a real-world scene is scanned by the camera of a smart phone; if processing information about a table is embedded in the AR program, then when the table is scanned, the screen of the smart phone may display, in addition to the table itself, "augmented" information such as the table's price and production date.
MR is a fused form of VR and AR that can fuse the real world and the virtual world to generate a new visual environment, and the generated virtual objects can interact with the real world in real time. Taking a virtual cartoon character generated by MR technology as an example, the virtual cartoon character can react in real time to real-world scenes, for example detouring around water, jumping onto a desk, or lying on a sofa.
In XR scenes, interaction requirements often arise. Here an interaction requirement may refer to the requirement of controlling motion such as movement or steering; that is, the interaction realized by the embodiments of the application mainly refers to motion control. Movement may refer to moving from one position of the virtual reality scene to another, and may include, for example, instantaneous movement, gradual movement, etc. Instantaneous movement is one of the common ways of moving: the user performs an interactive operation that instantaneously transports him or her from the starting point to the desired position. Steering may refer to the user manipulating, through the interaction controller, the rotation of the virtual lens (or of the virtual object representing the user's perspective) in the virtual reality scene, completing the angular change of the field of view in three-dimensional space.
In one possible implementation, motion control may be performed through gestures. As shown in fig. 1, fig. 1 shows an exemplary diagram of a gesture-based motion control process, taking instantaneous movement as an example. Referring to the drawing identified by (a) in fig. 1, the last three fingers (middle finger, ring finger and little finger) remain curled while the index finger and thumb are straightened, and the arc-shaped transmission line for instantaneous movement appears; referring to the drawing identified by (b) in fig. 1, the index finger is curled to trigger the instantaneous-movement instruction; as shown in the drawing identified by (c) in fig. 1, the trigger succeeds, the arc-shaped transmission line disappears, and the virtual object is instantaneously displaced to the corresponding position; referring to the drawing identified by (d) in fig. 1, the index finger straightens again and the arc-shaped transmission line reappears, supporting the next instantaneous movement.
However, gestures need to be recognized by a visual algorithm, and a multi-step process is required to complete gesture recognition, so a gesture-based motion control instruction has a time delay, which affects the response speed of the motion control instruction and the smoothness of the experience.
To solve the above technical problems, an embodiment of the application provides a virtual scene interaction method. When a user needs to control a virtual object to move in a virtual reality scene so that the user immersively feels that he or she is moving, the user can input a touch gesture operation through the interaction controller. In the application, the interaction controller serves as the input device for the touch gesture operation; it can directly receive the touch gesture operation and then trigger the motion control instruction that controls the virtual object to move, so gestures do not need to be recognized by a visual algorithm, the response flow of motion control is greatly simplified, the response is faster than gesture recognition, the efficient execution of motion control instructions is effectively ensured, and the fluency of the experience is improved. In addition, since the touch gesture operation only requires the user to touch the interaction controller, the user hardly needs to exert any force, which saves effort and greatly reduces the user's physical consumption.
It should be noted that the virtual scene interaction method provided by the embodiment of the application can be applied to scenarios such as digital humans, virtual humans, games, virtual reality and extended reality, and is used to control a virtual object to move in a virtual reality scene so that the user immersively feels that he or she is moving.
The virtual scene interaction method provided by the embodiment of the application can be executed by an interaction device of the virtual scene, and the interaction device of the virtual scene may be a terminal, a server, or a combination of the two. Terminals include, but are not limited to, smart phones, computers, intelligent voice interaction devices, smart appliances, vehicle-mounted terminals, aircraft, XR devices (e.g., head-mounted devices), and the like. The server may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud computing services.
As shown in fig. 2, fig. 2 shows an application scene architecture diagram of the virtual scene interaction method; the application scene is described by taking the case in which the interaction device of the virtual scene is a head display device as an example.
The application scene may include the head display device 100 and the interaction controller 200, and the head display device 100 and the interaction controller 200 may constitute an interaction system of the virtual scene. The head display device 100 is used to provide the view field picture of the virtual reality scene; the interaction controller 200 is connected to the head display device 100 through a network, serves as the input device for touch gesture operations, and is used to manipulate the virtual object to move in the virtual reality scene. The virtual reality scene may be a three-dimensional (3D) room. The network may be a wired network or a wireless network, and the wireless network may be, for example, Bluetooth, a fourth-generation mobile communication (4G) network, a fifth-generation mobile communication (5G) network, a 2.4G network (i.e., a wireless network with a transmission frequency of 2.4 GHz), etc., which is not limited in the embodiment of the present application.
In use, the user may wear the head display device 100, and the head display device 100 may display the first view field picture of the virtual object, so that the first view field picture can be presented to the user for viewing. The first view field picture may include the content that the virtual object observes in the first view field in the virtual reality scene, and the virtual object may be the presentation of the user in the virtual reality scene. Since the first view field picture is a first-person-perspective picture, the first view field picture may not display the virtual object, or may display only a part of the virtual object, for example the hand that manipulates the interaction controller.
When the user needs to control the virtual object to move in the virtual reality scene so that the user feels that he or she is moving, the user may input a touch gesture operation through the interaction controller 200. Since the touch gesture operation only requires the user to touch the interaction controller, the user hardly needs to exert any force, which saves effort and greatly reduces the user's physical consumption.
If the head display device 100 acquires a touch gesture operation for the interaction controller 200, the head display device 100 controls, in response to the touch gesture operation, the virtual object to move in the virtual reality scene, and displays the second view field picture of the virtual object. The second view field picture includes the content that the virtual object observes in the second view field in the virtual reality scene, and the second view field is the view field obtained after the virtual object moves, so that the user feels that he or she has moved and observes the content he or she wants to see. Since the interaction controller 200 serves as the input device for the touch gesture operation, directly receives the touch gesture operation and then triggers the motion control instruction that controls the virtual object to move, gestures do not need to be recognized by a visual algorithm and the response flow of motion control is greatly simplified; compared with gesture recognition, the response is real-time, which effectively ensures the efficient execution of motion control instructions and improves the smoothness of the experience.
Next, a detailed description will be given of an interaction method of a virtual scene provided by an embodiment of the present application with reference to the accompanying drawings. Referring to fig. 3, fig. 3 shows a flowchart of a method of interaction of a virtual scene, the method comprising:
s301, displaying a first view field picture of a virtual object, wherein the first view field picture comprises contents observed by the virtual object in a first view field in a virtual reality scene.
The interaction device of the virtual scene can provide the view field picture of the virtual reality scene by applying XR technology, and the interaction controller serves as the input device for touch gesture operations, is connected to the interaction device of the virtual scene through a network, and is used to control the virtual object to move in the virtual reality scene. In one possible implementation, the interaction device of the virtual scene may be a head display device; it may of course also be another device capable of applying XR technology for display and interaction, which is not limited by the embodiment of the present application. The embodiment of the application is mainly described by taking the case in which the interaction device of the virtual scene is a head display device as an example.
It should be noted that the interaction device of the virtual scene may include a display screen and a data computation and processing system. In use, the interaction device of the virtual scene may display the first view field picture of the virtual object through the display screen, so as to present the first view field picture to the user for viewing. The first view field picture may include the content that the virtual object observes in the first view field in the virtual reality scene, and the first view field may be the view field obtained, at the current azimuth angle, from the current position of the virtual object in the virtual reality scene.
When the interaction device of the virtual scene is a head display device, the user can wear the head display device, and the head display device can display the first view field picture of the virtual object, so that the first view field picture can be presented to the user for viewing. Referring to fig. 4, in fig. 4, 401 denotes a user, i.e., a user in the real world, 402 denotes the head display device, and 403 denotes the first view field picture that the user sees through the head display device, which includes the content that the virtual object observes in the first view field in the virtual reality scene, for example a virtual item such as the cube in fig. 4.
It will be appreciated that, since the first view field picture is a first-person-perspective picture, the first view field picture may not display the virtual object, or may display only a part of the virtual object, such as the hand that manipulates the interaction controller.
S302, if a touch gesture operation for the interaction controller is acquired, controlling the virtual object, in response to the touch gesture operation, to move in the virtual reality scene, and displaying a second view field picture of the virtual object, wherein the second view field picture comprises content observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
When the user needs to control the virtual object to move in the virtual reality scene so that the user feels that he or she is moving, the user can input a touch gesture operation through the interaction controller. If the interaction device of the virtual scene acquires the touch gesture operation for the interaction controller, it controls, in response to the touch gesture operation, the virtual object to move in the virtual reality scene. Since the movement changes the view field of the virtual object, the view field picture observed by the user changes, so the interaction device of the virtual scene can display the second view field picture of the virtual object, where the second view field picture includes the content that the virtual object observes in the second view field in the virtual reality scene, and the second view field is the view field obtained after the virtual object moves.
Here, motion may refer to movement or steering. Movement may refer to moving from one position of the virtual reality scene to another, and may include, for example, instantaneous movement, gradual movement, etc. Instantaneous movement is one of the common ways of moving: the user performs an interactive operation that instantaneously transports him or her from the starting point to the desired position. Steering may refer to the user manipulating, through the interaction controller, the rotation of the virtual lens (or of the virtual object representing the user's perspective) in the virtual reality scene, completing the angular change of the field of view in three-dimensional space.
Since the touch gesture operation only requires the user to touch the interaction controller, the user hardly needs to exert any force, which saves effort and greatly reduces the user's physical consumption.
In one possible implementation, in order to enhance the user's sense of immersion and simulate the user's motion more realistically through the motion of the virtual object, the interaction device of the virtual scene may further include a speaker, through which auditory feedback such as footstep sounds is provided; the user then hears the footstep sounds of the virtual object's motion, the user's motion is simulated more realistically, and the sense of presence is enhanced.
The interaction controller is a device that receives the user's touch gesture operations in order to manipulate the virtual object to move in the virtual reality scene, and it may be designed in different forms. In one possible implementation, in order to free both of the user's hands, the interaction controller may be a wearable interaction controller.
In contrast, in other implementations the interaction controller may take the form of a handle or a joystick: an interaction controller in the form of a handle is shown in fig. 5, and an interaction controller in the form of a joystick is shown in fig. 6; both need to be held. The wearable interaction controller can be worn directly, does not occupy both hands with a holding action in prop-free situations, weakens the sense that a "machine" is present in the human-computer interaction, and preserves the natural feel of the motion as much as possible.
It should be noted that the wearable interaction controller may also be worn in various forms, such as on the wrist or on a finger. When worn on the wrist, the wearable controller may be in the form of a bracelet; when worn on a finger, the wearable controller may be in the form of a ring, a finger sleeve, a glove, etc. The embodiment of the application is mainly described by taking the ring as an example.
When the wearable controller is in the form of a ring, it can be worn on different fingers and at corresponding positions. In one possible implementation, the ring serving as the wearable interaction controller can be worn at the second joint of the index finger, which makes it convenient for the user to perform touch gesture operations with the thumb and further enhances the sense of presence.
It should be noted that the interaction device of the virtual scene may be connected to the ring through a 2.4G network; in this case, if a hand is displayed on the display screen of the interaction device of the virtual scene, a model of the ring will appear on the display screen.
It is understood that a touch gesture operation is an operation that requires the user to touch the interaction controller but hardly requires the user to exert force; for example, the touch gesture operation may be a sliding operation, a clicking operation, a double-clicking operation, a touch operation or other operations, which is not limited by the embodiment of the present application. Since the touch gesture operation only requires the user to touch the interaction controller, the user hardly needs to exert any force, which saves effort and greatly reduces the user's physical consumption.
It should be noted that, when the interaction controller is used to control the motion of the virtual object, the key is being able to sense the touch gesture operation, and how the touch gesture operation is sensed is related to the device configuration of the interaction controller. In general, a photoelectric sensing module (Optical Finger Navigation module, OFN), a thermal sensing module or the like can each form an interaction controller capable of sensing touch gesture operations, which is not limited in this embodiment; the embodiment of the application is mainly described by taking the OFN as an example.
The OFN may emit infrared light through Light-Emitting Diodes (LEDs) installed around the sensing area to illuminate the finger; part of the infrared light is reflected back and returns to the sensing area through the filter cover and the lens; the sensing area quantifies the reflected infrared light into photoelectric sensing data, so that whether a touch gesture operation exists, and the type of the touch gesture operation, can be determined from the photoelectric sensing data obtained by the sensing area. In this case, the interaction controller includes a photoelectric sensing module, and the manner of acquiring the touch gesture operation for the interaction controller in S302 may be to acquire the photoelectric sensing data from the photoelectric sensing module of the interaction controller and then determine, from the photoelectric sensing data, that the touch gesture operation has been acquired.
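As an illustrative sketch only (not part of the original disclosure), the decision of whether a touch gesture operation exists could be made from quantified photoelectric sensing data roughly as follows; the OfnFrame format, the surface_quality field and the threshold value are assumptions introduced purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class OfnFrame:
    """One report from the photoelectric sensing module (assumed format):
    surface_quality approximates how much reflected infrared light reached the
    sensing area, and dx/dy are relative finger displacements since the last report."""
    surface_quality: int
    dx: int
    dy: int

# Assumed threshold: below this, too little infrared light is reflected,
# so no finger is considered to be on the sensing area.
TOUCH_QUALITY_THRESHOLD = 20

def touch_present(frame: OfnFrame) -> bool:
    """Decide from one frame of photoelectric sensing data whether a touch exists."""
    return frame.surface_quality >= TOUCH_QUALITY_THRESHOLD

# Example: a frame with strong reflection counts as a touch gesture operation.
assert touch_present(OfnFrame(surface_quality=85, dx=0, dy=2))
```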
It may be appreciated that the motion of the virtual object in the virtual reality scene may include multiple types, and different touch gesture operations may then be set for different motions, so that the virtual object is controlled to perform different motions according to different touch gesture operations. For example, a clicking operation controls the virtual object to move and a sliding operation controls the virtual object to steer; as another example, sliding operations control the virtual object to move, but different sliding operations control it to move differently: a first sliding operation controls the virtual object to move (e.g., instantaneous movement) and a second sliding operation controls the virtual object to steer.
It should be noted that, since movement or steering may have different directions, the touch gesture operations may be further subdivided to control movement or steering in different directions. For example, when the touch gesture operation is the first sliding operation, it may control the virtual object to move (for example, instantaneous movement); the first sliding operation may be a sliding operation in different directions, for example sliding up or sliding down, where sliding up may control the virtual object to advance and sliding down may control the virtual object to retreat, which is not limited by the embodiment of the present application. When the interaction controller is a ring, sliding up may refer to sliding in the direction away from the palm, and sliding down may refer to sliding in the direction towards the palm. Referring to fig. 7, 701 in fig. 7 denotes a user, i.e., a user in the real world, and 702 denotes the interaction controller; for clarity of illustration, a partially enlarged view of the ring worn on the user's index finger (i.e., the interaction controller) is shown in the dotted circle in fig. 7, and the user can perform the first sliding operation of sliding up with the thumb, see the direction indicated by the arrow in fig. 7.
When the touch gesture operation is the second sliding operation, the second sliding operation may be a sliding operation in different directions, for example sliding left or sliding right, where sliding left may control the virtual object to turn to the left and sliding right may control the virtual object to turn to the right, which is not limited by the embodiment of the present application. Sliding left means sliding towards the user's left, and sliding right means sliding towards the user's right.
In order to detect different touch gesture operations and thereby accurately control the virtual object to perform different motions, taking the touch gesture operation being a sliding operation as an example, after the interaction device of the virtual scene, such as the head display device, acquires the photoelectric sensing data, the manner of determining from the photoelectric sensing data that a touch gesture operation has been acquired may be to calculate the direction and speed of the finger's sliding from the photoelectric sensing data, and then output the direction and speed of the finger's sliding in the form of relative coordinates, so as to determine in which direction the sliding operation was performed and accurately control the virtual object to perform the corresponding motion.
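A minimal sketch, assuming the relative-coordinate output described above, of turning accumulated (dx, dy) reports into a slide direction and speed might look like this; the axis convention, the threshold and the function name are illustrative assumptions rather than the patent's implementation.

```python
import math

# Assumed minimum accumulated displacement (in sensor counts) before a slide
# direction is reported; smaller movements are treated as noise.
SLIDE_THRESHOLD = 15

def classify_slide(samples: list[tuple[int, int]], duration_s: float) -> tuple[str | None, float]:
    """Turn a sequence of relative-coordinate reports (dx, dy) into a slide
    direction ('up', 'down', 'left', 'right' or None) and a speed estimate.

    The axis convention (positive y pointing away from the palm) is an
    assumption made for illustration; a real ring would define its own axes.
    """
    total_dx = sum(dx for dx, _ in samples)
    total_dy = sum(dy for _, dy in samples)
    distance = math.hypot(total_dx, total_dy)
    speed = distance / duration_s if duration_s > 0 else 0.0

    if distance < SLIDE_THRESHOLD:
        return None, speed
    if abs(total_dy) >= abs(total_dx):
        return ("up" if total_dy > 0 else "down"), speed
    return ("right" if total_dx > 0 else "left"), speed

# Example: a mostly-vertical positive displacement is reported as an up slide.
direction, speed = classify_slide([(1, 6), (0, 7), (-1, 5)], duration_s=0.12)
assert direction == "up"
```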
In the embodiment of the application, a light, effort-saving and efficient motion control scheme can be built around the photoelectric sensing module. Touch gesture operations on the photoelectric sensing module shorten the distance between the finger and the interaction controller, make common motion control operations more intuitive, let high-frequency motion control operations gradually fade out of the user's awareness, and further improve the immersion of the motion control experience. In addition, the touch gesture operation is recognized from the photoelectric sensing data generated by the photoelectric sensing module rather than by a visual algorithm; compared with gesture recognition the response is more real-time, which effectively ensures the efficient and stable execution of motion instructions.
Next, how the virtual object is controlled to move according to different touch gesture operations will be described in detail, mainly taking the touch gesture operation including a sliding operation as an example. In one possible implementation, the touch gesture operation includes a first sliding operation. In S302, if the touch gesture operation for the interaction controller is acquired, the manner of controlling the virtual object, in response to the touch gesture operation, to move in the virtual reality scene may be as follows: if the first sliding operation for the interaction controller is acquired, which indicates that the user needs to control the virtual object to move, the interaction device of the virtual scene may control, in response to the first sliding operation, the virtual object to move from a first position of the virtual reality scene to a second position, where the first position corresponds to forming the first view field and the second position corresponds to forming the second view field. The movement from the first position to the second position may be instantaneous movement or gradual movement, which is not limited by the embodiment of the present application.
Taking the first sliding operation being a sliding-up operation as an example, as shown in fig. 8, the drawing identified by (a) in fig. 8 is an exemplary diagram of the moment when the user performs the first sliding operation (i.e., the virtual object is at the first position), where 801 denotes the ring (i.e., the interaction controller) worn on the user's index finger; for clarity of illustration, a partially enlarged view of the ring worn on the index finger is shown in the dotted circle of this drawing, and the user may perform the first sliding operation of sliding up with the thumb, see the direction indicated by the arrow in the drawing identified by (a); 802 denotes the first view field picture that the user sees through the head display device, which includes the content that the virtual object observes in the first view field in the virtual reality scene, such as the cube. The first sliding operation of sliding up indicates that the virtual object needs to be controlled to advance, i.e., to move towards the cube in 802, so the interaction device of the virtual scene may control, in response to the first sliding operation, the virtual object to move from the first position of the virtual reality scene to the second position; the drawing identified by (b) in fig. 8 is an exemplary diagram of the virtual object having moved to the second position, and at this time the user perceives himself or herself as moving towards the cube in 802.
In some cases, when controlling the virtual object to move in the virtual reality scene, the user generally wants to control the virtual object to move to the position he or she intends it to reach. In this case, to make it easy for the user to see whether the position after the movement (for example, the second position) is the intended position, the second position may be indicated by an arc-shaped transmission line. In S302, if the first sliding operation for the interaction controller is acquired, the manner of controlling the virtual object, in response to the first sliding operation, to move from the first position of the virtual reality scene to the second position may be as follows: if the first sliding operation is acquired, the interaction device of the virtual scene emits, in response to the first sliding operation, an arc-shaped transmission line along a target emission direction with the third position of the interaction controller in the virtual reality scene as the starting point, the arc-shaped transmission line intersecting the ground of the virtual reality scene. The arc-shaped transmission line may have a corresponding curve equation; with the starting point and the target emission direction of the arc-shaped transmission line known, the intersection point of the arc-shaped transmission line with the ground of the virtual reality scene can be determined and the arc-shaped transmission line can be displayed. The intersection point of the arc-shaped transmission line with the ground of the virtual reality scene is then taken as the second position, and the virtual object is controlled to move from the first position to the second position.
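The patent only states that the arc-shaped transmission line has a curve equation and that its intersection with the ground is taken as the second position; a sketch under the assumption that the curve is a simple ballistic arc over a y-up ground plane at y = 0 (speed, gravity and sampling step are made-up values) could be:

```python
import numpy as np

def arc_ground_intersection(start, direction, speed=6.0, gravity=9.8, dt=0.02, max_t=5.0):
    """Sample a ballistic arc from `start` along `direction` and return the first
    point where it crosses the ground plane y = 0 (used as the second position)."""
    start = np.asarray(start, dtype=float)
    d = np.asarray(direction, dtype=float)
    velocity = d / np.linalg.norm(d) * speed

    prev = start
    t = dt
    while t <= max_t:
        point = start + velocity * t + np.array([0.0, -0.5 * gravity * t * t, 0.0])
        if point[1] <= 0.0 and prev[1] > 0.0:
            # Interpolate between the last sample above the ground and this one.
            alpha = prev[1] / (prev[1] - point[1])
            return prev + alpha * (point - prev)
        prev = point
        t += dt
    return None  # the arc never reached the ground within max_t

# Example: ring held 1.2 m above the ground, aiming slightly downward and forward.
second_position = arc_ground_intersection(start=(0.0, 1.2, 0.0), direction=(0.0, -0.2, 1.0))
```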
Referring to fig. 9, 901 denotes the interaction controller, 902 denotes the arc-shaped transmission line emitted along the target emission direction with the third position of the interaction controller in the virtual reality scene as the starting point, and 903 denotes the intersection point at which the arc-shaped transmission line intersects the ground of the virtual reality scene. In one possible implementation, a roughly circular region may be presented at the intersection point, making the presentation of the intersection point more intuitive.
In the embodiment of the application, the arc-shaped transmission line is emitted for selection and aiming assistance, which makes it convenient for the user to determine the second position to move to and makes the movement more intuitive.
It should be noted that one key to emitting the arc-shaped transmission line is the determination of the starting point: before the arc-shaped transmission line is emitted, in response to the first sliding operation, along the target emission direction with the third position of the interaction controller in the virtual reality scene as the starting point, the starting point of the arc-shaped transmission line may be determined, that is, the third position of the interaction controller in the virtual reality scene may be determined. Specifically, when the interaction controller is a wearable interaction controller in the form of a ring worn at the second joint of the index finger, the interaction device of the virtual scene may also capture a bare-hand image. The interaction device of the virtual scene captures the bare-hand image through a camera, that is, the interaction device of the virtual scene further includes a camera, and the bare-hand image can be obtained when the user's bare hand enters the field of view of the camera. Image recognition is then performed on the bare-hand image to obtain the fourth position of the bare hand in the virtual reality scene, and the third position of the interaction controller in the virtual reality scene is determined based on the fourth position and the relative positional relationship between the second joint of the index finger and the bare hand.
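A sketch of this third-position computation, assuming the bare-hand recognition yields a hand position (the fourth position) plus a hand rotation, and that the offset of the index finger's second joint relative to the hand is a known calibration value (all names and numbers below are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def ring_position_in_scene(hand_position, hand_rotation, joint_offset_local=(0.03, 0.0, 0.09)):
    """Estimate the interaction controller's position (third position) in the
    virtual reality scene from the recognized bare-hand position (fourth position).

    `hand_rotation` is a 3x3 rotation matrix for the hand pose, and
    `joint_offset_local` is an assumed fixed offset (in metres) from the hand
    origin to the second joint of the index finger where the ring is worn.
    """
    hand_position = np.asarray(hand_position, dtype=float)
    rotation = np.asarray(hand_rotation, dtype=float)
    offset = np.asarray(joint_offset_local, dtype=float)
    return hand_position + rotation @ offset

# Example with an identity hand rotation: the ring sits a few centimetres
# forward of the recognized hand position.
third_position = ring_position_in_scene((0.1, 1.2, 0.4), np.eye(3))
```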
The other key to emitting the arc-shaped transmission line is the determination of the target emission direction: before the arc-shaped transmission line is emitted, in response to the first sliding operation, along the target emission direction with the third position of the interaction controller in the virtual reality scene as the starting point, the target emission direction of the arc-shaped transmission line may be determined. The target emission direction may be determined in multiple ways. In one possible implementation, the interaction controller may further include an Inertial Measurement Unit (IMU); in this case, the target emission direction may be determined by acquiring the measurement data of the inertial measurement unit, converting the measurement data into a quaternion, and determining the target emission direction of the arc-shaped transmission line based on the quaternion. At this point the arc-shaped transmission line may be referred to as an IMU ray.
An IMU is a common type of sensor that provides measurement data in a time-series format. An IMU generally includes an accelerometer and a gyroscope. The accelerometer is responsible for acceleration measurement and provides the acceleration of the motion along the x, y and z axes of its local coordinate system. The gyroscope is responsible for angular velocity measurement, measuring the angular velocities around the x, y and z axes of its local coordinate system, and the measurement results can be integrated into angles. Since the accelerations and angular velocities are each represented in a three-axis coordinate system, they together produce a 6-dimensional data stream represented as a time series, which constitutes the corresponding measurement data.
The measurement data acquired through the IMU are fused by an algorithm into a quaternion (x, y, z, w) to represent the physical orientation of the interaction controller (i.e., the target emission direction). As a mathematical format, the quaternion's four parameters x, y, z, w are only for recording and cannot directly reflect the deflection angle about a given axis, so Euler angles can be further calculated based on the quaternion, and the target emission direction can thus be represented by the Euler angles.
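For a unit quaternion (x, y, z, w) fused from the IMU data, the Euler angles and a forward vector usable as the target emission direction can be derived with standard formulas; the Z-Y-X angle convention and the local forward axis (0, 0, 1) below are assumptions chosen for illustration.

```python
import math

def quaternion_to_euler(x, y, z, w):
    """Convert a unit quaternion (x, y, z, w) into roll/pitch/yaw Euler angles
    in radians (Z-Y-X convention assumed)."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    sin_pitch = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(sin_pitch)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

def quaternion_forward(x, y, z, w):
    """Rotate an assumed local 'forward' axis (0, 0, 1) by the quaternion to
    obtain a direction vector for the arc-shaped transmission line."""
    fx = 2.0 * (x * z + w * y)
    fy = 2.0 * (y * z - w * x)
    fz = 1.0 - 2.0 * (x * x + y * y)
    return fx, fy, fz

# Example: the identity quaternion leaves the forward axis unchanged.
assert quaternion_forward(0.0, 0.0, 0.0, 1.0) == (0.0, 0.0, 1.0)
```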
In one possible implementation, when the arc-shaped transmission line intersects the ground of the virtual reality scene, the interaction device of the virtual scene may further control the interaction controller to emit first prompt information, so as to provide feedback for the user. When the first prompt information is vibration, the interaction controller may include a linear motor, and the vibration is emitted through the linear motor.
Taking the interaction controller in the form of a ring as an example, referring to fig. 10a and fig. 10b, the ring is worn on the user's index finger. When the user performs the first sliding operation of sliding up with the thumb, the arc-shaped transmission line can be emitted; when the arc-shaped transmission line intersects the ground, a roughly circular region can be displayed at the intersection point, and the ring is controlled to vibrate to provide tactile feedback at the second joint of the user's index finger. The ring vibration at this moment is shown by the illustration within the dashed rectangle in fig. 10b.
In this way, the feeling of emptiness in gesture interaction can be reduced, and more accurate and real-time operation feedback is provided for the user's bare hand.
In some cases, the user may need a certain amount of time to determine the second position, while the first sliding operation may be completed relatively quickly. Therefore, in order to give the user enough time to determine the second position, in one possible implementation the touch gesture operation may further include a touch operation in addition to the first sliding operation, that is, the user maintains a continuous touch operation when the first sliding operation is completed. In this case, in S302, if the first sliding operation is acquired, the manner of emitting, in response to the first sliding operation, the arc-shaped transmission line along the target emission direction with the third position of the interaction controller in the virtual reality scene as the starting point may be as follows: if the first sliding operation is acquired, and a continuous touch operation is acquired when the first sliding operation is completed, the arc-shaped transmission line is emitted, in response to the first sliding operation, along the target emission direction with the third position of the interaction controller in the virtual reality scene as the starting point. Correspondingly, the manner of taking the intersection point of the arc-shaped transmission line with the ground of the virtual reality scene as the second position and controlling the virtual object to move from the first position to the second position may be as follows: when the touch operation ends, the arc-shaped transmission line is controlled to disappear, the intersection point of the arc-shaped transmission line with the ground of the virtual reality scene is taken as the second position, and the virtual object is controlled to move from the first position to the second position.
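A sketch of the control flow described above (a slide up starts aiming, the arc is shown while the thumb keeps touching, and releasing the touch moves the virtual object); the scene interface methods show_arc, arc_ground_point, hide_arc and move_virtual_object are hypothetical names, not an API from the disclosure.

```python
class TeleportController:
    """Minimal, assumed structure of the instantaneous-movement flow: a first
    sliding operation (slide up) while the thumb stays on the ring shows the
    arc-shaped transmission line; lifting the thumb ends the touch operation,
    hides the arc and moves the virtual object to the intersection point."""

    def __init__(self, scene):
        self.scene = scene          # assumed object exposing the calls used below
        self.aiming = False

    def on_slide(self, direction: str) -> None:
        if direction == "up":
            self.aiming = True

    def on_touch_frame(self, ring_position, emit_direction) -> None:
        """Called every frame while the thumb is still touching the ring."""
        if self.aiming:
            self.scene.show_arc(ring_position, emit_direction)

    def on_touch_end(self) -> None:
        if not self.aiming:
            return
        target = self.scene.arc_ground_point()   # intersection = second position
        self.scene.hide_arc()
        if target is not None:
            self.scene.move_virtual_object(target)
        self.aiming = False
```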
Referring to fig. 9, fig. 10a and fig. 10b, the user performs the first sliding operation of sliding up with the thumb and keeps the thumb in a continuous touch operation when the first sliding operation is completed, and the arc-shaped transmission line is emitted from the ring. When the touch operation ends, for example as shown in fig. 11, the user's thumb is lifted off the ring (see the direction indicated by the arrow in fig. 11) and no longer touches the ring; at this moment the arc-shaped transmission line disappears and the virtual object is controlled to move from the first position to the second position.
It will be appreciated that, during the continuation of the touch operation, if the user's hand moves (e.g., moves up and down, or rotates around the wrist) and thereby causes the third position of the interaction controller in the virtual reality scene to change, the starting point and the target emission direction may be updated, so as to update the displayed arc-shaped transmission line.
In this way, the second position to be moved to can be adjusted, according to the user's needs, during the continuation of the touch operation, which satisfies the user's needs and improves the user experience.
In one possible implementation, the touch gesture operation includes a second sliding operation. In S302, if the touch gesture operation for the interaction controller is acquired, the manner of controlling the virtual object, in response to the touch gesture operation, to move in the virtual reality scene may be as follows: if the second sliding operation for the interaction controller is acquired, the virtual object is controlled to rotate by a target angle according to the sliding direction of the second sliding operation. The target angle may be a preset fixed angle, for example 30 degrees or 45 degrees, or may be determined from the sliding amplitude of the second sliding operation according to a correspondence between the target angle and the sliding amplitude, which is not limited by the embodiment of the present application.
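A sketch of deriving the signed target angle from the second sliding operation, covering both the preset fixed angle and the amplitude-based variant mentioned above; the sign convention and the degrees_per_unit scaling factor are illustrative assumptions.

```python
SNAP_TURN_ANGLE_DEG = 45.0  # preset target angle; an amplitude-based angle is also possible

def target_turn_angle(direction: str, amplitude: float | None = None, degrees_per_unit: float = 1.5) -> float:
    """Return the signed rotation for a second sliding operation.

    Positive = turn right, negative = turn left (sign convention assumed).
    If an amplitude is given, the angle is scaled from it with the assumed
    factor `degrees_per_unit`; otherwise the preset fixed angle is used.
    """
    magnitude = SNAP_TURN_ANGLE_DEG if amplitude is None else amplitude * degrees_per_unit
    if direction == "left":
        return -magnitude
    if direction == "right":
        return magnitude
    return 0.0

# Example: a left slide with the preset angle turns the virtual object 45 degrees to the left.
assert target_turn_angle("left") == -45.0
```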
Referring to fig. 12, taking a sliding operation in which the second sliding operation is left sliding as an example, when the user performs the second sliding operation of left sliding by means of the thumb (for example, shown by an arrow in fig. 12), the interactive apparatus of the virtual scene controls the virtual object to rotate to the left by a target angle in response to the second sliding operation, taking a preset 45 degree target angle as an example, at which time the angle of the cube observed by the user changes, the cube in the first view displayed to the user before rotation is shown as a dotted cube in fig. 13, and the cube in the second view displayed to the user after rotation is shown as a solid cube in fig. 13.
In one possible implementation, when the virtual object rotates by the target angle, that is, the steering is completed, the interaction device of the virtual scene may further control the interaction controller to send out the second prompt information, so as to provide feedback for the user. The second prompt information can vibrate, sound and the like, and when the second prompt information vibrates, the interactive controller can comprise a linear motor, and vibration is sent out through the linear motor.
Taking the form of the interactive controller as an example, referring to fig. 14, the finger ring is worn on the index finger of the user, and when the user performs the second sliding operation of sliding left through the thumb, the virtual object is controlled to rotate by a target angle (for example, 45 degrees), and the finger ring is controlled to vibrate, so as to provide tactile feedback for the second joint of the index finger of the user. The ring vibration at this time is shown by the illustration within the dashed circle in fig. 14.
In this way, the intangible, empty-handed feeling of gesture interaction can be reduced, and more accurate, real-time operation feedback is provided to the user's bare hand.
From the above control of movement and turning it can be seen that, in the motion control experience, the interaction controller is triggered to vibrate at two key moments: when the arc-shaped transmission line intersects the ground, and when the turn is completed. An XR experience such as a game may involve many operations, which can be grouped into core interactions (e.g., hitting targets, finding props, obtaining resources, and avoiding attacks), common operations (e.g., moving and turning), and operations on application tools (e.g., menu settings); their ordering is shown in fig. 15, where the attention they demand should decrease from top to bottom. Movement and turning are common operations in VR, AR, MR and XR scenarios; their main purpose is to let the user reach a destination efficiently and accurately without excessive effort, so that effort is saved for the core interactions. The embodiments of the present application therefore add vibration feedback at these two key moments to prompt the user that the operation is complete, so that the user forms a tactile memory and can clearly perceive each step and turning angle without diverting much attention. The user can thus focus more on the core interaction operations, such as finding props, hitting targets, and avoiding attacks.
As can be seen from the above technical solution, a first view field picture of a virtual object can be displayed, the first view field picture including the content that the virtual object observes in a first view field in the virtual reality scene. When the user needs to control the virtual object to move in the virtual reality scene so as to feel, immersively, that the user himself or herself is moving, the user can input a touch gesture operation through the interaction controller. Because the touch gesture operation only requires the user to touch the interaction controller, no force needs to be exerted, which saves effort and greatly reduces the user's physical exertion. If a touch gesture operation for the interaction controller is acquired, the virtual object is controlled, in response to the touch gesture operation, to move in the virtual reality scene, and a second view field picture of the virtual object is displayed, the second view field picture including the content that the virtual object observes in a second view field in the virtual reality scene, the second view field being the view field obtained after the virtual object moves. In the present application, the interaction controller serves as the input device for the touch gesture operation, so the touch gesture operation can be received directly and the motion control instruction for controlling the motion of the virtual object can then be triggered, without recognizing the gesture through a vision algorithm. This greatly simplifies the response flow of motion control and, compared with the response speed of vision-based gesture recognition, effectively ensures that the motion control instruction is executed efficiently, improving the fluency of the experience. In addition, since no vision algorithm is needed to recognize the gesture, computing resources and power consumption are saved, and the discomfort caused to the user by heat dissipation of the interaction device of the virtual scene can be effectively reduced.
The above describes an interaction method for a virtual scene. The method may be implemented on a human-machine hardware architecture in which the "human" part is the user's bare hand and the machine part includes the interaction device of the virtual scene and the interaction controller. In the embodiments of the present application, the interaction device of the virtual scene is mainly described taking a head-mounted display device as an example, together with the interaction controller.
Referring to fig. 16, fig. 16 is a flow diagram of how the hardware cooperates to implement the interaction method for the virtual scene. The head-mounted display device may include a data operation processing system, a display screen, a speaker, and a camera; the finger ring may include an OFN (optical finger navigation) module and a linear motor, and is connected to the head-mounted display device through a 2.4G network. The OFN module receives the user's touch gesture operation (such as the first sliding operation or the second sliding operation); the linear motor provides haptic feedback (such as vibration); the display screen and speaker present audiovisual content and feedback, the display screen presenting the view field pictures and the speaker providing auditory feedback (such as footstep sounds); the camera captures an image of the bare hand when the bare hand moves into its field of view; and the data operation processing system recognizes the touch gesture operation, issues motion control instructions based on the touch gesture operation to control the motion of the virtual object, controls the view field pictures shown on the display screen and the auditory feedback of the speaker, issues haptic feedback instructions to make the ring vibrate, performs image recognition on the bare-hand image, and so on.
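As a minimal sketch, assuming the OFN module reports per-sample (dx, dy) displacements and a touch flag (the actual report format, axis signs, and threshold are assumptions not given in the embodiment), the data operation processing system might classify buffered samples into a touch gesture operation like this:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OfnSample:
    t: float        # timestamp in seconds
    dx: int         # horizontal displacement reported by the OFN module
    dy: int         # vertical displacement (assumed negative when the thumb slides up)
    touching: bool  # whether the thumb is on the OFN surface

def classify_gesture(samples: List[OfnSample], swipe_threshold: int = 20) -> Optional[str]:
    """Classify a run of OFN samples into a touch gesture operation.

    Returns "slide_up" (first sliding operation), "slide_left" or "slide_right"
    (second sliding operation), or None when no swipe threshold is reached.
    """
    if not samples:
        return None
    total_dx = sum(s.dx for s in samples)
    total_dy = sum(s.dy for s in samples)
    if abs(total_dy) >= abs(total_dx) and -total_dy >= swipe_threshold:
        return "slide_up"
    if abs(total_dx) >= swipe_threshold:
        return "slide_right" if total_dx > 0 else "slide_left"
    return None

def touch_held_after_swipe(samples: List[OfnSample]) -> bool:
    """True when the thumb stays on the OFN after the swipe, i.e. the continuous
    touch operation that accompanies the first sliding operation."""
    return bool(samples) and samples[-1].touching
```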
Based on the above flow diagram, the implementation of motion control is described below in terms of instantaneous movement, turning, and tactile feedback.
For instantaneous movement, the implementation flow can be shown in fig. 17; the display screen of the head-mounted display device presents a 3D room environment. When the user's bare hand enters the field of view of the camera of the head-mounted display device, a virtual hand appears on the display screen; the camera captures an image of the bare hand, and the head-mounted display device connects to the finger ring through the 2.4G network to acquire the touch gesture operation (see 1701 in fig. 17). The head-mounted display device performs image recognition on the bare-hand image to obtain the fourth position of the bare hand in the virtual reality scene (see 1702 in fig. 17). It then recognizes the skeleton of the second joint of the index finger, determines the third position of the interaction controller in the virtual reality scene from the relative positional relationship between the second joint of the index finger and the bare hand together with the fourth position, and takes the third position as the starting point of the arc-shaped transmission line; in other words, recognizing the skeleton of the second joint of the index finger yields the starting point of the arc-shaped transmission line (see 1703 in fig. 17). Since the ring serving as the interaction controller is worn on the second joint of the index finger, the position of that joint is the third position. The positions (such as the first position, the second position, the third position, and the fourth position) may be represented by spatial coordinates, e.g., (x, y, z); the coordinate system may take some point on the ground as its origin, with the ground lying in the xz plane and the y axis passing through the origin perpendicular to the xz plane.
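A minimal sketch of this position derivation, assuming the hand pose from image recognition provides a position and an orientation quaternion and that the joint offset is expressed in the hand's local frame (all of these representations are assumptions), might look like this:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def ring_start_point(hand_position, hand_orientation_xyzw, joint_offset_local):
    """Compute the third position (start of the arc-shaped transmission line).

    hand_position:         fourth position of the bare hand, (x, y, z) in scene coordinates
    hand_orientation_xyzw: orientation of the hand as a quaternion (x, y, z, w)
    joint_offset_local:    offset of the index-finger second joint relative to the hand
    """
    rotation = R.from_quat(hand_orientation_xyzw)
    return np.asarray(hand_position, dtype=float) + rotation.apply(joint_offset_local)
```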
After the ring connection is activated by touching the OFN, the head-mounted display device connects to the finger ring through the 2.4G network, a virtual hand corresponding to the bare hand appears on the display screen, and a ring model appears on the virtual hand. Because the ring is worn on the hand, it follows the movements of the user's bare hand. The head-mounted display device acquires measurement data from the inertial measurement unit (see 1704 in fig. 17), converts the measurement data into a quaternion, and determines the target emission direction of the arc-shaped transmission line from that quaternion (see 1705 in fig. 17). The head-mounted display device acquires photoelectric sensing data from the ring over the 2.4G network (see 1706 in fig. 17) and, from this data, detects the first sliding operation of sliding upward together with a maintained continuous touch operation (see 1707 in fig. 17), so the condition for triggering instantaneous movement is satisfied and the arc-shaped transmission line is emitted (see 1708 in fig. 17). The head-mounted display device determines the intersection point at which the arc-shaped transmission line meets the ground (see 1709 in fig. 17). When the arc-shaped transmission line intersects the ground, the head-mounted display device sends a haptic feedback instruction to the ring over the 2.4G network; after the linear motor of the ring receives the instruction, the ring vibrates, letting the user's bare hand feel that the arc-shaped transmission line has landed on the ground. While the continuous touch operation is maintained, the user can move the bare hand or rotate the ring, and step 1702 is executed again to adjust the intersection point of the arc-shaped transmission line with the ground. The head-mounted display device monitors in real time the position of the second joint of the bare hand's index finger and changes in the measurement data, and updates the starting point of the arc-shaped transmission line, the target emission direction, and the intersection point with the ground accordingly. When the user's thumb lifts off the finger ring and the touch operation ends, the head-mounted display device detects the interruption of the touch operation over the 2.4G network (see 1710 in fig. 17) and determines that the instantaneous-movement instruction is complete; the position of the virtual object (which may also be a virtual camera) after the jump equals the position of the intersection point, i.e., the second position, which lies in the xz plane of the coordinate system (see 1711 in fig. 17). The arc-shaped transmission line on the display screen disappears, and the virtual object moves instantaneously from the first position to the second position.
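The embodiment does not fix the exact shape of the arc or the quaternion convention; as a minimal sketch, assuming a ballistic (parabolic) arc, a scipy-style (x, y, z, w) quaternion, and illustrative speed and step parameters, the target emission direction and the ground intersection (the second position) could be computed as follows:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, -9.8, 0.0])   # assumed downward pull that bends the arc

def target_emission_direction(ring_quat_xyzw, local_forward=(0.0, 0.0, -1.0)):
    """Rotate an assumed local forward axis of the ring by its IMU-derived quaternion."""
    return R.from_quat(ring_quat_xyzw).apply(local_forward)

def arc_ground_intersection(start, direction, speed=6.0, step=0.02, max_steps=500):
    """Step along the arc-shaped transmission line from `start` and return the point
    where it crosses the ground plane y = 0 (the xz plane), or None if it never does."""
    pos = np.asarray(start, dtype=float)
    vel = np.asarray(direction, dtype=float)
    vel = vel / np.linalg.norm(vel) * speed
    for _ in range(max_steps):
        nxt = pos + vel * step
        if nxt[1] <= 0.0:                        # crossed the ground plane
            t = pos[1] / (pos[1] - nxt[1])       # interpolate back to y = 0 exactly
            return pos + (nxt - pos) * t
        pos, vel = nxt, vel + GRAVITY * step
    return None
```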
For turning, the implementation flow can be shown in fig. 18; the display screen of the head-mounted display device presents a 3D room environment. The ring connection is activated by touching the OFN, the head-mounted display device connects to the finger ring through the 2.4G network (see 1801 in fig. 18), a virtual hand corresponding to the bare hand appears on the display screen, and a ring model appears on the virtual hand. After the second sliding operation of sliding left or right is performed on the OFN, the thumb leaves the OFN, so the touch operation ends as soon as the second sliding operation is completed. The head-mounted display device acquires photoelectric sensing data from the ring over the 2.4G network (see 1802 in fig. 18) and, from this data, detects a second sliding operation of sliding left or right together with a short touch operation (see 1803 in fig. 18), so the condition for triggering turning is satisfied. The head-mounted display device sends a haptic feedback instruction to the ring over the 2.4G network; after the linear motor of the ring receives the instruction, the ring vibrates, letting the user's bare hand feel that the turn is complete. According to the sliding direction of the second sliding operation, the virtual object on the display screen rotates about the y axis to the left or to the right by the target angle, for example 45 degrees (see 1804 in fig. 18).
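A minimal sketch of applying that turn, assuming the virtual object's orientation is kept as an (x, y, z, w) quaternion and that a left slide means a positive (counter-clockwise) yaw about the world y axis (both assumptions), could be:

```python
from scipy.spatial.transform import Rotation as R

def apply_turn(object_orientation_xyzw, slide_direction: str, target_angle_deg: float = 45.0):
    """Compose a yaw rotation about the world y axis with the current orientation."""
    sign = 1.0 if slide_direction == "left" else -1.0
    yaw = R.from_euler("y", sign * target_angle_deg, degrees=True)
    return (yaw * R.from_quat(object_orientation_xyzw)).as_quat()

# Example: starting from the identity orientation, one left slide turns 45 degrees.
new_orientation = apply_turn([0.0, 0.0, 0.0, 1.0], "left")
```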
It should be noted that the implementations provided in the above aspects may be further combined to provide still further implementations.
Based on the interaction method for a virtual scene provided in the embodiment corresponding to fig. 3, an interaction system for a virtual scene in the embodiments of the present application includes an interaction device of the virtual scene and an interaction controller, connected through a network:
The interaction equipment of the virtual scene is used for displaying a first view field picture of the virtual object, wherein the first view field picture comprises content observed by the virtual object in a first view field in the virtual reality scene;
The interaction controller is used for inputting touch gesture operation;
And the interaction device of the virtual scene is further configured to, if a touch gesture operation for the interaction controller is acquired, respond to the touch gesture operation, control the virtual object to move in the virtual reality scene, and display a second view field picture of the virtual object, where the second view field picture includes content observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
It will be appreciated that the interaction system of the virtual scene may be as shown in fig. 2, although fig. 2 is described taking a head-mounted display device as the example of the interaction device of the virtual scene.
Based on the interaction method of the virtual scene provided in the corresponding embodiment of fig. 3, the embodiment of the application further provides an interaction device 1900 of the virtual scene. Referring to fig. 19, the interaction device of the virtual scene includes a display unit 1901 and a control unit 1902:
The display unit 1901 is configured to display a first view field screen of a virtual object, where the first view field screen includes content that the virtual object observes in a first view field in a virtual reality scene;
The control unit 1902 is configured to, if a touch gesture operation for the interaction controller is acquired, control the virtual object to move in the virtual reality scene in response to the touch gesture operation;
The display unit 1901 is further configured to display a second view field screen of the virtual object, where the second view field screen includes content that is observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
In one possible implementation manner, the touch gesture operation includes a first sliding operation, and the control unit 1902 is specifically configured to:
And if the first sliding operation for the interactive controller is acquired, responding to the first sliding operation, and controlling the virtual object to move from a first position of the virtual reality scene to a second position, wherein the first position is a position corresponding to a first visual field, and the second position is a position corresponding to a second visual field.
In one possible implementation manner, the control unit 1902 is specifically configured to:
If the first sliding operation is acquired, responding to the first sliding operation, and sending an arc-shaped transmission line along a target transmitting direction by taking a third position of the interaction controller in the virtual reality scene as a starting point, wherein the arc-shaped transmission line intersects with the ground of the virtual reality scene;
and taking an intersection point of the arc-shaped transmission line and the ground of the virtual reality scene as the second position, and controlling the virtual object to move from the first position to the second position.
In one possible implementation manner, the touch gesture operation further includes a touch operation, and the control unit 1902 is specifically configured to:
If the first sliding operation is acquired, and a continuous touch operation is acquired when the first sliding operation is completed, responding to the first sliding operation, and sending out the arc-shaped transmission line along the target transmission direction by taking a third position of the interaction controller in the virtual reality scene as a starting point;
The intersecting point of the arc-shaped transmission line and the ground of the virtual reality scene is taken as the second position, and the virtual object is controlled to move from the first position to the second position, and the method comprises the following steps:
When the touch operation is finished, the arc-shaped transmission line is controlled to disappear, an intersection point of the arc-shaped transmission line and the ground of the virtual reality scene is taken as the second position, and the virtual object is controlled to move from the first position to the second position.
In a possible implementation manner, the apparatus further includes an updating unit:
And the updating unit is used for updating the starting point and the target transmitting direction to update the arc-shaped transmission line if the third position of the interaction controller in the virtual reality scene changes in the continuous process of the touch operation.
In one possible implementation, the interaction controller is a wearable interaction controller.
In one possible implementation, the wearable interactive controller is in the form of a ring worn at the second joint of the index finger.
In a possible implementation manner, the device further includes a capturing unit, an identifying unit, and a determining unit:
the capturing unit is configured to capture a bare hand image before an arc transmission line is sent along a target transmission direction with a third position of the interaction controller in the virtual reality scene as a starting point in response to a first sliding operation if the touch gesture operation includes the first sliding operation;
the identification unit is used for carrying out image identification on the bare hand image to obtain a fourth position of the bare hand in the virtual reality scene;
The determining unit is used for determining a third position of the interaction controller in the virtual reality scene based on the relative position relation between the second joint of the index finger and the bare hand and the fourth position.
In a possible implementation manner, the interaction controller includes an inertial measurement unit, and the apparatus further includes an acquisition unit and a determination unit:
the acquisition unit is used for acquiring measurement data of the inertial measurement unit before the interaction controller starts to send an arc-shaped transmission line along the target transmission direction by taking a third position of the interaction controller in the virtual reality scene as a starting point in response to the first sliding operation;
The determining unit is used for converting the measurement data into quaternions and determining the target transmitting direction of the arc-shaped transmission line based on the quaternions.
In a possible implementation manner, the control unit 1902 is further configured to:
And when the arc-shaped transmission line is intersected with the ground of the virtual reality scene, controlling the interaction controller to send out first prompt information.
In one possible implementation manner, the touch gesture operation includes a second sliding operation, and the control unit 1902 is specifically configured to:
And if the second sliding operation aiming at the interaction controller is acquired, controlling the virtual object to rotate by a target angle according to the sliding direction of the second sliding operation.
In a possible implementation manner, the control unit 1902 is further configured to:
and when the virtual object rotates the target angle, controlling the interaction controller to send out second prompt information.
In a possible implementation manner, the interaction controller includes a photo-electric sensing module, and the obtaining unit is further configured to:
Acquiring photoelectric sensing data from a photoelectric sensing module of the interaction controller;
and determining and acquiring the touch gesture operation according to the photoelectric sensing data.
As can be seen from the above technical solution, a first view field picture of a virtual object can be displayed, the first view field picture including the content that the virtual object observes in a first view field in the virtual reality scene. When the user needs to control the virtual object to move in the virtual reality scene so as to feel, immersively, that the user himself or herself is moving, the user can input a touch gesture operation through the interaction controller. Because the touch gesture operation only requires the user to touch the interaction controller, no force needs to be exerted, which saves effort and greatly reduces the user's physical exertion. If a touch gesture operation for the interaction controller is acquired, the virtual object is controlled, in response to the touch gesture operation, to move in the virtual reality scene, and a second view field picture of the virtual object is displayed, the second view field picture including the content that the virtual object observes in a second view field in the virtual reality scene, the second view field being the view field obtained after the virtual object moves. In the present application, the interaction controller serves as the input device for the touch gesture operation, so the touch gesture operation can be received directly and the motion control instruction for controlling the motion of the virtual object can then be triggered, without recognizing the gesture through a vision algorithm. This greatly simplifies the response flow of motion control and, compared with the response speed of vision-based gesture recognition, effectively ensures that the motion control instruction is executed efficiently, improving the fluency of the experience. In addition, since no vision algorithm is needed to recognize the gesture, computing resources and power consumption are saved, and the discomfort caused to the user by heat dissipation of the interaction device of the virtual scene can be effectively reduced.
The embodiments of the present application further provide an interaction device for a virtual scene that can execute the above interaction method for a virtual scene. The interaction device of the virtual scene may be, for example, a terminal; the following description takes a smart phone as an example of the terminal:
Fig. 20 is a block diagram showing part of the structure of a smart phone according to an embodiment of the present application. Referring to fig. 20, the smart phone includes: a radio frequency (RF) circuit 2010, a memory 2020, an input unit 2030, a display unit 2040, a sensor 2050, an audio circuit 2060, a wireless fidelity (WiFi) module 2070, a processor 2080, and a power supply 2090. The input unit 2030 may include a touch panel 2031 and other input devices 2032, the display unit 2040 may include a display panel 2041, and the audio circuit 2060 may include a speaker 2061 and a microphone 2062. It will be appreciated that the structure shown in fig. 20 does not limit the smart phone, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The memory 2020 may be used for storing software programs and modules, and the processor 2080 executes various functional applications and data processing of the smart phone by running the software programs and modules stored in the memory 2020. The memory 2020 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, phonebooks, etc.) created according to the use of the smart phone. In addition, the memory 2020 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
The processor 2080 is a control center of the smart phone, connects various parts of the entire smart phone using various interfaces and lines, and performs various functions and processes data of the smart phone by running or executing software programs and/or modules stored in the memory 2020, and invoking data stored in the memory 2020. Optionally, the processor 2080 may include one or more processing units; preferably, the processor 2080 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 2080.
In this embodiment, the processor 2080 in the smart phone may perform the following steps:
Displaying a first view field picture of a virtual object, wherein the first view field picture comprises content observed by the virtual object in a first view field in a virtual reality scene;
And if touch gesture operation aiming at the interaction controller is acquired, responding to the touch gesture operation, controlling the virtual object to move in the virtual reality scene, and displaying a second view field picture of the virtual object, wherein the second view field picture comprises content observed by the virtual object in the second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
The interaction device for a virtual scene provided in the embodiments of the present application may also be a server. As shown in fig. 21, fig. 21 is a block diagram of a server 2100 provided in an embodiment of the present application. The server 2100 may vary considerably in configuration or performance, and may include one or more processors, such as central processing units (CPU) 2122, a memory 2132, and one or more storage media 2130 (such as one or more mass storage devices) storing application programs 2142 or data 2144. The memory 2132 and the storage medium 2130 may provide transient or persistent storage. The program stored in the storage medium 2130 may include one or more modules (not shown), each of which may include a series of instruction operations on the server. Further, the central processing unit 2122 may be configured to communicate with the storage medium 2130 and execute, on the server 2100, the series of instruction operations in the storage medium 2130.
The server 2100 may also include one or more power supplies 2126, one or more wired or wireless network interfaces 2150, one or more input/output interfaces 2158, and/or one or more operating systems 2141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
In the present embodiment, the steps performed by the central processor 2122 in the server 2100 described above may be implemented based on the structure shown in fig. 21.
According to an aspect of the present application, there is provided a computer-readable storage medium for storing program code for executing the virtual scene interaction method according to the foregoing embodiments.
According to one aspect of the present application, there is provided a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the interaction device of the virtual scene reads the computer program from the computer readable storage medium, and the processor executes the computer program, so that the interaction device of the virtual scene performs the methods provided in the various alternative implementations of the above embodiments.
The description of each process or structure corresponding to the drawings has its own emphasis; for parts of a given process or structure that are not described in detail, reference may be made to the related description of other processes or structures.
The terms "first," "second," "third," "fourth," and the like in the description of the application and in the above figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in part, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing an interaction device of a virtual scene (which may be a computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (18)

1. A method of interacting with a virtual scene, the method comprising:
Displaying a first view field picture of a virtual object, wherein the first view field picture comprises content observed by the virtual object in a first view field in a virtual reality scene;
And if touch gesture operation aiming at the interaction controller is acquired, responding to the touch gesture operation, controlling the virtual object to move in the virtual reality scene, and displaying a second view field picture of the virtual object, wherein the second view field picture comprises content observed by the virtual object in the second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
2. The method of claim 1, wherein the touch gesture operation comprises a first swipe operation, and wherein if a touch gesture operation for an interactive controller is acquired, controlling the virtual object to move in the virtual reality scene in response to the touch gesture operation comprises:
And if the first sliding operation for the interactive controller is acquired, responding to the first sliding operation, and controlling the virtual object to move from a first position of the virtual reality scene to a second position, wherein the first position is a position corresponding to a first visual field, and the second position is a position corresponding to a second visual field.
3. The method of claim 2, wherein if the first sliding operation for the interactive controller is acquired, controlling the virtual object to move from the first position to the second position of the virtual reality scene in response to the first sliding operation comprises:
If the first sliding operation is acquired, responding to the first sliding operation, and sending an arc-shaped transmission line along a target transmitting direction by taking a third position of the interaction controller in the virtual reality scene as a starting point, wherein the arc-shaped transmission line intersects with the ground of the virtual reality scene;
and taking an intersection point of the arc-shaped transmission line and the ground of the virtual reality scene as the second position, and controlling the virtual object to move from the first position to the second position.
4. The method of claim 3, wherein the touch gesture operation further comprises a touch operation, and the responding to the first sliding operation to send out an arc-shaped transmission line along a target transmission direction with a third position of the interaction controller in the virtual reality scene as a starting point if the first sliding operation is acquired comprises:
If the first sliding operation is acquired, and a continuous touch operation is acquired when the first sliding operation is completed, responding to the first sliding operation, and sending out the arc-shaped transmission line along the target transmission direction by taking a third position of the interaction controller in the virtual reality scene as a starting point;
The intersecting point of the arc-shaped transmission line and the ground of the virtual reality scene is taken as the second position, and the virtual object is controlled to move from the first position to the second position, and the method comprises the following steps:
When the touch operation is finished, the arc-shaped transmission line is controlled to disappear, an intersection point of the arc-shaped transmission line and the ground of the virtual reality scene is taken as the second position, and the virtual object is controlled to move from the first position to the second position.
5. The method according to claim 4, wherein the method further comprises:
And in the continuous process of the touch operation, if the third position of the interaction controller in the virtual reality scene changes, updating the starting point and the target transmitting direction so as to update the arc-shaped transmission line.
6. The method of any one of claims 1-5, wherein the interactive controller is a wearable interactive controller.
7. The method of claim 6, wherein the wearable interactive controller is in the form of a ring worn at the second joint of the index finger.
8. The method of claim 7, wherein if the touch gesture operation includes a first swipe operation, before issuing an arc-shaped transmission line along a target transmission direction with a third position of the interactive controller in the virtual reality scene as a starting point in response to the first swipe operation, the method further comprises:
capturing a bare hand image;
performing image recognition on the bare hand image to obtain a fourth position of the bare hand in the virtual reality scene;
And determining a third position of the interaction controller in the virtual reality scene based on the relative position relation between the second joint of the index finger and the bare hand and the fourth position.
9. A method according to claim 3, wherein the interactive controller comprises an inertial measurement unit, the method further comprising, prior to issuing an arcuate transmission line along a target transmit direction with a third position of the interactive controller in the virtual reality scene as a starting point in response to the first sliding operation:
acquiring measurement data of the inertial measurement unit;
and converting the measurement data into quaternions, and determining the target emission direction of the arc-shaped transmission line based on the quaternions.
10. The method according to any one of claims 3-5, further comprising:
And when the arc-shaped transmission line is intersected with the ground of the virtual reality scene, controlling the interaction controller to send out first prompt information.
11. The method of claim 1, wherein the touch gesture operation comprises a second sliding operation, and wherein if a touch gesture operation for an interactive controller is acquired, controlling the virtual object to move in the virtual reality scene in response to the touch gesture operation comprises:
And if the second sliding operation aiming at the interaction controller is acquired, controlling the virtual object to rotate by a target angle according to the sliding direction of the second sliding operation.
12. The method of claim 11, wherein the method further comprises:
and when the virtual object rotates the target angle, controlling the interaction controller to send out second prompt information.
13. The method according to any one of claims 1-5, wherein the interactive controller includes a photo-sensing module, and the acquiring a touch gesture operation for the interactive controller includes:
Acquiring photoelectric sensing data from a photoelectric sensing module of the interaction controller;
and determining and acquiring the touch gesture operation according to the photoelectric sensing data.
14. An interactive system of a virtual scene is characterized in that the system comprises an interactive device of the virtual scene and an interactive controller, and the interactive device of the virtual scene and the interactive controller are connected through a network:
The interaction equipment of the virtual scene is used for displaying a first view field picture of the virtual object, wherein the first view field picture comprises content observed by the virtual object in a first view field in the virtual reality scene;
The interaction controller is used for inputting touch gesture operation;
And the interaction device of the virtual scene is further configured to, if a touch gesture operation for the interaction controller is acquired, respond to the touch gesture operation, control the virtual object to move in the virtual reality scene, and display a second view field picture of the virtual object, where the second view field picture includes content observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
15. An interaction device for a virtual scene, the device comprising a display unit and a control unit:
the display unit is used for displaying a first view field picture of the virtual object, wherein the first view field picture comprises contents observed by the virtual object in a first view field in a virtual reality scene;
The control unit is used for responding to the touch gesture operation if the touch gesture operation aiming at the interaction controller is acquired, and controlling the virtual object to move in the virtual reality scene;
The display unit is further configured to display a second view field picture of the virtual object, where the second view field picture includes content observed by the virtual object in a second view field in the virtual reality scene, and the second view field is a view field obtained after the virtual object moves.
16. An interactive device for a virtual scene, the device comprising a processor and a memory:
The memory is used for storing program codes and transmitting the program codes to the processor;
The processor is configured to perform the method of any of claims 1-13 according to instructions in the program code.
17. A computer readable storage medium for storing program code which, when executed by a processor, causes the processor to perform the method of any of claims 1-13.
18. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the method of any of claims 1-13.
