CN108038726B - Article display method and device - Google Patents

Article display method and device

Info

Publication number
CN108038726B
Authority
CN
China
Prior art keywords
target
display
article
item
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711306315.9A
Other languages
Chinese (zh)
Other versions
CN108038726A (en)
Inventor
王小尧 (Wang Xiaoyao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201711306315.9A priority Critical patent/CN108038726B/en
Publication of CN108038726A publication Critical patent/CN108038726A/en
Application granted granted Critical
Publication of CN108038726B publication Critical patent/CN108038726B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an article display method and device, belonging to the field of virtual reality, wherein the method comprises the following steps: displaying a VR item display scene, the VR item display scene including a main display area and a three-dimensional model of at least one item; receiving a selection signal for a target item in the VR item display scene; and moving the target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display according to the selection signal. The embodiment of the disclosure can simulate viewing each article as in a real article display scene, and a designated article can be selected for viewing according to the user's needs; compared with displaying articles as plan views, displaying them as three-dimensional models conveys more article information, thereby achieving a better display effect.

Description

Article display method and device
Technical Field
The embodiment of the disclosure relates to the field of Virtual Reality (VR), and in particular, to an article display method and apparatus.
Background
As products are continuously updated, manufacturers need to promote new products. To achieve a better promotion effect, product structures and functions must first be demonstrated to consumers; however, owing to limitations of venue and product quantity, a physical product cannot be provided to every consumer for demonstration.
In order to solve the above problem, a picture list corresponding to a product may be stored in a terminal having a picture display function, where the picture list includes three views (a front view, a top view, and a left view), a three-dimensional perspective view, a structure detail view, and the like of each product. Furthermore, the terminal displays a corresponding product picture according to the selection signal of the user, so that the user can know the related information of the product from the product picture.
Disclosure of Invention
The embodiment of the disclosure provides an article display method and device, and the technical scheme is as follows:
in a first aspect, there is provided a method of displaying an article, the method comprising:
displaying a VR item display scene including a main display area and a three-dimensional model of at least one item;
receiving a selection signal for a target item in the VR item display scene;
and moving the target three-dimensional model corresponding to the target object to the main display area of the VR object display scene for display according to the selection signal.
Optionally, the moving the target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for displaying includes:
acquiring the area coordinates of the main display area;
amplifying the target three-dimensional model;
and moving the amplified target three-dimensional model to the main display area for display according to the area coordinates.
Optionally, after the target three-dimensional model corresponding to the target item is moved to the main display area of the VR item display scene for display, the method further includes:
receiving a rotation signal for the target item;
reading a rotation action file corresponding to the target object according to the rotation direction indicated by the rotation signal, wherein at least two rotation action files corresponding to the target object are stored in the VR equipment, and different rotation action files correspond to different rotation directions;
and controlling the target three-dimensional model to rotate according to the rotation action file.
Optionally, after the target three-dimensional model corresponding to the target item is moved to the main display area of the VR item display scene for display, the method further includes:
receiving a first operation signal to the target object;
reading an explosion action file corresponding to the target object according to the first operation signal, wherein the explosion action file is used for controlling the target object to be decomposed into parts;
and controlling the target three-dimensional model to be decomposed into a plurality of part models according to the explosion action file.
Optionally, after the target three-dimensional model corresponding to the target item is moved to the main display area of the VR item display scene for display, the method further includes:
receiving a second operation signal for the target item;
when the second operation signal is received, displaying an operation object three-dimensional model;
reading an interactive action file corresponding to the target object according to the second operation signal, wherein the interactive action file is used for controlling an operation object to interact with the target object;
and controlling the three-dimensional model of the operation object to interact with the target three-dimensional model according to the interaction action file, wherein the interaction mode comprises wearing interaction and using interaction.
Optionally, the receiving a selection signal of a target item in the VR item display scene includes:
displaying a selection cursor corresponding to a VR input device in the VR article display scene;
determining the object pointed by the selection cursor as the target object;
when a control signal sent by the VR input device is received, determining the control signal as the selection signal of the target item.
Optionally, the receiving a selection signal of a target item in the VR item display scene includes:
acquiring an eyeball motion track;
if the eyeball motion track points to an article and the stay time is longer than a threshold value, determining the article pointed by the eyeball motion track as the target article;
when a control signal sent by a VR input device is received, the control signal is determined to be the selection signal of the target item.
Optionally, after moving the target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display according to the selection signal, the method further includes:
when a third operation signal to the target object is received, the target three-dimensional model is moved to the original display position of the target object.
In a second aspect, there is provided an article display apparatus, the apparatus comprising:
a first display module configured to display a virtual reality (VR) item display scene, the VR item display scene including a main display area and a three-dimensional model of at least one item;
a first receiving module configured to receive a selection signal for a target item in the VR item display scene;
and the display module is configured to move the target three-dimensional model corresponding to the target object to the main display area of the VR object display scene for display according to the selection signal.
Optionally, the display module includes:
a coordinate acquisition unit configured to acquire region coordinates of the main display region;
an amplification unit configured to perform amplification processing on the target three-dimensional model;
and the display unit is configured to move the amplified target three-dimensional model to the main display area for display according to the area coordinates.
Optionally, the apparatus further includes:
a second receiving module configured to receive a rotation signal for the target item;
a first reading module, configured to read a rotation motion file corresponding to the target object according to a rotation direction indicated by the rotation signal, where at least two rotation motion files corresponding to the target object are stored in the VR device, and different rotation motion files correspond to different rotation directions;
a first control module configured to control the target three-dimensional model to rotate according to the rotation motion file.
Optionally, the apparatus further comprises:
a third receiving module configured to receive a first operation signal for the target item;
the second reading module is configured to read an explosion action file corresponding to the target object according to the first operation signal, wherein the explosion action file is used for controlling the target object to be decomposed into parts;
and the second control module is configured to control the target three-dimensional model to be decomposed into a plurality of part models according to the explosion action file.
Optionally, the apparatus further comprises:
a fourth receiving module configured to receive a second operation signal for the target item;
a display module configured to display the three-dimensional model of the operation object when the second operation signal is received;
a third reading module, configured to read an interaction action file corresponding to the target object according to the second operation signal, where the interaction action file is used to control an operation object to interact with the target object;
and the third control module is configured to control the operation object three-dimensional model to interact with the target three-dimensional model according to the interaction action file, wherein the interaction mode comprises wearing interaction and using interaction.
Optionally, the first receiving module is configured to:
displaying a selection cursor corresponding to a VR input device in the VR article display scene;
determining the object pointed by the selection cursor as the target object;
when a control signal sent by the VR input device is received, determining the control signal as the selection signal of the target item.
Optionally, the first receiving module is configured to:
acquiring an eyeball motion track;
if the eyeball motion track points to an article and the stay time is longer than a threshold value, determining the article pointed by the eyeball motion track as the target article;
when a control signal sent by a VR input device is received, the control signal is determined to be the selection signal of the target item.
Optionally, the apparatus further comprises:
the resetting module is configured to move the target three-dimensional model to the original display position of the target item when receiving a third operation signal to the target item.
In a third aspect, there is provided an article display apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
displaying a VR item display scene including a main display area and a three-dimensional model of at least one item;
receiving a selection signal for a target item in the VR item display scene;
and moving the target three-dimensional model corresponding to the target object to the main display area of the VR object display scene for display according to the selection signal.
In a fourth aspect, there is provided a computer readable medium having stored thereon program instructions which, when executed by a processor, implement the article presentation method as described in the first aspect above.
The technical scheme provided by the embodiment of the disclosure has the following beneficial effects:
in the embodiment of the disclosure, by constructing the VR article display scene and the three-dimensional model of each article in advance, a user wearing the VR device can simulate viewing each article as in a real article display scene, and can select a specific article to view according to the user's own needs.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be obtained by those of ordinary skill in the art from these drawings without creative effort.
Fig. 1 is a schematic diagram of a VR system provided by an exemplary embodiment of the present disclosure;
FIG. 2 is a flow chart of an item display method provided by an exemplary embodiment of the present disclosure;
fig. 3 is a schematic diagram of a VR item display scenario provided by an exemplary embodiment of the present disclosure;
FIG. 4 is a flow chart of an item display method provided by another exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a VR input device provided by an exemplary embodiment of the present disclosure;
FIG. 6 is a flow chart of an item display method provided by another exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an implementation of a process for decomposing a three-dimensional model of an object into component models;
FIG. 8 is a flow chart of an item display method provided by another exemplary embodiment of the present disclosure;
FIG. 9 is a schematic diagram of an implementation of a process for interaction of an operation object with a target object;
FIG. 10 is a flow chart of an item display method provided by another exemplary embodiment of the present disclosure;
FIG. 11 is a block diagram of an article display device provided in accordance with an exemplary embodiment of the present disclosure;
fig. 12 is a block diagram of an article display device shown in accordance with an exemplary embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Please refer to fig. 1, which illustrates a schematic structural diagram of a VR system according to an embodiment of the present disclosure. The VR system includes: VR host 120, display device 140, and VR input device 160.
The VR host 120 is used for modeling a three-dimensional virtual environment, generating a three-dimensional display picture corresponding to the three-dimensional virtual environment, generating virtual objects in the three-dimensional virtual environment, and the like. Of course, the VR host 120 may also model a two-dimensional virtual environment, generate a two-dimensional display picture corresponding to the two-dimensional virtual environment, and generate virtual objects in the two-dimensional virtual environment; alternatively, the VR host 120 may model a three-dimensional virtual environment and, according to the viewing position of the user, generate a two-dimensional display picture corresponding to the three-dimensional virtual environment together with two-dimensional projections of virtual objects in the three-dimensional virtual environment, which is not limited in this embodiment.
Alternatively, VR host 120 may be integrated within display device 140 or integrated in another device different from display device 140. Wherein the other device may be a desktop computer or a server, etc.
The VR host 120 is configured to receive an input signal of the VR input device 160, and display a selection cursor corresponding to the input device in the three-dimensional virtual environment according to the input signal, where the selection cursor may be an icon such as an arrow, a cross, or a virtual hand.
The VR host 120 is typically implemented by electronics disposed on a circuit board, such as a processor, a memory, and a graphics processor. Optionally, the VR host 120 further includes an image capture device for capturing the user's head movements, so that the display in the display device 140 changes according to those head movements.
The display device 140 is a display for wearing on the head of the user to display images. The display device 140 generally includes a wearing portion including temples and an elastic band for wearing the display device 140 on the head of a user, and a display portion including a left-eye display screen and a right-eye display screen. Optionally, the display device 140 may display different images on the left-eye display screen and the right-eye display screen, so as to simulate a three-dimensional virtual environment for the user; or directly display an environment screen of the three-dimensional virtual environment generated by the VR host 120. In this embodiment, an example is described in which the display device 140 directly displays an environment picture of a three-dimensional virtual environment generated by the VR host 120, where the environment picture shows a VR item display scene.
Optionally, a motion sensor is disposed on the display device 140 for capturing head movements of the user, so that the VR host 120 changes the environment picture displayed in the display device 140 according to the head movements of the user.
The display device 140 is electrically connected to the VR host 120 through a flexible circuit board or a hardware interface or a data line or a wireless network.
The VR input device 160 is an input peripheral for controlling virtual objects in the three-dimensional virtual environment. It can be at least one of a motion-sensing glove, a motion-sensing handle, a remote control, a treadmill, a mouse, a keyboard, and an eye-tracking device. The VR input device 160 typically includes physical keys, for example for activating and/or deactivating the input device, detecting whether a user is holding the input device, or invoking a menu bar, which are not enumerated one by one here.
Optionally, part or all of the physical keys may be implemented as virtual keys implemented by a touch screen, which is not limited in this embodiment.
Optionally, a motion sensor is disposed on the VR input device 160, and is configured to acquire a motion state of the VR input device 160, and send the motion state to the VR host 120 in the form of sensor data, so that the VR host 120 adjusts a position of a cursor of the input device according to the sensor data. The motion sensor may be any one of an acceleration sensor and an angular velocity sensor, and the number of each type of motion sensor may be one or more, which is not limited in this embodiment.
The VR input device 160 is connected to the VR host 120 via cable, Bluetooth, or Wi-Fi (Wireless-Fidelity).
For convenience of description, the article display method provided by the various embodiments of the present disclosure is performed by a VR device in which the VR host 120 and the display device 140 shown in fig. 1 are integrated.
Fig. 2 is a flowchart of an article display method according to an exemplary embodiment of the present disclosure. This embodiment is exemplified by applying the method to a VR device, and the article display method includes the following steps:
in step 201, a VR item display scene is displayed, the VR item display scene including a main display area and a three-dimensional model of at least one item.
Optionally, the main display area is located in a central area of the VR item display scene and used for displaying an object selected by a user, and the three-dimensional models of the items are disposed in two side areas outside the central area.
When the VR device is powered on and receives an article display request, it reads the VR article display scene and the three-dimensional model data of each article, and performs rendering according to the three-dimensional model data, thereby displaying a VR article display scene provided with at least one article.
In this embodiment, a three-dimensional model of each article (to be displayed) and a scene three-dimensional model of a VR article display scene are pre-stored in the VR device, where each article corresponds to its display position in the VR article display scene.
Optionally, the VR device establishes a 3dmax model corresponding to the article according to the article contour data acquired during the article scanning, and presets display position information of the 3dmax model in the VR article display scene. When article display is carried out, VR equipment firstly renders a VR article display scene, and then displays a 3dmax model corresponding to the article on a corresponding position in the VR article display scene according to display position information of each article.
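For illustration only (this sketch is not part of the patent disclosure), the pre-stored association between each article, its three-dimensional model, and its preset display position might be organized as below; the Item class, file path, coordinates, and the scene.load_model/set_position calls are all hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Item:
    name: str                 # e.g. "table lamp"
    model_path: str           # pre-built 3dmax model for the article
    display_position: tuple   # (x, y, z) preset in the VR article display scene

# Pre-stored registry of articles to be displayed.
ITEMS = [
    Item("table lamp", "models/table_lamp.fbx", (120.0, 80.0, 40.0)),
]

def display_scene(scene, items):
    """Render the scene first, then place each article's model at its preset position."""
    scene.render()
    for item in items:
        model = scene.load_model(item.model_path)  # assumed loader on the scene object
        model.set_position(item.display_position)  # assumed placement call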
In one possible implementation, the VR item display scenario is established based on a usage scenario of the item. For example, when the article is a home article, a VR article display scene with the home scene as a background is pre-constructed, and a display position of the article in the VR article display scene is set according to a common position of the article in the home scene.
Optionally, the three-dimensional model of each article in the VR article display scene also displays article information corresponding to the article in a suspended manner, where the article information includes an article name, a model, a function, and the like.
Schematically, as shown in fig. 3, a three-dimensional model 311 corresponding to a table lamp and item information 312 are displayed in the VR item display scene 31.
In step 202, a selection signal for a target item in a VR item display scene is received.
While viewing the VR article display scene through the VR device, the user selects a target article in the scene according to the display requirement; correspondingly, the VR device receives a selection signal for the target article.
In a possible implementation manner, the VR device is connected to the VR input device, when a user needs to select a target item in a VR item display scene, the user performs a selection operation through the VR input device, and correspondingly, the VR device receives a selection signal sent by the VR input device.
In step 203, the target three-dimensional model corresponding to the target item is moved to a main display area of a VR item display scene for display according to the selection signal.
In order to achieve a better display effect, the VR equipment enlarges a target three-dimensional model corresponding to a target article, and then moves the enlarged target three-dimensional model from an original display position to a main display area for display.
Optionally, the step includes the following steps:
firstly, obtaining the area coordinates of the main display area.
In a possible implementation manner, a spatial rectangular coordinate system of a VR article display scene is established in advance, and when a selection signal is received, the VR device acquires region coordinates of a main display region in the VR article display scene.
Illustratively, as shown in fig. 3, a spatial rectangular coordinate system is established based on the VR article display scene, and the VR device obtains the region coordinates (250, 225, 175) of the main display region.
And secondly, amplifying the target three-dimensional model.
In one possible implementation, the VR device zooms in on the target three-dimensional model of the original first size to a second size according to a predetermined zoom ratio. And the amplified target three-dimensional model with the second size is matched with the main display area.
For example, the VR device performs enlargement processing on an original 20 × 20 × 60 target three-dimensional model according to a predetermined enlargement ratio (a factor of 3 in this example), obtaining a 60 × 60 × 180 target three-dimensional model.
And thirdly, moving the amplified target three-dimensional model to a main display area for display according to the area coordinates.
Further, the VR device moves the enlarged target three-dimensional model to the main display area for display. Illustratively, as shown in fig. 3, when the VR device receives a selection signal for the three-dimensional model 311 corresponding to the desk lamp, it enlarges the three-dimensional model 311 and moves the enlarged three-dimensional model 311 to the main display area according to the coordinates of the center position.
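As a minimal sketch of the three steps above (region coordinates, enlargement, movement), using a toy Model class; the 3x ratio and the coordinates come from the examples in the text, and everything else is an illustrative assumption rather than the patented implementation.

SCALE_RATIO = 3.0                      # 20 x 20 x 60 -> 60 x 60 x 180, per the example
MAIN_DISPLAY_COORDS = (250, 225, 175)  # region coordinates of the main display area (fig. 3)

class Model:
    """Toy stand-in for a target three-dimensional model."""
    def __init__(self, size, position):
        self.size, self.position = size, position

    def scale(self, ratio):
        self.size = tuple(d * ratio for d in self.size)  # step two: enlargement processing

    def move_to(self, coords):
        self.position = coords                           # step three: move per region coordinates

lamp = Model(size=(20, 20, 60), position=(120, 80, 40))
lamp.scale(SCALE_RATIO)
lamp.move_to(MAIN_DISPLAY_COORDS)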
Optionally, the VR item display scene further includes a virtual character model, after receiving the selection signal, the VR device controls the virtual character model to move to the target item according to the position of the target item, and plays a prestored animation for picking up the target item, and after the animation is played, the VR device performs the step of displaying the target item in an enlarged manner, so as to add interest to item display.
In summary, in the embodiment of the present disclosure, by constructing the VR item display scene and the three-dimensional models of the items in advance, a user wearing the VR device can simulate viewing the items as in a real item display scene, and can select a designated item to view according to the user's own needs.
Fig. 4 is a flowchart of an article display method according to an exemplary embodiment of the present disclosure. This embodiment is exemplified by applying the method to a VR device, and the article display method includes the following steps:
in step 401, a VR item display scene is displayed, the VR item display scene including a main display area and a three-dimensional model of at least one item.
The implementation of this step is similar to step 201, and this embodiment is not described herein again.
In step 402, a selection signal for a target item in a VR item display scene is received.
Alternatively, there are two ways of selecting the target item. In a first implementation manner, a VR device is connected with a VR input device, the VR device determines a target object according to the position of a selection cursor corresponding to the VR input device, and selects the target object through the VR input device; in a second implementation manner, the VR device has an eyeball trajectory tracking function, determines a target object according to the eyeball motion trajectory, and selects the target object through the VR input device.
For the first implementation, the method includes the following steps:
step one, displaying a selection cursor corresponding to VR input equipment in a VR article display scene.
Wherein the VR input device may be the VR input device 160 as shown in fig. 1. During the operation of the VR equipment, the VR input equipment collects motion data through the motion sensor and sends the motion data to the VR equipment, and the VR equipment determines the motion state of the VR input equipment according to the motion data, so that the display position of a selection cursor corresponding to the VR input equipment in a VR article display scene is adjusted in real time. Wherein, the selection cursor can be an arrow, a cross or a virtual hand and other icons.
Illustratively, as shown in fig. 3, when the VR device is connected to the VR input device, a selection cursor 313 is displayed in the VR item display scene.
Step two, determining the article pointed to by the selection cursor as the target article.
further, the VR device determines an area pointed by the selection cursor, and determines the item in the area as the target item.
Optionally, the VR device highlights the target item pointed by the selection cursor to prompt the user of the currently selected target item.
And step three, when the control signal sent by the VR input equipment is received, determining the control signal as a selection signal of the target object.
And when the selected target object is determined, the user performs confirmation operation by using the VR input equipment.
In one possible implementation, the user makes the determination by pressing a predetermined key on the VR input device. When the VR input equipment receives the pressing operation of the preset key, the control signal is sent to the VR equipment, and correspondingly, after the VR equipment receives the control signal, the selection signal of the target object is determined to be received.
Illustratively, as shown in fig. 5, the confirmation key 511 on the VR input device 51 is bound to the function of selecting an item in advance, and when the user determines to select a target item, the confirmation key 511 can be pressed.
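A minimal sketch of this cursor-based flow, assuming a per-frame event handler; the highlight helper, the event name, and the cursor_item argument are hypothetical, not part of the patent.

def highlight(item):
    print(f"highlighting {item}")  # stand-in for the highlight prompt to the user

def on_input_event(cursor_item, event):
    """cursor_item: the article currently pointed at by the selection cursor, or None."""
    if cursor_item is not None:
        highlight(cursor_item)               # prompt the currently pointed-at article
        if event == "CONFIRM_KEY_PRESSED":   # control signal from the VR input device
            return cursor_item               # the control signal acts as the selection signal
    return None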
For the second implementation manner, the step includes the following steps:
step one, obtaining an eyeball motion track.
In one possible implementation, the VR device acquires image data of the eyes of the user in real time by using an infrared camera, extracts pupil data in the image data, and determines coordinates of the sight of the user projected to a VR article display scene. Further, the VR device determines an eye movement trajectory according to a coordinate set corresponding to the user's sight line obtained within a period of time, where the eye movement trajectory may be a trajectory curve determined according to the coordinate set.
And step two, if the eyeball motion track points to the object and the stay time is longer than the threshold value, determining the object pointed by the eyeball motion track as the target object.
Generally, when a user views an article, the line of sight stays on that article; accordingly, when the VR device determines that the eye movement trajectory points to an article, that is, that the user's line of sight is staying on it, the article can be determined as the target article.
Since determining the target article solely from the direction of the eye movement trajectory is prone to mis-selection, the VR device records the dwell time of the line of sight when the eye movement trajectory points to an article, and determines the article as the target article only when the dwell time is longer than the threshold. The threshold may be, for example, 5 s.
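A minimal sketch of dwell-based targeting, under the assumption that gaze samples arrive as (timestamp, article) pairs; the 5 s threshold mirrors the text, and all other names are illustrative.

DWELL_THRESHOLD_S = 5.0

def find_dwell_target(gaze_samples):
    """Return an article once the gaze has stayed on it longer than the threshold."""
    start_time, current = None, None
    for t, article in gaze_samples:
        if article is not current:           # gaze moved to a different article (or away)
            start_time, current = t, article
        elif article is not None and t - start_time >= DWELL_THRESHOLD_S:
            return article                   # dwelled long enough: candidate target article
    return None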
In one possible implementation manner, when determining an article pointed by the eyeball trajectory, the VR device displays a countdown animation at a predetermined position in a VR article display scene to prompt a user to determine to select a current article after the countdown is finished.
Illustratively, as shown in fig. 3, when it is determined that the eyeball trajectory points to the three-dimensional model 311 corresponding to the table lamp, the countdown animation is displayed at the predetermined position 314.
And step three, when the control signal sent by the VR input equipment is received, determining the control signal as a selection signal of the target object.
And when the selected target object is determined, the user performs confirmation operation by using the VR input equipment.
In one possible implementation, the user makes the determination by pressing a predetermined key on the VR input device. When the VR input equipment receives the pressing operation of the preset key, the control signal is sent to the VR equipment, and correspondingly, after the VR equipment receives the control signal, the selection signal of the target object is determined to be received.
Illustratively, as shown in fig. 5, the confirmation key 511 on the VR input device 51 is bound to the function of selecting an item in advance, and when the user determines to select a target item, the confirmation key 511 can be pressed.
Only the above two target-article selection manners are described here as examples; in other possible embodiments, the target article may also be selected by voice control or by pressing a physical key, which is not limited in this embodiment.
In step 403, according to the selection signal, the target three-dimensional model corresponding to the target item is moved to the main display area of the VR item display scene for display.
The implementation of this step is similar to that of step 203, and this embodiment is not described herein again.
In step 404, a rotation signal for the target item is received.
In order to facilitate the user to view the target object from various angles, the VR device further receives a rotation signal for the target object, and controls the target three-dimensional model of the target object to rotate according to the rotation signal.
In one possible implementation, the VR device is coupled to the VR input device, and the directional buttons of the VR input device are bound to the rotation function. When the pressing operation of the direction key is received, the VR input device sends a corresponding pressing signal to the VR device, and correspondingly, the VR device receives a rotating signal of the target object.
Illustratively, as shown in fig. 5, when receiving a selection signal from the user to the direction key 512, the VR input device sends a left rotation signal to the VR device, and accordingly, the VR device receives a left rotation signal to the target object.
In step 405, the rotation motion file corresponding to the target object is read according to the rotation direction indicated by the rotation signal, at least two rotation motion files corresponding to the target object are stored in the VR device, and different rotation motion files correspond to different rotation directions.
And after receiving the rotation signal, the VR equipment determines the rotation direction indicated by the rotation signal, and further determines how to control the target three-dimensional model to rotate according to the rotation direction.
In a possible implementation manner, the VR device stores a corresponding relationship between the rotation direction and the rotation motion file, and after the rotation direction is determined, the rotation motion file corresponding to the target object is determined according to the corresponding relationship, and then the rotation motion file is read. The rotating action files contain rotating instructions for controlling the target object to rotate corresponding to the target three-dimensional model, and the number of the rotating action files corresponding to different objects is different.
Illustratively, the correspondence between rotation directions and rotation action files stored in the VR device is shown in Table 1:
Table 1
Rotation direction      Rotation action file
Left                    First rotation action file
Right                   Second rotation action file
Up                      Third rotation action file
Down                    Fourth rotation action file
For example, if the VR device determines that the rotation direction indicated by the rotation signal is a left rotation, it determines from Table 1 that the corresponding rotation action file is the first rotation action file, and reads that file.
In step 406, the target three-dimensional model is controlled to rotate according to the rotation motion file.
Correspondingly, after the VR device reads the rotation action file, the target three-dimensional model is controlled to rotate according to the rotation instruction in the rotation action file.
In one possible implementation, the rotation instruction is used to instruct the target three-dimensional model to rotate at a predetermined rotation speed for a predetermined period of time, and the rotation process is based on a predetermined rotation axis and a predetermined rotation direction. For example, the predetermined rotation speed is 0.2r/s and the predetermined time period is 30 s. The direction of the rotation axis is related to the rotation direction, for example, the rotation axis is a vertical direction when the rotation direction is a left-right direction, and the rotation axis is a horizontal direction when the rotation direction is an up-down direction.
Illustratively, as shown in fig. 3, when the rotation command in the first rotation motion file is executed, the target three-dimensional model 311 rotates to the left around the rotation axis 321, the rotation speed is 0.2r/s, and the rotation time period is 30 s.
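As an illustrative sketch (not the patented implementation) combining the Table 1 lookup with the example rotation parameters (0.2 r/s for 30 s, axis chosen by direction); the file names and the model.rotate call are assumptions.

ROTATION_FILES = {
    "left":  "first_rotation_action_file",
    "right": "second_rotation_action_file",
    "up":    "third_rotation_action_file",
    "down":  "fourth_rotation_action_file",
}

def rotate_target(model, direction, speed_rps=0.2, duration_s=30.0):
    """Look up the rotation action file per Table 1 and run its rotation instruction."""
    action_file = ROTATION_FILES[direction]
    # Axis choice per the text: vertical for left/right, horizontal for up/down.
    axis = "vertical" if direction in ("left", "right") else "horizontal"
    print(f"running {action_file}: rotate about the {axis} axis "
          f"at {speed_rps} r/s for {duration_s} s")
    model.rotate(axis=axis, speed_rps=speed_rps, duration_s=duration_s)  # assumed API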
In other possible implementation manners, after the target three-dimensional model is displayed in an enlarged manner, the VR device receives the motion data sent by the VR input device, and determines the motion direction of the VR input device according to the motion data, so as to control the target three-dimensional model to rotate according to the motion direction, which is not limited in this embodiment.
In this embodiment, the VR device determines the target object selected by the user according to the pointing direction of the corresponding selection cursor of the VR input device, or according to the pointing direction of the eye movement track, and determines to receive the selection signal of the target object when receiving the control signal sent by the VR input device, so as to display the target object.
In this embodiment, the VR device can read the corresponding rotation action file according to the rotation signal sent by the VR input device, and then control the target three-dimensional model to rotate automatically according to the rotation instruction in the rotation action file, so that the automatic rotation display of the target object is realized, and the interaction convenience is further improved.
In a possible implementation manner, when a user needs to know the detailed structure of the target object, the VR device controls the target three-dimensional model corresponding to the target object to be decomposed into a plurality of part models and displayed according to the received operation signal. Illustratively, as shown in fig. 6, the step 403 may further include the following steps.
In step 407, a first operation signal for a target item is received.
In one possible implementation, the VR device is coupled to the VR input device, and the predetermined key of the VR input device is bound to the model decomposition function. After the target three-dimensional model is displayed in an amplifying mode, when a trigger signal of a user to a preset key is received, the VR input device sends a first operation signal to the VR device, and correspondingly, the VR device receives the first operation signal.
In step 408, according to the first operation signal, an explosion action file corresponding to the target object is read, where the explosion action file is used to control the target object to be decomposed into components.
In one possible implementation, an explosion action file corresponding to each article is stored in the VR device, and the explosion action file is used for controlling the decomposition of the three-dimensional model of the object into a plurality of part models, that is, for controlling the explosion decomposition of the whole three-dimensional model into a plurality of sub-models.
Correspondingly, after receiving the first operation signal, the VR device reads the explosion action file corresponding to the target object.
In step 409, the control target three-dimensional model is decomposed into a plurality of part models according to the explosion action file.
Furthermore, after the VR device reads the explosion action file, the VR device runs the explosion instruction in the explosion action file. Optionally, the explosion instruction includes a displacement direction and a displacement distance corresponding to each part model in the target three-dimensional model, and the corresponding VR device controls each part model in the target three-dimensional model to displace in different directions according to the explosion instruction, so as to show a process of decomposing the target object into a plurality of parts.
Illustratively, as shown in fig. 7, the target three-dimensional model 311 may be decomposed into a first component model 331, a second component model 332, and a third component model 333, where an explosion command in the explosion action file is used to control the first component model 331 to move 20cm upward, the second component model 332 to remain unchanged, and the third component model 333 to move 20cm downward. When the explosion action file is operated, the VR device controls the first part model 331 to move upwards by 20cm and the third part model 333 to move downwards by 20cm according to the explosion instruction.
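A minimal sketch of how an explosion action file's per-part displacement entries might look, reproducing the example above (first part up 20 cm, second unchanged, third down 20 cm); the data layout and the move_part callback are assumptions.

EXPLOSION_ACTIONS = [
    {"part": "first_component",  "direction": (0, 0, +1), "distance_cm": 20},
    {"part": "second_component", "direction": (0, 0, 0),  "distance_cm": 0},
    {"part": "third_component",  "direction": (0, 0, -1), "distance_cm": 20},
]

def run_explosion(move_part):
    """move_part(name, direction, distance_cm) displaces one part model."""
    for action in EXPLOSION_ACTIONS:
        move_part(action["part"], action["direction"], action["distance_cm"])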
In a possible implementation manner, after the VR device decomposes the target three-dimensional model into a plurality of component models, when the VR device receives the rotation signal of the target object again, the VR device controls the plurality of component models to rotate according to the rotation motion file, so that the user can view each component from different directions. The plurality of component models may rotate around the same rotation axis or may rotate around respective rotation axes.
In this embodiment, the VR device can read the corresponding explosion action file according to the first operation signal sent by the VR input device, so as to control the target three-dimensional model to be decomposed into a plurality of component models according to the explosion instruction in the explosion action file, thereby realizing the automatic display of the process of decomposing the target object into the components and improving the efficiency of the user for checking the object composition structure.
In one possible implementation manner, when the user needs to know the use manner of the target object, the VR device displays the use process of the target object according to the second operation signal triggered by the user.
Illustratively, as shown in fig. 8, the step 403 may further include the following steps:
in step 410, a second operational signal is received for the target item.
In this embodiment, the preset keys of the VR input device are bound to the use process display function. After the target three-dimensional model is displayed in an amplifying mode, when a trigger signal of a user to a preset key is received, the VR input device sends a second operation signal to the VR device, and correspondingly, the VR device receives the second operation signal.
In step 411, when the second operation signal is received, the three-dimensional model of the operation object is displayed.
And when the second operation signal is received, the VR device displays the three-dimensional model of the operation object in the VR article display scene. Wherein the three-dimensional model of the operation object can be one of a finger three-dimensional model, a palm three-dimensional model or a character object three-dimensional model.
Illustratively, as shown in fig. 9, when receiving the second operation signal, the VR device displays an operation object 341 in the VR article display scene.
In step 412, according to the second operation signal, an interaction file corresponding to the target object is read, where the interaction file is used to control the operation object to interact with the target object.
Correspondingly, in order to show the interaction process of the operation object and the target object, the VR device reads an interaction file corresponding to the target object, wherein the interaction file includes an interaction instruction for controlling the interaction between the operation object and the target object.
Optionally, for different target items, the interaction modes of the operation object and the target item are different, where the interaction modes include wearing interaction, using interaction, and the like.
For example, when the target object is a wearable device, the interaction manner between the operation object and the target object is wearing interaction (process of wearing the wearable device is shown); when the target object is the smart home device, the interaction mode of the operation object and the target object is the use interaction (the use method of using the smart home device is shown).
It should be noted that this step may be performed simultaneously with step 411 or before step 413, and the timing of performing step 411 and step 412 is not limited in this embodiment.
In step 413, the three-dimensional model of the operation object is controlled to interact with the target three-dimensional model according to the interaction action file.
After the VR equipment reads the interactive action file corresponding to the target object, the interactive instruction in the interactive action file is operated, and therefore the three-dimensional model of the operation object is controlled to interact with the target three-dimensional model.
Illustratively, as shown in fig. 9, the target table lamp is lit by touching its top; after the interactive instruction is run, the VR device displays the operation object 341 performing a touch operation on the three-dimensional model 311 corresponding to the table lamp, and the table lamp is lit accordingly.
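A minimal sketch of an interaction action file as an ordered list of interaction instructions, reproducing the table-lamp example (the operation object touches the lamp top, then the lamp lights up); the instruction format and the apply_step callback are assumptions.

INTERACTION_ACTIONS = [
    ("operation_object", "touch",     "lamp_top"),
    ("table_lamp",       "set_state", "lit"),
]

def run_interaction(apply_step):
    """apply_step(actor, verb, target) executes one interaction instruction."""
    for actor, verb, target in INTERACTION_ACTIONS:
        apply_step(actor, verb, target)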
In this embodiment, the VR device may read the corresponding interaction file according to the second operation signal sent by the VR input device, display the three-dimensional model corresponding to the operation object, and then control the three-dimensional model of the operation object to interact with the target three-dimensional model according to the interaction instruction in the interaction file, thereby implementing automatic display of the use mode of the target object, and improving the efficiency of the user in knowing the use method of the object.
In one possible implementation manner, when the user finishes checking the target object and needs to check other objects, the VR device performs a reset operation on the target object according to the received operation signal. Illustratively, as shown in fig. 10, the step 403 further includes the following steps.
In step 414, when the third operation signal to the target object is received, the target three-dimensional model is moved to the original display position of the target object.
In one possible implementation, when receiving the third operation signal to the target item, the VR device first reduces the target three-dimensional model and then moves the reduced target three-dimensional model to the original display position of the target item.
Optionally, a preset key of the VR input device is bound to the reset function, when a trigger operation of the user on the preset key is received, the VR input device sends a third operation signal to the VR device, and correspondingly, the VR device receives the third operation signal and reduces the target three-dimensional model.
Illustratively, as shown in fig. 5, when the VR input device 51 receives a pressing operation of the reset key 513 by the user, the third operation signal is sent to the VR device, so that the VR device reduces the target three-dimensional model when receiving the third operation signal.
Furthermore, the VR device stores the coordinates of the original display position of each article, and after the target three-dimensional model is reduced, the VR device displays the reduced target three-dimensional model on the original display position corresponding to the target object.
Optionally, a reset action file for reducing and replacing the target three-dimensional model is stored in the VR device, and after receiving the third operation signal, the VR device reads the reset action file and executes a reset instruction in the reset action file, so that the target three-dimensional model is controlled to be reduced and moved to the original display position according to the reset instruction.
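A minimal sketch of the reset flow, reusing the toy Model class from the earlier sketch; the 1/3 shrink factor (the inverse of the assumed enlargement) and the stored position table are illustrative assumptions.

SHRINK_RATIO = 1.0 / 3.0                            # inverse of the assumed 3x enlargement
ORIGINAL_POSITIONS = {"table lamp": (120, 80, 40)}  # stored original display coordinates

def reset_item(model, name):
    """Shrink the enlarged model and return it to its original display position."""
    model.scale(SHRINK_RATIO)                # reduce the target three-dimensional model
    model.move_to(ORIGINAL_POSITIONS[name])  # move back to the original display position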
In this embodiment, the VR device reduces the enlarged target three-dimensional model according to the received third operation signal and displays the reduced model at its original display position, so that the user can continue to select and view the three-dimensional models corresponding to other articles. This completes the article display process; controlling the reduction and resetting of articles through the VR input device improves the interactivity of article display.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Referring to fig. 11, a block diagram of an article display device provided by one embodiment of the present disclosure is shown. The apparatus may be implemented as all or part of a VR device in software, hardware, or a combination of both. The device includes: a first display module 1110, a first receiving module 1120, and a display module 1130.
A first display module 1110 configured to display a virtual reality (VR) item display scene, the VR item display scene including a main display area and a three-dimensional model of at least one item;
a first receiving module 1120 configured to receive a selection signal for a target item in the VR item display scenario;
a display module 1130, configured to move the target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display according to the selection signal.
Optionally, the display module 1130 includes:
a coordinate acquisition unit configured to acquire region coordinates of the main display region;
an amplification unit configured to perform amplification processing on the target three-dimensional model;
and the display unit is configured to move the amplified target three-dimensional model to the main display area for display according to the area coordinates.
Optionally, the apparatus further includes:
a second receiving module configured to receive a rotation signal for the target item;
a first reading module, configured to read a rotation motion file corresponding to the target object according to a rotation direction indicated by the rotation signal, where at least two rotation motion files corresponding to the target object are stored in the VR device, and different rotation motion files correspond to different rotation directions;
a first control module configured to control the target three-dimensional model to rotate according to the rotation motion file.
Optionally, the apparatus further comprises:
a third receiving module configured to receive a first operation signal for the target item;
the second reading module is configured to read an explosion action file corresponding to the target object according to the first operation signal, wherein the explosion action file is used for controlling the target object to be decomposed into parts;
and the second control module is configured to control the target three-dimensional model to be decomposed into a plurality of part models according to the explosion action file.
Optionally, the apparatus further comprises:
a fourth receiving module configured to receive a second operation signal for the target item;
a display module configured to display the three-dimensional model of the operation object when the second operation signal is received;
a third reading module, configured to read an interaction action file corresponding to the target object according to the second operation signal, where the interaction action file is used to control an operation object to interact with the target object;
and the third control module is configured to control the operation object three-dimensional model to interact with the target three-dimensional model according to the interaction action file, wherein the interaction mode comprises wearing interaction and using interaction.
Optionally, the first receiving module 1120 is configured to:
displaying a selection cursor corresponding to a VR input device in the VR article display scene;
determining the object pointed by the selection cursor as the target object;
when a control signal sent by the VR input device is received, determining the control signal as the selection signal of the target item.
Optionally, the first receiving module 1120 is configured to:
acquiring an eyeball motion track;
if the eyeball motion track points to an article and the stay time is longer than a threshold value, determining the article pointed by the eyeball motion track as the target article;
when a control signal sent by a VR input device is received, the control signal is determined to be the selection signal of the target item.
Optionally, the apparatus further comprises:
the resetting module is configured to move the target three-dimensional model to the original display position of the target item when receiving a third operation signal to the target item.
In summary, in the embodiment of the present disclosure, by constructing the VR item display scene and the three-dimensional models of the items in advance, a user wearing the VR device can simulate viewing the items as in a real item display scene, and can select a designated item to view according to the user's own needs.
In this embodiment, the VR device determines the target object selected by the user according to the pointing direction of the corresponding selection cursor of the VR input device, or according to the pointing direction of the eye movement track, and determines to receive the selection signal of the target object when receiving the control signal sent by the VR input device, so as to display the target object.
In this embodiment, the VR device can read the corresponding rotation action file according to the rotation signal sent by the VR input device, and then control the target three-dimensional model to rotate automatically according to the rotation instruction in the rotation action file, so that the automatic rotation display of the target object is realized, and the interaction convenience is further improved.
In this embodiment, the VR device reads the explosion action file corresponding to the first operation signal sent by the VR input device and controls the target three-dimensional model to decompose into a plurality of part models according to the explosion instructions in that file, so that the process of breaking the target item down into parts is displayed automatically and the user can inspect the item's structure more efficiently.
In this embodiment, the VR device reads the interaction action file corresponding to the second operation signal sent by the VR input device, displays the three-dimensional model corresponding to the operation object, and controls that model to interact with the target three-dimensional model according to the interaction instructions in the file, so that the way the target item is used is demonstrated automatically and the user learns the item's usage more efficiently.
In this embodiment, upon receiving the third operation signal, the VR device shrinks the enlarged target three-dimensional model and displays it at its original display position, so that the user can go on to select and view the three-dimensional models of other items. Because the shrinking and resetting of the item are controlled by the VR device itself, the interactivity of the item display is improved.
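A minimal sketch of this reset step, assuming the original display position and scale were recorded when the item was selected (names are illustrative):

```kotlin
data class Vec3(val x: Float, val y: Float, val z: Float)
class TargetModel(var position: Vec3, var scale: Float)

// Resetting module (sketch): on the third operation signal, undo the
// enlargement and return the model to its original display position.
class ResetModule(
    private val originalPosition: Vec3,
    private val originalScale: Float = 1f,
) {
    fun onThirdOperationSignal(model: TargetModel) {
        model.scale = originalScale
        model.position = originalPosition
    }
}
```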
Fig. 12 is a block diagram illustrating an article display apparatus 1200 according to an exemplary embodiment. For example, the apparatus 1200 may be implemented as a VR device.
Referring to fig. 12, the apparatus 1200 may include one or more of the following components: processing component 1202, memory 1204, power component 1206, multimedia component 1208, audio component 1210, input/output (I/O) interface 1212, sensor component 1214, and communications component 1216.
The processing component 1202 generally controls the overall operation of the apparatus 1200, such as operations associated with display, telephone calls, data communications, camera operation, and recording. The processing component 1202 may include one or more processors 1220 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1202 may include one or more modules that facilitate interaction between the processing component 1202 and other components. For example, the processing component 1202 may include a multimedia module to facilitate interaction between the multimedia component 1208 and the processing component 1202.
The memory 1204 is configured to store various types of data to support the operation of the apparatus 1200. Examples of such data include instructions for any application or method operating on the apparatus 1200, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1204 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power component 1206 provides power to the various components of the apparatus 1200. The power component 1206 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the apparatus 1200.
The multimedia component 1208 includes a screen that provides an output interface between the apparatus 1200 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or slide action but also the duration and pressure associated with the touch or slide operation. In this embodiment, the multimedia component 1208 includes a front camera and a rear camera, each of which may be a fixed optical lens system or have focusing and optical zoom capability.
Audio component 1210 is configured to output and/or input audio signals. For example, audio component 1210 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1200 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1204 or transmitted via the communication component 1216. In some embodiments, audio assembly 1210 further includes a speaker for outputting audio signals.
The I/O interface 1212 provides an interface between the processing component 1202 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1214 includes one or more sensors for providing various aspects of state assessment for the apparatus 1200. For example, the sensor assembly 1214 may detect the open/closed state of the apparatus 1200 and the relative positioning of components, such as the display and keypad of the apparatus 1200. The sensor assembly 1214 may also detect a change in the position of the apparatus 1200 or of a component of the apparatus 1200, the presence or absence of user contact with the apparatus 1200, the orientation or acceleration/deceleration of the apparatus 1200, and a change in the temperature of the apparatus 1200. The sensor assembly 1214 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1214 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1214 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1216 is configured to facilitate communications between the apparatus 1200 and other devices in a wired or wireless manner. The apparatus 1200 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1216 receives the broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1216 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1200 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as memory 1204 comprising instructions, executable by processor 1220 of apparatus 1200 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Embodiments of the present disclosure also provide a computer-readable medium having stored thereon program instructions that, when executed by a processor, implement the article display method provided by the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. A method of displaying an item, the method comprising:
displaying a virtual reality (VR) item display scene, wherein the VR item display scene is established based on a usage scenario of the item, and the VR item display scene comprises a main display area, a three-dimensional model of at least one item, and a virtual character model;
receiving a selection signal for a target item in the VR item display scene;
moving a target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display according to the selection signal, controlling the virtual character model to move to the target item according to the position of the target item, playing a prestored animation of picking up the target item, and displaying the target item in an enlarged manner after the animation is played;
wherein the method further comprises:
controlling the target three-dimensional model corresponding to the target item to decompose into a plurality of part models according to a received first operation signal for the target item;
when a rotation signal for the target item is received, reading a rotation action file corresponding to the target item according to a rotation direction indicated by the rotation signal, wherein at least two rotation action files corresponding to the target item are stored in a VR device, and different rotation action files correspond to different rotation directions;
and controlling the plurality of part models to rotate around a same rotation axis, or each around a corresponding rotation axis, according to the rotation action file.
2. The method of claim 1, wherein moving the target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display comprises:
acquiring the area coordinates of the main display area;
enlarging the target three-dimensional model;
and moving the enlarged target three-dimensional model to the main display area for display according to the area coordinates.
3. The method of claim 1 or 2, wherein after moving the target three-dimensional model corresponding to the target item into the main display area of the VR item display scene for display, the method further comprises:
receiving a rotation signal for the target item;
reading a rotation action file corresponding to the target item according to the rotation direction indicated by the rotation signal, wherein at least two rotation action files corresponding to the target item are stored in the VR device, and different rotation action files correspond to different rotation directions;
and controlling the target three-dimensional model to rotate according to the rotation action file.
4. The method according to claim 1 or 2, wherein the controlling the target three-dimensional model corresponding to the target item to decompose into a plurality of part models according to the received first operation signal for the target item comprises:
reading an explosion action file corresponding to the target item according to the first operation signal, wherein the explosion action file is used to control decomposition of the target item into parts;
and controlling the target three-dimensional model to decompose into a plurality of part models according to the explosion action file.
5. The method of claim 1 or 2, wherein after moving the target three-dimensional model corresponding to the target item into the main display area of the VR item display scene for display, the method further comprises:
receiving a second operation signal for the target item;
when the second operation signal is received, displaying a three-dimensional model of an operation object;
reading an interaction action file corresponding to the target item according to the second operation signal, wherein the interaction action file is used to control the operation object to interact with the target item;
and controlling the three-dimensional model of the operation object to interact with the target three-dimensional model according to the interaction action file, wherein the interaction modes include wearing interaction and using interaction.
6. The method of claim 1 or 2, wherein the receiving a selection signal for a target item in the VR item display scene comprises:
displaying a selection cursor corresponding to a VR input device in the VR item display scene;
determining the item pointed to by the selection cursor as the target item;
and when a control signal sent by the VR input device is received, determining the control signal as the selection signal for the target item.
7. The method of claim 1 or 2, wherein the receiving a selection signal for a target item in the VR item display scene comprises:
acquiring an eye movement track;
if the eye movement track points to an item and the dwell time exceeds a threshold, determining the item pointed to by the eye movement track as the target item;
and when a control signal sent by a VR input device is received, determining the control signal as the selection signal for the target item.
8. The method of claim 1 or 2, wherein after moving the target three-dimensional model corresponding to the target item into the main display area of the VR item display scene for display according to the selection signal, the method further comprises:
when a third operation signal for the target item is received, moving the target three-dimensional model back to the original display position of the target item.
9. An article display apparatus, the apparatus comprising:
a first display module configured to display a virtual reality (VR) item display scene, wherein the VR item display scene is established based on a usage scenario of the item, and the VR item display scene comprises a main display area, a three-dimensional model of at least one item, and a virtual character model;
a first receiving module configured to receive a selection signal for a target item in the VR item display scene;
a display module configured to move a target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display according to the selection signal, control the virtual character model to move to the target item according to the position of the target item, play a prestored animation of picking up the target item, and display the target item in an enlarged manner after the animation is played;
the apparatus is further configured to:
controlling the target three-dimensional model corresponding to the target article to be decomposed into a plurality of part models according to the received first operation signal for the target article; when a rotation signal for a target article is received, reading a rotation action file corresponding to the target article according to a rotation direction indicated by the rotation signal, wherein at least two rotation action files corresponding to the target article are stored in VR equipment, and different rotation action files correspond to different rotation directions;
and controlling the plurality of part models to rotate around the same rotating shaft or around the corresponding rotating shafts according to the rotating motion file.
10. The apparatus of claim 9, wherein the display module comprises:
a coordinate acquisition unit configured to acquire region coordinates of the main display region;
an enlargement unit configured to enlarge the target three-dimensional model;
and a display unit configured to move the enlarged target three-dimensional model to the main display area for display according to the area coordinates.
11. The apparatus of claim 9 or 10, further comprising:
a second receiving module configured to receive a rotation signal for the target item;
a first reading module configured to read a rotation action file corresponding to the target item according to the rotation direction indicated by the rotation signal, wherein at least two rotation action files corresponding to the target item are stored in the VR device, and different rotation action files correspond to different rotation directions;
a first control module configured to control the target three-dimensional model to rotate according to the rotation motion file.
12. The apparatus of claim 9 or 10, further comprising:
a second reading module configured to read an explosion action file corresponding to the target item according to the first operation signal, wherein the explosion action file is used to control decomposition of the target item into parts;
and a second control module configured to control the target three-dimensional model to decompose into a plurality of part models according to the explosion action file.
13. The apparatus of claim 9 or 10, further comprising:
a fourth receiving module configured to receive a second operation signal for the target item;
a display module configured to display a three-dimensional model of an operation object when the second operation signal is received;
a third reading module configured to read an interaction action file corresponding to the target item according to the second operation signal, wherein the interaction action file is used to control the operation object to interact with the target item;
and a third control module configured to control the three-dimensional model of the operation object to interact with the target three-dimensional model according to the interaction action file, wherein the interaction modes include wearing interaction and using interaction.
14. The apparatus of claim 9 or 10, wherein the first receiving module is configured to:
display a selection cursor corresponding to a VR input device in the VR item display scene;
determine the item pointed to by the selection cursor as the target item;
and when a control signal sent by the VR input device is received, determine the control signal as the selection signal for the target item.
15. The apparatus of claim 9 or 10, wherein the first receiving module is configured to:
acquire an eye movement track;
if the eye movement track points to an item and the dwell time exceeds a threshold, determine the item pointed to by the eye movement track as the target item;
and when a control signal sent by a VR input device is received, determine the control signal as the selection signal for the target item.
16. The apparatus of claim 9 or 10, further comprising:
a resetting module configured to move the target three-dimensional model back to the original display position of the target item when a third operation signal for the target item is received.
17. An article display apparatus, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
displaying a virtual reality (VR) item display scene, wherein the VR item display scene is established based on a usage scenario of the item, and the VR item display scene comprises a main display area, a three-dimensional model of at least one item, and a virtual character model;
receiving a selection signal for a target item in the VR item display scene;
moving a target three-dimensional model corresponding to the target item to the main display area of the VR item display scene for display according to the selection signal, controlling the virtual character model to move to the target item according to the position of the target item, playing a prestored animation of picking up the target item, and displaying the target item in an enlarged manner after the animation is played;
the processor is further configured to:
controlling the target three-dimensional model corresponding to the target item to decompose into a plurality of part models according to a received first operation signal for the target item;
when a rotation signal for the target item is received, reading a rotation action file corresponding to the target item according to a rotation direction indicated by the rotation signal, wherein at least two rotation action files corresponding to the target item are stored in a VR device, and different rotation action files correspond to different rotation directions;
and controlling the plurality of part models to rotate around a same rotation axis, or each around a corresponding rotation axis, according to the rotation action file.
18. A computer-readable medium, having stored thereon program instructions which, when executed by a processor, carry out the method of displaying an item according to any one of claims 1 to 8.
CN201711306315.9A 2017-12-11 2017-12-11 Article display method and device Active CN108038726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711306315.9A CN108038726B (en) 2017-12-11 2017-12-11 Article display method and device

Publications (2)

Publication Number Publication Date
CN108038726A (en) 2018-05-15
CN108038726B (en) 2022-01-11

Family

ID=62101519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711306315.9A Active CN108038726B (en) 2017-12-11 2017-12-11 Article display method and device

Country Status (1)

Country Link
CN (1) CN108038726B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920778A (en) * 2018-06-12 2018-11-30 佛山欧神诺云商科技有限公司 A kind of modal position method of adjustment and its device
CN109242976A (en) * 2018-08-02 2019-01-18 实野信息科技(上海)有限公司 A method of based on the automatic rotary display of WebGL virtual reality
CN109254655A (en) * 2018-08-20 2019-01-22 北京京东金融科技控股有限公司 Device and method for article display
CN111949113B (en) * 2019-05-15 2024-10-29 阿里巴巴集团控股有限公司 Image interaction method and device applied to Virtual Reality (VR) scene
CN111538252B (en) * 2020-05-25 2021-09-21 厦门大学 Intelligent home demonstration system applying VR technology
CN111768269A (en) * 2020-06-22 2020-10-13 中国建设银行股份有限公司 Panoramic image interaction method and device and storage medium
CN111858740A (en) * 2020-07-14 2020-10-30 武汉欧特英吉工业有限公司 Multi-scene data visualization device and method
CN111782053B (en) * 2020-08-10 2023-04-28 Oppo广东移动通信有限公司 Model editing method, device, equipment and storage medium
CN112484242B (en) * 2020-11-30 2022-01-28 珠海格力电器股份有限公司 Air conditioner type selection method based on augmented reality and related device
CN113837833B (en) * 2021-09-18 2022-11-04 完美世界(北京)软件科技发展有限公司 Method and device for displaying articles based on role model
CN114895819B (en) * 2022-07-13 2022-09-13 麒砺创新技术(深圳)有限公司 Three-dimensional model intelligent display optimization method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894335A (en) * 2016-05-10 2016-08-24 曹屹 Three-dimensional display method and device for indoor articles
CN107330746A (en) * 2016-09-18 2017-11-07 安徽华陶信息科技有限公司 A kind of purchase method and system based on VR technologies

Similar Documents

Publication Publication Date Title
CN108038726B (en) Article display method and device
US11231845B2 (en) Display adaptation method and apparatus for application, and storage medium
US11880999B2 (en) Personalized scene image processing method, apparatus and storage medium
CN111970456B (en) Shooting control method, device, equipment and storage medium
CN107977083B (en) Operation execution method and device based on VR system
JP2019510321A (en) Virtual reality pass-through camera user interface elements
CN111045511B (en) Gesture-based control method and terminal equipment
JP2013141207A (en) Multi-user interaction with handheld projectors
US20240114214A1 (en) Video distribution system distributing video that includes message from viewing user
CN110751707B (en) Animation display method, animation display device, electronic equipment and storage medium
CN110782532B (en) Image generation method, image generation device, electronic device, and storage medium
US20240355071A1 (en) Mobile device and mobile device control method
CN113485626A (en) Intelligent display device, mobile terminal and display control method
CN115439171A (en) Commodity information display method and device and electronic equipment
KR20140102386A (en) Display apparatus and control method thereof
CN112783316A (en) Augmented reality-based control method and apparatus, electronic device, and storage medium
WO2024051063A1 (en) Information display method and apparatus and electronic device
CN116258544A (en) Information display method and device and electronic equipment
CN111782053B (en) Model editing method, device, equipment and storage medium
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
KR20220057388A (en) Terminal for providing virtual augmented reality and control method thereof
CN111782056B (en) Content sharing method, device, equipment and storage medium
EP4385589A1 (en) Method and ar glasses for ar glasses interactive display
CN110060355B (en) Interface display method, device, equipment and storage medium
CN115936800A (en) Commodity display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant