CN116258544A - Information display method and device and electronic equipment


Info

Publication number: CN116258544A
Authority: CN (China)
Prior art keywords: dimensional model, space, displaying, display, user
Legal status: Pending (assumed by Google; not a legal conclusion)
Application number: CN202211103140.2A
Other languages: Chinese (zh)
Inventor: 郭震
Current Assignee: Alibaba China Co Ltd
Original Assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd
Priority to CN202211103140.2A
Publication of CN116258544A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G06Q30/0601 - Electronic shopping [e-shopping]
    • G06Q30/0641 - Shopping interfaces
    • G06Q30/0643 - Graphical representation of items or shoppers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the present application discloses an information display method, an information display apparatus, and an electronic device. The method includes the following steps: in response to a request to display a target object in an augmented reality (AR) manner, loading a three-dimensional model corresponding to the target object and mapping it into an AR space for display, where the target object is a handheld-device-class object with a display screen; detecting an interactive operation performed by a user on the three-dimensional model through the AR space, where the interactive operation simulates an operation associated with starting a target function of the target object; and displaying, according to the detected interactive operation, interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located. Through the embodiments of the present application, the user can be given an experience closer to purchasing or trying out a commodity object offline, helping the user make shopping decisions.

Description

Information display method and device and electronic equipment
Technical Field
The present application relates to the technical field of augmented reality, and in particular to an information display method, an information display apparatus, and an electronic device.
Background
In commodity information service systems, images, video, live streaming, and the like are common ways of describing commodities, and a user can obtain feature information about a commodity from such descriptions, which further helps the user make purchasing decisions. In recent years, schemes that display commodity information through a three-dimensional model have also appeared: by reconstructing the commodity in three dimensions, the client of the commodity information service system can show the user a dynamic three-dimensional rendering of the commodity and support interaction between the user and the commodity; for example, the user can trigger the commodity to rotate by sliding the screen, and thus view its appearance from various perspectives, and so on.
However, even when merchandise is presented through a three-dimensional model, the information available to the user is still relatively limited. How to provide the user with more detailed information about a specific item, and give the user an experience closer to purchasing the item offline, is therefore a technical problem that needs to be addressed by those skilled in the art.
Disclosure of Invention
The information display method, information display apparatus, and electronic device provided by the present application can give the user an experience closer to purchasing or trying out a commodity object offline, which helps the user make shopping decisions.
The application provides the following scheme:
an information display method, comprising:
responding to a request for displaying a target object in an Augmented Reality (AR) mode, loading a three-dimensional model corresponding to the target object, and mapping the three-dimensional model into an AR space for displaying; the target object is an object of a handheld device class with a display screen;
detecting interactive operation executed by a user on the three-dimensional model through the AR space, wherein the interactive operation is used for simulating operation related to starting a target function in the target object;
and displaying, according to the detected interactive operation, interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located.
Wherein the detecting the interactive operation performed by the user on the three-dimensional model through the AR space includes:
detecting gesture operation executed by a user on a display screen of the target object based on the AR space;
the interface content corresponding to the target function is displayed in the area where the display screen in the three-dimensional model is located, and the method comprises the following steps:
and displaying interface contents corresponding to the target function started by the gesture operation in the area of the display screen in the three-dimensional model.
Wherein the target function includes one of a photographing function, a flashlight function, an intelligent voice assistant function, or a music control function.
Optionally, the method further includes:
and loading materials related to the target function in the process of displaying the interface content corresponding to the target function through the area where the display screen in the three-dimensional model is located, and displaying or playing through the AR space.
Wherein the method further comprises:
and providing prompt information about the execution mode of the interactive operation in an interface of the AR space.
Wherein the mapping of the three-dimensional model into an AR space for presentation includes:
identifying pose information of the hand image from the acquired image stream of the real environment;
and mapping the three-dimensional model to the position of the hand image in the AR space according to the identified pose information of the hand image for display.
Wherein the mapping of the three-dimensional model to the position of the hand image in the AR space for display according to the identified pose information of the hand image includes:
and when the gesture of the hand image is recognized as a semi-holding gesture with the palm facing upwards, mapping the three-dimensional model to a position of the hand image in the AR space in a right-side-up mode.
Optionally, the method further includes:
when the posture of the hand image changes from the semi-holding posture to a fist posture and then continues to change to a target posture that satisfies a flip condition, flipping the three-dimensional model to a back-face-up state and mapping it into the AR space for display.
Wherein the three-dimensional model has the same size as the actual target object, the method further comprising:
size information of the three-dimensional model is displayed in the AR space.
Optionally, the method further includes:
if the area of the three-dimensional model is larger than the area of the hand image, providing prompt information at the edge portion of the three-dimensional model in the AR space.
Optionally, the method further includes:
providing an object list area in an interface of the AR space, wherein the object list area is used for displaying at least one other similar object related to the target object and a corresponding AR display entrance;
and after receiving the AR display requests for the other similar objects through the AR display entrance, replacing the three-dimensional model displayed in the AR space with the three-dimensional model corresponding to the other similar objects.
An object information display device, comprising:
The AR display unit is used for responding to a request for displaying a target object in an augmented reality AR mode, loading a three-dimensional model corresponding to the target object and mapping the three-dimensional model into an AR space for display; the target object is an object of a handheld device class with a display screen;
the interactive behavior detection unit is used for detecting interactive operation executed by a user on the three-dimensional model through the AR space, and the interactive operation is used for simulating operation related to starting a target function in the target object;
and the interface content display unit is used for displaying, according to the detected interactive operation, the interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method of any of the preceding claims.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method of any of the preceding claims.
According to a specific embodiment provided by the application, the application discloses the following technical effects:
according to the method and the device for simulating the interaction operation of the three-dimensional model, in the process of displaying the handheld device object with the display screen in the AR mode, interaction operation, which is executed by a user through the AR space, can be detected, and the interaction operation can be used for simulating operation, which is defined in the target object and is related to starting the target function. And then, according to the detected interaction operation, displaying interface contents corresponding to the target function in the area where the display screen in the three-dimensional model is positioned. Therefore, in the process of browsing specific handheld equipment commodities in an AR mode, a user can experience or test functions of specific commodity objects, so that experience in the process of purchasing or trying commodity objects on line is obtained, and shopping decision making is facilitated for the user in a commodity display scene.
Of course, not all of the above-described advantages need be achieved at the same time in practicing any one of the products of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a method provided by an embodiment of the present application;
FIG. 3 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of an apparatus provided by an embodiment of the present application;
fig. 5 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and fully with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
First, to enhance the sense of reality of target-object information such as a commodity, one approach is to combine three-dimensional reconstruction of the commodity with AR (Augmented Reality) technology for display. In this scheme, the commodity is reconstructed in three dimensions in advance. When the commodity is displayed at the client, a real-time image stream of the real world can be acquired through the camera of the terminal device, planes such as tabletops and floors in the image stream are located, and the three-dimensional model of the commodity is projected onto the position of such a plane in the real-world image stream, presenting the commodity as if it were actually placed in the real-world environment.
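The geometric core of this placement step can be sketched in a few lines. The following is a minimal numpy sketch, assuming the AR framework has already detected a plane and reports it as an anchor point plus a unit normal; all names here are illustrative rather than taken from the patent:

```python
import numpy as np

def plane_placement_transform(anchor: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Build a 4x4 world transform that stands a model on a detected plane.

    `anchor` is a point on the plane and `normal` its unit normal; the
    model's local +Y axis is aligned with the normal so it rests upright.
    """
    y = normal / np.linalg.norm(normal)
    # Pick any vector not parallel to the normal to derive the other axes.
    helper = np.array([1.0, 0.0, 0.0]) if abs(y[0]) < 0.9 else np.array([0.0, 0.0, 1.0])
    x = np.cross(helper, y)
    x /= np.linalg.norm(x)
    z = np.cross(x, y)
    transform = np.eye(4)
    transform[:3, 0], transform[:3, 1], transform[:3, 2] = x, y, z
    transform[:3, 3] = anchor
    return transform

# Example: a tabletop detected 0.7 m up, facing straight up.
print(plane_placement_transform(np.array([0.0, 0.7, 0.0]), np.array([0.0, 1.0, 0.0])))
```

A production renderer would obtain the plane from its tracking stack and also handle lighting and occlusion; the sketch only shows how the model's pose is derived from the detected plane.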
With this scheme of displaying commodity information through AR technology, a user can more intuitively judge whether a specific commodity suits the real-world environment the user is in, for example whether a sofa or other household commodity fits in the user's living room, thereby helping the user make better shopping decisions.
However, in the prior art, displaying merchandise information in the AR manner usually only places the three-dimensional model of the merchandise into the AR space rather mechanically. The viewing angle can be changed by sliding the screen or rotating the mobile phone to achieve a 360-degree view of the merchandise's appearance, and for items such as sofas and tea tables this is basically sufficient to aid shopping decisions; for items of some other categories, however, such simple mechanical AR placement is often inadequate.
For example, when buying a handheld device such as a mobile phone, a user may need to view not only the appearance of the specific product but also to inspect or experience various functions of the device, such as checking the resolution of the display screen or trying functions such as the intelligent voice assistant and music control. In addition, the user may want to experience how the phone feels in the hand, to determine whether its size fits, and so on.
Therefore, in the embodiments of the present application, for handheld-device commodities such as mobile phones, various interactive operations may be performed on the specific three-dimensional model while it is displayed in the AR manner, for example lighting up the screen or triggering specific function options; correspondingly, the interface content triggered by a specific interactive operation can be displayed on the display-screen portion of the three-dimensional model shown in the AR space. That is, the user may experience or test the functionality of a handheld device such as a mobile phone through the AR space. In addition, as an optional manner, a "hands-on" experience of the specific commodity's three-dimensional model can also be provided, i.e., the user can experience, in the AR space, how the mobile phone or other commodity to be purchased feels in the hand.
From the system-architecture perspective, referring to fig. 1, the solution provided in the embodiments of the present application may involve a client and a server in a merchandise information service system. The server mainly provides the specific three-dimensional model of the commodity; in the embodiments of the present application, this model can be generated by three-dimensional reconstruction according to the real size of the commodity, and, so that the commodity's functions can be experienced, materials such as pictures, audio, and video can also be provided for display or playback in the AR space. The client mainly supports the AR display: to improve the realism of the display effect, a high-performance AR rendering engine can be used to render the three-dimensional model in the AR space and to perform operations such as scaling; in addition, triggered by the user's interactive operations, specific pictures and other materials can be mapped onto the area where the display screen of the specific three-dimensional model is located in the AR space, and so on.
Specific embodiments provided in the embodiments of the present application are described in detail below.
First, an embodiment of the present application provides a method for displaying merchandise information, referring to fig. 2, the method may include:
s201: responding to a request for displaying a target object in an Augmented Reality (AR) mode, loading a three-dimensional model corresponding to the target object, and mapping the three-dimensional model into an AR space for displaying; the target object is an object of a handheld device class with a display screen.
In the embodiments of the present application, the specific target object may be a commodity, or another object such as an exhibit in a museum. In a commodity display scenario, the specific target commodity may be a handheld-device commodity with a display screen, for example a mobile communication device such as a mobile phone. In a specific implementation, an entrance for displaying the commodity in the AR manner may be provided on a page such as the commodity detail page, for example in the main-image display area of that page. In this way, the user can initiate a request for AR display of the commodity through this entrance while visiting the commodity detail page. Of course, the request may also be initiated in other manners; for example, commodities configured with a three-dimensional model may be aggregated into a single theme page that provides an entrance for displaying each commodity in the AR manner, so that a user can directly request AR display of a target commodity from the theme page, and so on. In the embodiments of the present application, the specific interaction in the AR display process is mainly described taking a commodity as the example.
After a specific AR display request is received, an AR space can be created, the three-dimensional model of the target commodity loaded, and the model rendered into the AR space for display. Specifically, when the AR space is created, components such as the camera of the terminal device may first be started to acquire images of the real world. At this point, as an optional manner, the user may be prompted to extend a hand in front of the lens and to hold it in a given posture; for example, as shown in fig. 3 (A), the user may be prompted to face the palm up in a semi-holding posture. The client then detects the hand image in the acquired image stream of the real environment and, once detected, can also determine information such as the pose (i.e., position and posture) of the hand.
When determining the pose of the hand, the hand can be reconstructed in three dimensions in real time from the hand image identified in the real-environment image stream, yielding the coordinates of a number of three-dimensional hand keypoints, from which the pose of the hand can be determined. Recognizing the real hand image in the real-environment image can be done with an existing algorithm model, and after the real hand image is recognized, a pre-trained deep-learning model can be used to perform three-dimensional Mesh reconstruction of it. Neither the hand-image recognition model nor the three-dimensional Mesh reconstruction model is key content of the embodiments of the present application, so they are not described in detail here.
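As a concrete illustration of deriving a pose from reconstructed keypoints, the sketch below estimates a palm coordinate frame from three stable landmarks. It assumes a 21-keypoint hand layout in the MediaPipe-style ordering (0 = wrist, 5 = index-finger MCP, 17 = pinky MCP); the patent does not prescribe any particular keypoint set, so treat the ordering as an assumption:

```python
import numpy as np

def palm_pose(keypoints: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Estimate a palm frame from (21, 3) hand keypoints; returns (origin, 3x3 rotation)."""
    wrist, index_mcp, pinky_mcp = keypoints[0], keypoints[5], keypoints[17]
    origin = (wrist + index_mcp + pinky_mcp) / 3.0          # palm centre
    x = index_mcp - pinky_mcp                               # across the palm
    x /= np.linalg.norm(x)
    forward = (index_mcp + pinky_mcp) / 2.0 - wrist         # wrist -> fingers
    normal = np.cross(x, forward)                           # out of the palm
    normal /= np.linalg.norm(normal)
    z = np.cross(x, normal)                                 # completes the frame
    return origin, np.stack([x, normal, z], axis=1)

# Synthetic check with a flat hand lying in the X-Y plane.
demo = np.zeros((21, 3))
demo[5], demo[17] = [0.05, 0.10, 0.0], [-0.03, 0.09, 0.0]
origin, rotation = palm_pose(demo)
print(origin, rotation[:, 1])   # the palm normal comes out as +Z
```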
After the pose information of the specific hand image is identified, the three-dimensional model of the specific commodity can be mapped to the position of the hand image in the AR space for display. Because the hand pose is identified, the AR space can show the effect of the hand holding the specific three-dimensional model, and the display position and posture of the three-dimensional model in the AR space change as the hand moves or changes posture.
In a specific implementation, when a user holds a physical device such as a mobile phone in the real world, part of the device body is occluded by the fingertips. If the three-dimensional model were simply displayed at the position of the hand image in the AR space, it could appear to float above the palm, making the experience feel unrealistic. Therefore, to enhance the realism of the effect presented in the AR space, a three-dimensional hand model in a standard pose (called a base hand model for convenience of description) may be established in advance for the three-dimensional model of the target commodity; in the default state, the base hand model holds the three-dimensional model of the target commodity in the standard pose. That is, the three-dimensional model of the target commodity is held by the base hand model, and the relative positional relationship between the two is fixed, so that the commodity model moves with the base hand model. Of course, the base hand model does not need to be displayed and can therefore remain invisible to the user.
With the base hand model in place, the estimation result for the user's real hand pose may consist of a rotation matrix and/or translation vector of the real hand relative to the base hand model. That is, during real-time three-dimensional reconstruction of the user's real hand image, the three-dimensional space used can be aligned with the space in which the base hand model was created. Thus, after a hand image is identified from the current AR space's real-environment image stream, its pose information can be expressed as a rotation matrix and/or translation vector relative to the base hand model. The estimated rotation matrix and/or translation vector can then be applied to the base hand model, so that the three-dimensional model of the target commodity is projected into the real-world image, showing effects such as fingers covering parts of the model and the model rotating with the hand.
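In code, the "same transform for hand and held model" idea reduces to one matrix product. Below is a minimal sketch under the patent's assumptions (a fixed model-to-base-hand offset, and an estimated rotation and translation of the real hand relative to the base hand model); the names are illustrative:

```python
import numpy as np

def place_commodity_model(base_to_model: np.ndarray,
                          rotation: np.ndarray,
                          translation: np.ndarray) -> np.ndarray:
    """World pose of the commodity model, driven by the real hand's pose.

    `base_to_model` is the fixed 4x4 offset of the commodity model relative
    to the invisible base hand model; `rotation` (3x3) and `translation` (3,)
    describe the real hand relative to the base hand model. Applying the
    same hand transform to both keeps the "held" relationship intact.
    """
    hand_pose = np.eye(4)
    hand_pose[:3, :3] = rotation
    hand_pose[:3, 3] = translation
    return hand_pose @ base_to_model

# Example: hand translated 0.4 m in front of the camera, no rotation.
pose = place_commodity_model(np.eye(4), np.eye(3), np.array([0.0, 0.1, 0.4]))
print(pose[:3, 3])   # the model follows the hand to (0.0, 0.1, 0.4)
```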
In the embodiments of the present application, the three-dimensional model of a specific commodity may be built at 1:1 scale to the physical commodity, so that, combined with hand-image recognition and positioning, the user can get a feel for the size of the specific commodity object and judge whether the current commodity's size is suitable. Optionally, the dimensions of the specific three-dimensional model, including length, width, and thickness, may also be indicated in the AR space. A specific display effect may be as shown in fig. 3 (B).
In addition, whether the area of the specific commodity's three-dimensional model exceeds the user's palm area can be judged, and the user prompted accordingly, for example that the commodity may be too large for the current user. The prompt can be given in various ways; for example, color or a pattern can be added at the edge of the three-dimensional model to indicate that the current model extends beyond the palm, and so on.
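One simple way to drive such an edge prompt is a per-axis footprint comparison. The sketch below assumes both footprints are available as width and height in millimetres, with the model centred on the palm; the sizes in the example are hypothetical:

```python
def overhang_edges(model_size: tuple, palm_size: tuple) -> list:
    """Return which model edges overhang the palm, for edge highlighting.

    Sizes are (width, height) in millimetres; with the model centred on the
    palm, an overhang on one axis shows on both opposing edges of that axis.
    """
    edges = []
    if model_size[0] > palm_size[0]:
        edges += ["left", "right"]
    if model_size[1] > palm_size[1]:
        edges += ["top", "bottom"]
    return edges

# Hypothetical phone footprint vs. a measured palm footprint.
print(overhang_edges((75.7, 160.8), (85.0, 110.0)))   # -> ['top', 'bottom']
```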
In a specific implementation, the three-dimensional model of the commodity can be displayed in different states in the AR space according to changes in the user's hand posture. For example, upon recognizing that the pose of the hand image is a semi-holding posture with the palm facing up, the three-dimensional model can be mapped front side up (i.e., facing the camera of the user's current device) to the position of the hand image in the AR space. Of course, the realism of the display effect can be improved based on the aforementioned base hand model and the like.
In addition, because the user may want to view the color, texture, and so on of the back of a device such as a mobile phone, the embodiments of the present application can also support flipping the three-dimensional model over for display. Since the specific three-dimensional model follows the hand's motion, in theory, if the user's palm was facing up and the model was displayed front up, the user could simply turn the hand 180 degrees so the back of the hand faces up, flipping the model to show its back. In practice, however, the user holds the mobile phone or other terminal device in one hand while the other hand interacts in front of the lens, so turning that hand 180 degrees can be rather awkward, and after the turn the hand is on top and the model underneath, so a large area of the three-dimensional model is blocked by the hand image.
For this reason, the embodiments of the present application can provide a more convenient way of viewing the back of the three-dimensional model. Specifically, suppose the user's hand is in the semi-holding posture with the palm up and the model is displayed front up at the hand's position. If the user wants to view the back of the model, the user can first change to a fist, at which point the specific three-dimensional model can be temporarily hidden; the user can then make a predetermined gesture, for example turning the back of the hand toward the lens, or making a "victory" sign, and so on, at which point the back of the three-dimensional model is displayed in the AR space. That is, when the posture of the hand image changes from the semi-holding posture to a fist and then continues to a target posture that satisfies the flip condition, the three-dimensional model can be flipped to a back-face-up state and mapped into the AR space for display. In this way the interactive operation is easy to perform, while the full view of the model's back can be seen more completely.
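The semi-grip / fist / flip-gesture sequence is naturally expressed as a small state machine. A sketch of one possible encoding follows; the patent does not fix which gesture counts as the flip trigger, so `FLIP_TARGET` stands in for whatever posture the implementation accepts:

```python
from enum import Enum, auto

class HandPose(Enum):
    SEMI_GRIP = auto()     # palm up, half-holding
    FIST = auto()          # transition state: model hidden
    FLIP_TARGET = auto()   # e.g. back of hand toward the lens, or a "V" sign
    OTHER = auto()

class ModelSide(Enum):
    FRONT_UP = auto()
    BACK_UP = auto()
    HIDDEN = auto()

def next_display_state(current: ModelSide, pose: HandPose) -> ModelSide:
    """State machine for the semi-grip -> fist -> flip-gesture sequence."""
    if pose is HandPose.SEMI_GRIP:
        return ModelSide.FRONT_UP
    if pose is HandPose.FIST:
        return ModelSide.HIDDEN               # model temporarily not shown
    if pose is HandPose.FLIP_TARGET and current is ModelSide.HIDDEN:
        return ModelSide.BACK_UP              # flipped, back face shown
    return current

state = ModelSide.FRONT_UP
for pose in (HandPose.SEMI_GRIP, HandPose.FIST, HandPose.FLIP_TARGET):
    state = next_display_state(state, pose)
print(state)   # ModelSide.BACK_UP
```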
S202: and detecting interactive operation executed by a user on the three-dimensional model through the AR space, wherein the interactive operation is used for simulating operation related to starting a target function in the target object.
S203: displaying, according to the detected interactive operation, interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located.
On the basis of displaying the three-dimensional model of a specific commodity through the AR space, the user's interactive operations on the model through the AR space can be detected, and corresponding interface content displayed in the area where the display screen in the three-dimensional model is located according to the detected operation. That is, when actually using a handheld device, the user accesses the various functions the commodity provides mainly through its display screen; thus, while shopping for the specific commodity, beyond confirming that its dimensions, appearance, and so on meet requirements, the user may also want to power it on, light up the display screen, and then experience specific functions. To this end, in the embodiments of the present application, this experience can likewise be simulated for the user through the AR space.
The specific interactive operations can vary. For example, they may include pressing the power key in the power-key area of the three-dimensional model, whereupon the model's display screen can switch to a lit state, a boot animation can be played, and so on. Alternatively, if the specific merchandise supports waking the display screen by double-tapping it, the user can simulate the double-tap on the model's display-screen region, whereupon the screen switches to the lit state, and so on.
In addition, the specific interactive operation can be a gesture operation performed on the display screen in the AR space, so that the interface content triggered by that gesture is displayed in the area where the display screen in the three-dimensional model is located. For example, a specific gesture operation may switch the display screen to the lit state, or may invoke a target function of the target commodity.
For example, some mobile phone commodities support sliding specific gestures on the display screen while it is off to invoke certain functions of the phone. Specific function options may include a photographing function, a flashlight function, an intelligent voice assistant function, a music control function, and so on, where different function options can be invoked by different gestures: for example, an "O" gesture might activate the photographing function and a "V" gesture the flashlight. Thus, after such a gesture is detected in the AR space, the corresponding target function option can be determined from the gesture type, and the interface content shown after that function option is invoked can then be presented in the display-screen area of the three-dimensional model.
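Dispatching from a recognized gesture to the simulated function can be a plain lookup table. In the sketch below, both the gesture-to-function mapping and the material paths are hypothetical, since the real mapping is whatever the actual commodity defines:

```python
class ScreenRegion:
    """Stand-in for the display-screen region of the 3D model in AR space."""
    def show(self, material: str) -> None:
        print(f"mapping material onto the screen region: {material}")

# Hypothetical mapping; real gestures are defined by the commodity itself.
GESTURE_FUNCTIONS = {"O": "camera", "V": "flashlight"}

def handle_screen_gesture(gesture: str, screen: ScreenRegion) -> None:
    """Map a recognized on-screen gesture to a simulated target function."""
    function = GESTURE_FUNCTIONS.get(gesture)
    if function:
        screen.show(f"ui/{function}_home.png")   # pre-stored interface material

handle_screen_gesture("O", ScreenRegion())   # draws the camera UI material
```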
To further improve the user's experience, while the interface content of the invoked target function is displayed in the area where the display screen in the three-dimensional model is located, materials related to the target function can also be loaded and displayed or played through the AR space.
For example, if the invoked target function is the photographing function, a camera interface can be displayed in the display-screen region of the three-dimensional model. This interface can include a shutter control, and the user can further simulate tapping that control in the AR space, whereupon a specific photo can be shown in the display-screen region of the model. The photo can be a pre-prepared photo material related to the current commodity, loaded and mapped into the current AR space after the user simulates tapping the shutter. In addition, the pose of the photo material can be adjusted according to the pose of the current three-dimensional model; for example, if the model is tilted at some angle, the photo material can be tilted correspondingly, improving the realism of the display effect.
Alternatively, after the user simulates tapping the shutter control, specific image content can be cropped from the real-environment image stream acquired by the current terminal device, processed according to the parameters (such as resolution) of the camera component of the current target commodity to generate a target image, and the target image then mapped to the display-screen region of the three-dimensional model for display. That is, this manner better simulates the process of taking a photo with the three-dimensional model in the AR space, improving the user experience.
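A minimal version of that crop-and-resample step is shown below; it uses nearest-neighbour sampling for brevity, where a production implementation would filter properly and apply the target camera's colour processing. The target resolution in the example is hypothetical:

```python
import numpy as np

def simulate_photo(frame: np.ndarray, crop: tuple, target_hw: tuple) -> np.ndarray:
    """Simulate the model's camera: crop the live AR frame, then resample it
    to the resolution of the commodity's real camera component."""
    top, left, height, width = crop
    patch = frame[top:top + height, left:left + width]
    rows = np.arange(target_hw[0]) * patch.shape[0] // target_hw[0]
    cols = np.arange(target_hw[1]) * patch.shape[1] // target_hw[1]
    return patch[rows][:, cols]

# Example: 1080p environment frame -> hypothetical 4:3 12-MP-style target.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
photo = simulate_photo(frame, crop=(0, 240, 1080, 1440), target_hw=(3024, 4032))
print(photo.shape)   # (3024, 4032, 3)
```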
If the flashlight function is invoked through a gesture, the interface content shown after the flashlight is turned on can be displayed in the area where the display screen in the three-dimensional model is located; in addition, a light-source material can be added to the AR space to simulate the light emitted from the camera component of the three-dimensional model.
If the intelligent voice assistant function is invoked through a gesture, the interface content shown after that function starts can be displayed in the area where the display screen in the three-dimensional model is located, and corresponding voice-dialogue materials can be played.
If the music control function is invoked through a gesture, the interface content triggered after that function starts can be displayed in the same area, while corresponding music materials are played.
After the various functions are started, the interface content shown in the display-screen area in the AR space can either be rendered live or be realized directly on the three-dimensional model by loading separately stored picture materials.
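For a picture material to inherit the model's tilt, it is enough to texture it onto the screen quad and transform that quad by the model's current pose. A small sketch, assuming a hypothetical 70 x 150 mm screen centred on the model's front face:

```python
import numpy as np

def screen_quad_world(model_pose: np.ndarray, corners_local: np.ndarray) -> np.ndarray:
    """Transform the screen's local corner points by the model's 4x4 world
    pose, so any UI material textured onto the quad inherits the model's tilt."""
    homogeneous = np.hstack([corners_local, np.ones((len(corners_local), 1))])
    return (model_pose @ homogeneous.T).T[:, :3]

# Hypothetical screen corners in model space, millimetres.
corners = np.array([[-35, -75, 0], [35, -75, 0], [35, 75, 0], [-35, 75, 0]], float)
pose = np.eye(4)
pose[:3, 3] = [0, 0, 300]                    # model 0.3 m in front of the camera
print(screen_quad_world(pose, corners))      # world-space quad to texture onto
```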
It should be noted that, while the specific three-dimensional model is displayed in the AR manner, one hand (for example the right) may hold the current terminal device while the other (for example the left) interacts in front of the lens. In this state, keeping the hand in a fixed posture while performing various interactions on the model's display screen with that one hand can be inconvenient: for example, maintaining the palm-up semi-holding posture leaves only the thumb free, and its range of motion may be so limited that some keys are unreachable or some gestures hard to make. For this case, in an optional embodiment, the user can split the hands-on size experience and the function experience into two stages. For example, the user can first extend the left hand in front of the lens for a hands-on experience with the commodity's three-dimensional model; then, if interactive operations need to be performed on it, the model can be placed on a plane in the real-environment image acquired in the current AR space. For instance, if there happens to be a table nearby during the AR experience, the user can bring the table into the image-acquisition range, move the left hand to the tabletop in the AR space to set the model down there, and then use the left hand to perform the various interactions with the model.
In addition, it should be noted that, because the interactive operations the user performs on the specific three-dimensional model through the AR space mainly simulate the interactive operations defined in the target commodity for starting target functions, which interaction starts a given function depends on how that function's start-up is defined in the specific commodity. Different products may define different start-up methods for the same function, so to make operation easier, prompt information about the execution of the various interactive operations can also be provided in the interface of the AR space. In this way, while examining a specific three-dimensional model through the AR space, the user can follow the on-screen prompts to perform the specific interactive operations and experience the commodity's specific functions.
In addition, in the embodiment of the present application, an AR display portal of other similar commodities related to the current commodity may also be provided in an interface of a specific AR space. In this way, after receiving an AR display request for other similar commodities through the AR display portal, the current three-dimensional model displayed in the AR space can be replaced by the three-dimensional model corresponding to the other similar commodities.
For example, as shown at 31 in fig. 3 (B), while a three-dimensional model of a certain mobile phone commodity is displayed in the AR manner, a list of other mobile phone commodities related to it can be shown in the interface of the AR space. The other mobile phone commodities can be determined in various manners: for example, they may be other models in the same store as the commodity currently displayed, or recommendations across stores, such as mobile phone commodities in other stores at the same price point as the current one, and so on. In a specific implementation, various recommendation strategies are possible; of course, a recommended commodity must have an associated three-dimensional model, that is, only commodities whose three-dimensional model was built in advance fall within the scope of recommendation through the AR space.
After the commodity recommendation list is provided through the interface of the AR space, the user can conveniently initiate AR display requests for other similar commodities. For example, in the interface shown in fig. 3 (B), once the user finishes AR browsing of the current mobile phone commodity, tapping another phone in the list maps that phone's three-dimensional model directly into the current AR space, without exiting the current interface and re-initiating an AR display request from the other phone's detail page; interaction efficiency is thus improved.
In addition, by tapping commodities in the recommendation list in the AR-space interface to replace the three-dimensional model in the AR space, three-dimensional models of different commodities can be compared against the same real-environment image in the same AR space, making it easy for the user to choose among several different commodities.
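The in-place swap itself is little more than reusing the existing anchor while exchanging the loaded model. A minimal sketch with illustrative names (the patent does not specify the session API):

```python
class ARSession:
    """Sketch of swapping the displayed model without leaving the AR space."""

    def __init__(self) -> None:
        self.model_id = None
        self.anchor = "current hand/plane anchor"   # reused, so comparison is fair

    def replace_model(self, commodity_id: str) -> None:
        # Only commodities with a prebuilt 3D model appear in the list,
        # so loading is assumed to succeed here.
        print(f"unloading {self.model_id}, loading {commodity_id} at {self.anchor}")
        self.model_id = commodity_id

session = ARSession()
session.replace_model("phone-A")
session.replace_model("phone-B")   # tapped in the AR-space list; no page exit
```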
In summary, according to the embodiments of the present application, for a handheld-device object with a display screen, while the object is displayed in the AR manner, an interactive operation performed by the user on its three-dimensional model through the AR space can be detected, where the interactive operation simulates an operation defined in the target object for starting a target function. Then, according to the detected operation, interface content corresponding to the target function is displayed in the area where the display screen in the three-dimensional model is located. Thus, while browsing the specific handheld-device object in the AR manner, the user can experience or test its functions, obtaining an experience close to purchasing or trying out the object offline, which helps the user make shopping decisions in a commodity display scenario.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, user-specific personal data may be used in the schemes described herein within the scope permitted by the applicable laws and regulations of the relevant country (for example, with the user's explicit consent, after the user has actually been notified, and so on).
Corresponding to the foregoing method embodiment, the embodiment of the present application further provides an information display apparatus, referring to fig. 4, where the apparatus may include:
the AR display unit 401 is configured to load a three-dimensional model corresponding to a target object in response to a request for displaying the target object in an augmented reality AR manner, and map the three-dimensional model into an AR space for display; the target object is an object of a handheld device class with a display screen;
an interaction behavior detection unit 402, configured to detect an interaction operation performed by a user on the three-dimensional model through the AR space, where the interaction operation is used to simulate an operation related to starting a target function in the target object;
and the interface content display unit 403 is configured to display, according to the detected interaction, interface content corresponding to the target function in an area where the display screen in the three-dimensional model is located.
Wherein the interactive operation performed by the user on the three-dimensional model through the AR space comprises: gesture operations performed based on a display screen in the AR space;
at this time, the interface content display unit may specifically be configured to:
and displaying interface contents corresponding to the target function started by the gesture operation in the area of the display screen in the three-dimensional model.
The target function may include a photographing function, a flashlight function, an intelligent voice assistant function, or a music control function, among others.
In particular, the apparatus may further include:
and the material loading unit is used for loading the materials related to the target function in the process of displaying the interface content corresponding to the target function through the area where the display screen in the three-dimensional model is located, and displaying or playing through the AR space.
And the interactive mode prompting unit is used for providing prompting information about the execution mode of the interactive operation in the interface of the AR space.
Wherein the AR display unit may be configured to:
identifying pose information of the hand image from the acquired image stream of the real environment;
and mapping the three-dimensional model to the position of the hand image in the AR space according to the identified pose information of the hand image for display.
Specifically, the AR display unit may be configured to:
and when the gesture of the hand image is recognized as a semi-holding gesture with the palm facing upwards, mapping the three-dimensional model to a position of the hand image in the AR space in a right-side-up mode.
Alternatively, the AR display unit may be further configured to:
when the gesture of the hand image is changed from the semi-holding gesture to the fist-holding gesture, and the gesture is continuously changed to the target gesture conforming to the turning condition, the three-dimensional model is turned to a state with the back face upwards, and is mapped into the AR space for displaying.
Specifically, the three-dimensional model may have the same size as the actual target object, and the apparatus may further include:
and the size display unit is used for displaying the size information of the three-dimensional model in the AR space.
In addition, the apparatus may further include:
and the size prompting unit is used for providing prompting information for the edge part of the three-dimensional model in the AR space if the area of the three-dimensional model is larger than the area of the hand image.
In addition, the apparatus may further include:
an object list providing unit, configured to provide an object list area in an interface of the AR space, where the object list area is configured to display at least one other homogeneous object related to the target object and its corresponding AR display entry;
And the model replacing unit is used for replacing the three-dimensional model displayed in the AR space with the three-dimensional model corresponding to the other similar objects after receiving the AR display request for the other similar objects through the AR display entrance.
In addition, the embodiment of the application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method of any one of the foregoing method embodiments.
And an electronic device comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read for execution by the one or more processors, perform the steps of the method of any of the preceding method embodiments.
Fig. 5 illustrates the architecture of an electronic device. For example, device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, an aircraft, and so on.
Referring to fig. 5, device 500 may include one or more of the following components: a processing component 502, a memory 504, a power supply component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the methods provided by the disclosed subject matter. Further, the processing component 502 can include one or more modules that facilitate interactions between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
Memory 504 is configured to store various types of data to support operations at device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, video, and the like. The memory 504 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 506 provides power to the various components of the device 500. Power supply components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 500.
The multimedia component 508 includes a screen between the device 500 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or sliding action, but also the duration and pressure associated with the touch or sliding operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the device 500 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further comprises a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 514 includes one or more sensors for providing status assessments of various aspects of the device 500. For example, the sensor assembly 514 can detect the on/off state of the device 500 and the relative positioning of components such as its display and keypad; it can also detect a change in position of the device 500 or one of its components, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and changes in its temperature. The sensor assembly 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 can access a wireless network based on a communication standard, such as WiFi, or a mobile communication network such as 2G, 3G, 4G/LTE, or 5G. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 504, including instructions executable by processor 520 of device 500 to perform the methods provided by the disclosed subject matter. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments can be referenced across them, and each embodiment mainly describes its differences from the others. In particular, since the system and system embodiments are substantially similar to the method embodiments, their description is relatively brief; refer to the description of the method embodiments for the relevant parts. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. A person of ordinary skill in the art can understand and implement this without inventive effort.
The commodity information display method, device, and electronic equipment provided by the present application have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the application; the description of these embodiments is intended only to aid in understanding the method and its core idea. Meanwhile, those of ordinary skill in the art may, in light of the teachings of this application, make modifications to both the specific implementations and the scope of application. In view of the foregoing, the contents of this specification should not be construed as limiting the application.

Claims (14)

1. An information display method, comprising:
in response to a request for displaying a target object in an augmented reality (AR) manner, loading a three-dimensional model corresponding to the target object and mapping the three-dimensional model into an AR space for display, wherein the target object is a handheld-device-class object with a display screen;
detecting an interactive operation performed by a user on the three-dimensional model through the AR space, the interactive operation being used for simulating an operation related to starting a target function in the target object; and
displaying, according to the detected interactive operation, interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located.
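Purely as a non-limiting sketch of the claimed flow, the three steps of claim 1 could be organized as follows; the ar_space, model_store, and screen_region names are assumptions for illustration, not part of the claim:

    class ARProductViewer:
        # Illustrative skeleton of the method of claim 1.

        def __init__(self, ar_space, model_store):
            self.ar_space = ar_space        # hypothetical AR session wrapper
            self.model_store = model_store  # hypothetical 3D-model repository
            self.model = None

        def on_ar_request(self, target_object_id):
            # Step 1: load the three-dimensional model of the handheld device
            # and map it into the AR space for display.
            self.model = self.model_store.load(target_object_id)
            self.ar_space.place(self.model)

        def on_frame(self):
            # Step 2: detect an interactive operation the user performs on the
            # model through the AR space (e.g., a tap that simulates power-on).
            op = self.ar_space.detect_interaction(self.model)
            if op is None:
                return
            # Step 3: render the interface content of the triggered target
            # function inside the region of the model's display screen.
            content = op.target_function.render_interface()
            self.model.screen_region.show(content)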
2. The method according to claim 1, wherein
the detecting of the interactive operation performed by the user on the three-dimensional model through the AR space comprises:
detecting, based on the AR space, a gesture operation performed by the user on the display screen of the target object;
and the displaying of the interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located comprises:
displaying, in the area where the display screen in the three-dimensional model is located, interface content corresponding to the target function started by the gesture operation.
3. The method according to claim 1, wherein
the target function includes one of a photographing function, a flashlight function, an intelligent voice assistant function, or a music control function.
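The enumerated target functions lend themselves to a simple dispatch table. The sketch below is illustrative only; the placeholder strings stand in for whatever interface content a real implementation would render:

    # Maps a recognized target function to the interface content it starts.
    TARGET_FUNCTIONS = {
        "camera": lambda: "viewfinder preview",
        "flashlight": lambda: "flashlight toggle UI",
        "voice_assistant": lambda: "assistant dialog UI",
        "music_control": lambda: "playback controls UI",
    }

    def start_target_function(name: str) -> str:
        # Returns the content to draw in the model's screen-region area.
        return TARGET_FUNCTIONS[name]()

    # e.g. start_target_function("flashlight") -> "flashlight toggle UI"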
4. The method according to claim 1, further comprising:
loading materials related to the target function in the process of displaying the interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located, and displaying or playing the materials through the AR space.
5. The method according to claim 1, further comprising:
providing prompt information about the execution mode of the interactive operation in an interface of the AR space.
6. The method according to any one of claims 1 to 5, wherein
the mapping of the three-dimensional model into the AR space for display comprises:
identifying pose information of a hand image from an acquired image stream of the real environment;
mapping, according to the identified pose information of the hand image, the three-dimensional model to the position of the hand image in the AR space for display.
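Claim 6 does not prescribe a particular hand tracker. As one assumption-laden sketch, an off-the-shelf library such as MediaPipe Hands could supply the pose information; the choice of landmark 9 as a palm anchor is an illustrative heuristic, not part of the claim:

    import mediapipe as mp  # one possible off-the-shelf hand tracker

    hands = mp.solutions.hands.Hands(max_num_hands=1)

    def hand_anchor(rgb_frame):
        # rgb_frame: HxWx3 RGB image from the real-environment image stream.
        # Returns a normalized (x, y, z) anchor for the palm, or None.
        results = hands.process(rgb_frame)
        if not results.multi_hand_landmarks:
            return None
        # Landmark 9 (middle-finger base) is a reasonable palm-center proxy.
        lm = results.multi_hand_landmarks[0].landmark[9]
        return (lm.x, lm.y, lm.z)

A renderer would then place the three-dimensional model at the returned anchor on each frame, e.g. ar_space.place(model, at=hand_anchor(frame)) with a hypothetical AR-session API.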
7. The method according to claim 6, wherein
mapping the three-dimensional model to the position of the hand image in the AR space for display according to the identified pose information of the hand image comprises:
mapping the three-dimensional model, right side up, to the position of the hand image in the AR space when the gesture of the hand image is recognized as a semi-holding gesture with the palm facing upward.
8. The method according to claim 7, further comprising:
turning the three-dimensional model to a back-side-up state and mapping it into the AR space for display when the gesture of the hand image changes from the semi-holding gesture to a fist-holding gesture and then continues to change to a target gesture satisfying a turning condition.
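Claims 7 and 8 together describe a small gesture state machine. A minimal sketch, assuming an upstream classifier that emits the gesture labels used below (the labels and the model.face attribute are hypothetical):

    FRONT_UP, BACK_UP = "front_up", "back_up"

    class FlipController:
        # Tracks the gesture sequence of claims 7-8 and flips the model.
        def __init__(self, model):
            self.model = model
            self.model.face = FRONT_UP
            self.prev = None

        def on_gesture(self, gesture: str):
            if gesture == "semi_hold_palm_up":
                # Claim 7: semi-holding gesture, palm up -> right side up.
                self.model.face = FRONT_UP
            elif gesture == "turn" and self.prev == "fist":
                # Claim 8: semi-hold -> fist -> turning gesture -> back side up.
                self.model.face = BACK_UP
            self.prev = gesture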
9. The method according to claim 6, wherein
the three-dimensional model has the same size as the actual target object, the method further comprising:
displaying size information of the three-dimensional model in the AR space.
10. The method according to claim 9, further comprising:
if the area of the three-dimensional model is larger than that of the hand image, providing prompt information at an edge portion of the three-dimensional model in the AR space.
11. The method according to any one of claims 1 to 5, further comprising:
providing an object list area in an interface of the AR space, wherein the object list area is used for displaying at least one other similar object related to the target object and a corresponding AR display entrance;
after receiving, through the AR display entrance, an AR display request for one of the other similar objects, replacing the three-dimensional model displayed in the AR space with the three-dimensional model corresponding to that similar object.
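Swapping in a similar object's model (claim 11) can reuse the pose of the model being replaced, so the new model appears in the same place. The pose_of, remove, and place calls below are assumptions about an AR-session API, not a real library:

    def on_similar_object_selected(ar_space, model_store, current_model, object_id):
        # Claim 11: replace the displayed model with the selected similar
        # object's model, keeping the same anchor in the AR space.
        pose = ar_space.pose_of(current_model)
        ar_space.remove(current_model)
        new_model = model_store.load(object_id)
        ar_space.place(new_model, pose=pose)
        return new_model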
12. An information display device, comprising:
an AR display unit, configured to load, in response to a request for displaying a target object in an augmented reality (AR) manner, a three-dimensional model corresponding to the target object and map the three-dimensional model into an AR space for display, wherein the target object is a handheld-device-class object with a display screen;
an interactive behavior detection unit, configured to detect an interactive operation performed by a user on the three-dimensional model through the AR space, the interactive operation being used for simulating an operation related to starting a target function in the target object; and
an interface content display unit, configured to display, according to the detected interactive operation, interface content corresponding to the target function in the area where the display screen in the three-dimensional model is located.
13. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the method of any one of claims 1 to 11.
14. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the steps of the method of any one of claims 1 to 11.
CN202211103140.2A 2022-09-09 2022-09-09 Information display method and device and electronic equipment Pending CN116258544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211103140.2A CN116258544A (en) 2022-09-09 2022-09-09 Information display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211103140.2A CN116258544A (en) 2022-09-09 2022-09-09 Information display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN116258544A true CN116258544A (en) 2023-06-13

Family

ID=86686843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211103140.2A Pending CN116258544A (en) 2022-09-09 2022-09-09 Information display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116258544A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118444816A (en) * 2024-07-05 2024-08-06 淘宝(中国)软件有限公司 Page processing method, electronic device, storage medium and program product


Similar Documents

Publication Publication Date Title
CN112162671B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN108776568B (en) Webpage display method, device, terminal and storage medium
CN108038726B (en) Article display method and device
CN112181573B (en) Media resource display method, device, terminal, server and storage medium
CN111970523B (en) Information display method, device, terminal, server and storage medium
CN111986076A (en) Image processing method and device, interactive display device and electronic equipment
US20200380724A1 (en) Personalized scene image processing method, apparatus and storage medium
CN111368114B (en) Information display method, device, equipment and storage medium
CN111343329B (en) Lock screen display control method, device and storage medium
CN113407291A (en) Content item display method, device, terminal and computer readable storage medium
CN112261481B (en) Interactive video creating method, device and equipment and readable storage medium
CN109634489A (en) Method, apparatus, equipment and the readable storage medium storing program for executing made comments
CN110209316B (en) Category label display method, device, terminal and storage medium
CN110782532B (en) Image generation method, image generation device, electronic device, and storage medium
US20210042980A1 (en) Method and electronic device for displaying animation
CN115527014A (en) Information display method and electronic equipment
CN115439171A (en) Commodity information display method and device and electronic equipment
CN112783316A (en) Augmented reality-based control method and apparatus, electronic device, and storage medium
CN116258544A (en) Information display method and device and electronic equipment
CN113609358B (en) Content sharing method, device, electronic equipment and storage medium
CN113194329A (en) Live broadcast interaction method, device, terminal and storage medium
WO2024051063A1 (en) Information display method and apparatus and electronic device
CN113485596A (en) Virtual model processing method and device, electronic equipment and storage medium
EP4125274A1 (en) Method and apparatus for playing videos
CN116596611A (en) Commodity object information display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination