CN112634339A - Commodity object information display method and device and electronic equipment - Google Patents

Commodity object information display method and device and electronic equipment

Info

Publication number
CN112634339A
CN112634339A (application CN201910906733.4A)
Authority
CN
China
Prior art keywords
original
image
visual angle
information
commodity object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910906733.4A
Other languages
Chinese (zh)
Other versions
CN112634339B (en)
Inventor
高博
王立波
李晓波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910906733.4A
Publication of CN112634339A
Application granted
Publication of CN112634339B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions
    • G06Q 30/0601 Electronic shopping [e-shopping]
    • G06Q 30/0641 Shopping interfaces
    • G06Q 30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a commodity object information display method, a commodity object information display apparatus, and an electronic device. The method includes: during display of the original image information of a commodity object, obtaining motion data of the associated terminal device, where the original image information is captured from the real commodity object at an original viewing angle and includes depth information; determining, according to the motion data, a viewing-angle change value of the user's current viewing angle relative to the original viewing angle; and generating and displaying a new viewing-angle image of the commodity object according to the viewing-angle change value and the depth information. The embodiments of the present application enable interaction with the user at lower cost.

Description

Commodity object information display method and device and electronic equipment
Technical Field
The present application relates to the field of information display technologies, and in particular, to a method and an apparatus for displaying information of a commodity object, and an electronic device.
Background
A commodity object information service system usually provides commodity object information pages, in which commodity objects are displayed mainly as pictures, videos, three-dimensional models, and the like, with pictures being the most common. In practice, the commodity object information in these display forms may be provided by the merchant, or the merchant may hand the commodity object to professional photographers within the system, after which the material is published to pages such as detail pages for display.
Pictures and videos are easy to shoot, requiring only ordinary camera equipment, but once shot, the content they can present is fixed and the user cannot interact with the commodity object. A display form based on a three-dimensional model, by contrast, does allow interaction: the user can change the viewing angle by rotating the terminal device and examine the details of the commodity object from multiple angles in all directions. However, this approach usually requires professional shooting equipment and three-dimensional materials, so its production cost is high and it is difficult to popularize.
Therefore, how to realize interaction with the user at lower cost has become a technical problem to be solved by those skilled in the art.
Disclosure of Invention
The application provides a commodity object information display method and device and electronic equipment, which can realize interaction with a user at lower cost.
The application provides the following scheme:
a commodity object information display method comprises the following steps:
in the process of displaying the original image information of the commodity object, obtaining motion data of the associated terminal equipment; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new visual angle image of the commodity object according to the visual angle change value and the depth information and displaying the new visual angle image.
A commodity object information display method comprises the following steps:
acquiring original image information of a commodity object, wherein the original image information is acquired by acquiring a real object of the commodity object under an original visual angle and comprises depth information;
selecting a plurality of different visual angles in a mode of offsetting the original visual angle;
generating images under different visual angles according to visual angle offset of the different visual angles relative to the original visual angle and the depth information;
and providing the original view angle image and the images under the plurality of different view angles in the information page of the commodity object.
A scene information display method comprises the following steps:
obtaining initial image information of a target scene, wherein the initial image information is obtained by collecting the target scene under an original visual angle and comprises depth information;
in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
A merchandise object information display device, comprising:
the motion data acquisition unit is used for acquiring motion data of the associated terminal equipment in the process of displaying the original image information of the commodity object; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
the visual angle change value determining unit is used for determining a visual angle change value of the current visual angle of the user relative to the original visual angle according to the motion data;
and the new visual angle image generating unit is used for generating and displaying a new visual angle image of the commodity object according to the visual angle change value and the depth information.
A merchandise object information display device, comprising:
the system comprises an original image obtaining unit, a depth information acquiring unit and a display unit, wherein the original image obtaining unit is used for obtaining original image information of a commodity object, and the original image information is obtained by collecting a real object of the commodity object under an original visual angle and comprises depth information;
the visual angle selecting unit is used for selecting a plurality of different visual angles in a mode of offsetting the original visual angle;
the image generating unit is used for generating images under different visual angles according to visual angle offset of the different visual angles relative to the original visual angle and the depth information;
and the image display unit is used for providing the original view angle image and the images under the different view angles in the information page of the commodity object.
A scene information presentation apparatus comprising:
an initial image obtaining unit, configured to obtain initial image information of a target scene, where the initial image information is obtained by acquiring the target scene at an original view angle and includes depth information;
the motion data acquisition unit is used for acquiring motion data of the associated terminal equipment in the process of displaying the target scene information;
the visual angle change value determining unit is used for determining a visual angle change value of the current visual angle of the user relative to the original visual angle according to the motion data;
and the new visual angle image generating unit is used for generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
in the process of displaying the original image information of the commodity object, obtaining motion data of the associated terminal equipment; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new visual angle image of the commodity object according to the visual angle change value and the depth information and displaying the new visual angle image.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring original image information of a commodity object, wherein the original image information is acquired by acquiring a real object of the commodity object under an original visual angle and comprises depth information;
selecting a plurality of different visual angles in a mode of offsetting the original visual angle;
generating images under different visual angles according to visual angle offset of the different visual angles relative to the original visual angle and the depth information;
and providing the original view angle image and the images under the plurality of different view angles in the information page of the commodity object.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
obtaining initial image information of a target scene, wherein the initial image information is obtained by collecting the target scene under an original visual angle and comprises depth information;
in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
According to the specific embodiments provided herein, the present application discloses the following technical effects:
according to the embodiment of the application, the commodity object image shot at a certain specific angle is obtained, the depth information in the image is obtained, the three-dimensional structure of the object can be restored according to the simple two-dimensional image, interaction with a user is further achieved, specifically, the user can change the view angle through rotating the terminal equipment and the like, and the system can generate images at more view angles for the user. Therefore, the embodiment of the application can realize interaction with the user at lower cost.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
Drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required by the embodiments are briefly described below. Evidently, the following drawings show only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application;
FIG. 2 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 3 is a schematic illustration of an interface provided by an embodiment of the present application;
FIG. 4 is a flow chart of a second method provided by embodiments of the present application;
FIG. 5 is a flow chart of a third method provided by embodiments of the present application;
FIG. 6 is a schematic diagram of a first apparatus provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a second apparatus provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a third apparatus provided by an embodiment of the present application;
fig. 9 is a schematic diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
In the embodiments of the present application, to realize interaction with the user at low cost, an image of a specific article (for example, the real object corresponding to a commodity object) is first shot, and the depth map of the image is obtained, so that the three-dimensional structure of the scene can be restored from the depth information. In this way, a single picture shot at one viewing angle can be used to generate new viewing-angle images within a certain range of variation, producing a three-dimensional effect and realizing interaction with the user. That is, the user can change the viewing angle by rotating a mobile terminal device such as a mobile phone and view images at more viewing angles.
Specifically, the scheme provided by the embodiments of the present application can be applied to various application systems. For example, in a commodity object information service system, as shown in fig. 1, a client (an independent application program, a web page, etc.) and a server are generally provided; specific commodity object information is published by the server and displayed to the user through the client. In the scheme of the embodiments of the present application, the commodity object image information published on the server can be image information that includes depth information; the image only needs to be a single picture of the real commodity object shot at a certain viewing angle, together with the depth information it contains. The function of restoring the three-dimensional structure of the scene from the depth information in the image can be implemented in the client. Thus, while the user views the image information of a specific commodity object through the client, the user can initiate interaction by rotating a terminal device such as a mobile phone; the client determines the user's viewing-angle change value from the motion data of the terminal device, restores the three-dimensional structure of the commodity object according to the specific viewing-angle change value and the depth information in the original image, and displays the image at the new viewing angle, thereby realizing the interaction.
The following describes in detail a specific implementation provided by the embodiments of the present application.
Example one
First, the first embodiment provides a method for displaying information of a commodity object, and referring to fig. 2, the method may specifically include:
s210: in the process of displaying the original image information of the commodity object, obtaining motion data of the associated terminal equipment; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
the original image information of the commodity object can be shot by a publisher user (for example, a merchant user and the like) of the commodity object and published into the system; or the merchant user can provide the real object of the specific commodity object for background staff in the system, and the background staff shoots the original image and then releases the original image to the system. Of course, in the embodiment of the present application, the requirement for shooting the original image is relatively low, and therefore, in general, the shooting may be performed by a merchant user. The specific original image capturing mode may be various. For example, in one mode, the commodity object may be photographed by a hardware device such as a binocular camera or a TOF (Time of flight) depth camera. Specifically, the binocular camera obtains a depth map by calculating a disparity map of left and right binocular images by using the principle that the binocular disparity is in inverse proportion to the depth, for example, a depth map can be shot and collected by a two-camera mobile phone which is popular in the market at present, so that a merchant user can directly select a certain viewing angle through the mobile phone camera to shoot a real object of a commodity object, and original image data meeting conditions can be obtained. The TOF depth camera directly measures the depth of a scene by using the time of flight, and thus, the depth information in the captured image can be obtained. In addition, depth information in an image can be obtained even without a binocular camera or a TOF depth camera using a general monocular camera, and in particular, depth prediction can be performed by a monocular depth estimation method based on depth learning. That is, it is also possible to photograph a target object by a general monocular camera and then predict depth information by a depth learning method, and the like. 
In summary, image information with depth information may be obtained in a variety of ways. Note that regardless of the hardware used for shooting, the embodiments of the present application require only one picture taken from a single viewing angle. Because no three-dimensional modeling is needed, the picture remains a two-dimensional picture even though it carries depth information; in the subsequent steps, the three-dimensional structure of the object is reconstructed on the basis of this two-dimensional picture to provide images at more viewing angles.
In a specific implementation, the originally captured image may contain both a foreground and a background, and the user generally focuses on the foreground. Therefore, in a preferred embodiment of the present application, the background or non-salient region of the original image can be blurred as a preprocessing step to improve the effect of the restored three-dimensional image. Specifically, the background blurring can be performed based on the depth information: global Gaussian blurring is first applied to the RGB color image, and a foreground/background threshold is then applied to the depth image to divide the foreground and background areas, yielding a foreground/background mask (Mask) image. The original image and the globally blurred image are then fused according to the mask image to obtain a background-blurred image: the globally blurred image is used in the background area, the original image in the foreground area, and finally the foreground/background transition area is smoothed so that the overall effect is more natural. After the originally captured image has been background-blurred, the result can be published in the system as the original image of the commodity object.
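The blur-mask-fuse pipeline above can be sketched as follows. This is an assumed implementation, not the patent's code: a box blur stands in for the Gaussian blur, and the final transition-smoothing step is omitted for brevity.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box blur; a stand-in for the global Gaussian blur in the text."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def blur_background(rgb, depth, depth_threshold):
    """Keep pixels nearer than depth_threshold sharp; blur the rest."""
    blurred = box_blur(rgb.astype(np.float64))
    mask = (depth < depth_threshold)[..., None]   # True = foreground
    # Fuse: original image in the foreground, blurred image in the background.
    return np.where(mask, rgb.astype(np.float64), blurred)

# Toy example: left half is a near white foreground, right half a far background.
rgb = np.zeros((10, 10, 3)); rgb[:, :5] = 255.0
depth = np.full((10, 10), 10.0); depth[:, :5] = 1.0
out = blur_background(rgb, depth, depth_threshold=5.0)
```

Foreground pixels pass through unchanged, while background pixels near the boundary pick up blurred foreground color, which is why the text adds a final smoothing pass over the transition area.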
After the original image information of the commodity object is published to the system, the commodity object can be displayed through the client on the user's mobile terminal device. In one implementation, the information page associated with the commodity object may provide thumbnail information of the original image together with an operation option for interacting with it. For example, the detail page of a commodity object usually shows a main image; in the embodiments of the present application, a thumbnail of the original image can serve as this main image, and prompt information can be shown near it, for example prompting the user to click the main image to interact and view more detailed image information. After an operation request is received through the operation option, the original image can be displayed full screen and collection of the motion data of the associated terminal device can begin, so that new viewing-angle images of the commodity object can be generated and displayed in the full-screen state according to viewing-angle change values detected in real time.
Specifically, after a request to display the details of the original image is received, interaction with the user can begin. The interaction works as follows: the user changes the viewing angle by rotating a mobile terminal device such as a mobile phone, and the client presents images at the corresponding viewing angles. These images are not actually shot in advance; they are computed in real time after three-dimensional reconstruction from the two-dimensional image at the original viewing angle.
Information about the viewing-angle change during the interaction can be obtained from the motion data of the terminal device. The motion data may include: the pose angles of the terminal device in the current space obtained from the built-in gyroscope sensor, the gravity acceleration vector from the acceleration sensor expressed in the device's reference coordinate system, the instantaneous acceleration of the device along each direction, the instantaneous rotation of the device about each axis, and so on.
S220: determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
In the embodiments of the present application, image information at other viewing angles within a certain angular range can be provided on the basis of the original two-dimensional image, presenting the three-dimensional structure of the object. To do so, the user's viewing-angle change value is obtained first, and the image at the corresponding viewing angle is then generated on that basis.
The specific viewing-angle change value may be converted from the motion data of the terminal device. For example, the pose angles of the terminal device relative to the current space, as output by the gyroscope sensor, may be used to characterize changes in the user's viewing angle. When the user tilts the device up and down in the vertical direction, the change of viewing angle in the vertical direction is represented by the pitch angle (pitch); when the user turns the device left and right in the horizontal direction, the change of viewing angle in the horizontal direction is represented by the roll angle (roll). The two components are combined into one vector expressing a viewing angle of any direction and magnitude in the plane:

θ = (pitch, roll)
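The combination of the two components into a single in-plane vector with a direction and a magnitude can be sketched as follows; the helper name is illustrative, not from the patent:

```python
import math

def view_angle_vector(pitch, roll):
    """Combine gyroscope pitch (vertical tilt) and roll (horizontal turn)
    into one in-plane viewing-angle vector."""
    magnitude = math.hypot(pitch, roll)   # size of the viewing-angle change
    direction = math.atan2(pitch, roll)   # direction of the change in the plane
    return magnitude, direction

mag, ang = view_angle_vector(pitch=3.0, roll=4.0)
```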
it should be noted that, in a specific implementation, a user may continuously rotate the device during an interaction process, and may not change the direction and the angle of the device, so that the motion data of the device continuously changes, and accordingly, the viewing angle also continuously changes, and a viewing angle change value from the initial viewing angle can be calculated every time the viewing angle changes.
In addition, the problem that the change of the viewing angle is not smooth due to phenomena such as artificial shaking and the like may exist in the process of rotating the device by a user, so that if the image under the corresponding viewing angle is generated directly according to the converted viewing angle change value, the phenomenon of shaking may occur in the specific generated image change condition, and the image display effect is affected. For this reason, in the preferred embodiment of the present application, an attenuation term may be added to the current view angle variation value to adjust the view angle variation value.
Specifically, assume the initial pose (viewing angle) of the terminal device is:

θ_0 = (pitch_0, roll_0)

the real-time pose of the moving terminal device is:

θ_t = (pitch_t, roll_t)

and the attenuation term is:

η = (η_x, η_y)

Then the real-time viewing-angle change value is:

Δθ = θ_t − θ_0 − η
that is, the current view angle change value may be adjusted by the attenuation term to attenuate the change acceleration so that the view angle change is smooth and natural. In addition, parameters such as a reset threshold and a reset rate can be set, so that the attenuation term is increased along with the increase of the variation acceleration. That is, the value of the attenuation term is not fixed, but can be adjusted according to the changing acceleration of the actual viewing angle, for example,
Figure BDA0002213494680000096
may be (0, 0) and may be based on the value of the previous time (η'x,η′y) Updating the value (eta) of the current timexy):
Figure BDA0002213494680000095
Figure BDA0002213494680000101
Wherein e is reset threshold, r is reset rate, and updating is performed
Figure BDA0002213494680000102
Thereby weakening the view angle change value of the next frame and making the view angle change smooth and natural. When the user changes a posture (namely, the user changes to a target view angle) and keeps still for a certain time under the posture, because the acceleration of the view angle change becomes large, the attenuation term can be set to be equal to or equivalent to (basically equal to) the view angle change value of the target view angle relative to the original view angle, namely, the view angle change value after being adjusted by the attenuation term tends to be 0, so that the view angle reset is realized, and the original image is displayed by returning to the original view angle again. That is to say, in the process of the user rotating the device for interaction, if the user stops rotating after rotating to a certain gesture, it means that the user may complete the interaction process, and therefore, the user can return to the original image at the original viewing angle for displaying again, and the subsequent user can continue to interact on the basis of the gesture. In the embodiment of the present application, through the setting of the attenuation term, the reset threshold, the reset rate, and the like, the effect of automatic visual angle reset can be achieved while the visual angle is smoothly changed.
S230: and generating and displaying the new visual angle image according to the visual angle change value and the depth information.
After the viewing-angle change value is determined, the new viewing-angle image can be generated from it together with the depth information in the image. For example, as shown in fig. 3, suppose the original image of a footwear commodity object on the left is at the original viewing angle. After the user rotates the terminal device to the right about its longitudinal axis by a certain angle, which corresponds to changing the viewing angle, the image of the commodity object at the new viewing angle is displayed on the terminal device; as the figure shows, after the user changes the viewing angle, the displayed commodity image changes angle correspondingly. To make this change easier to see, the right side of fig. 3 shows a front view of the rotated device.
Of course, in the embodiment of the present application, the image corresponding to the new view angle does not need to be shot in advance for the commodity object; instead, it is generated by calculation from the single image shot at the original view angle. The new view angle image may be generated in various specific ways. For example, in one way, the target position information of the pixels of the original image in the new view angle image to be generated may be determined first, and pixel mapping may then be performed according to the target position information to generate the new view angle image. That is, when an image at one view angle is known and the view angle is switched to a nearby view angle, the image at the new view angle consists of substantially the same pixels as the image at the original view angle, but their positions may change to some extent. On the basis of this principle, the embodiment of the application provides an implementation scheme for three-dimensional structure recovery based on the original image.
In a specific implementation, because the number of pixel points in the original image is large, determining the positions of the pixels in the new view angle image one by one would involve a large amount of calculation and place high performance requirements on the terminal device. Therefore, in an optional manner, texture sampling may be performed on the original image to obtain the key point pixels in the original image, the target position information of the key point pixels in the new view angle image to be generated is determined, and the target position information of the remaining pixels is then determined by point propagation. Specifically, when the new view angle image is generated, the key point pixels can be mapped according to their target position information, and the neighboring pixels of the key point pixels are filled in by point propagation, so that the new view angle image is obtained. The key point pixels may specifically include the corner points in the original image and the points of the foreground/background separation regions.
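A minimal sketch of the key point selection step, assuming NumPy arrays. The grid sampling used here is a cheap stand-in for real corner detection, and the gradient threshold is an illustrative assumption; the sketch only demonstrates picking sparse samples plus the depth-discontinuity (foreground/background separation) points.

```python
import numpy as np

def select_keypoints(depth, grad_thresh=0.1, grid_step=8):
    """Pick key point pixels: points on a coarse grid (a cheap stand-in
    for corner detection) plus points where the depth value changes
    sharply, i.e. the foreground/background separation regions."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[::grid_step, ::grid_step] = True   # coarse texture samples
    gy, gx = np.gradient(depth)             # depth gradients along rows/cols
    mask |= np.hypot(gx, gy) > grad_thresh  # depth-discontinuity points
    return np.argwhere(mask)                # (row, col) key point coordinates
```

Only these key points would then be mapped exactly; the remaining pixels are filled in by propagating from their nearest key points, which keeps the per-frame cost low on a terminal device.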
Specifically, according to the binocular parallax imaging principle, for a pixel point P0 in the original image with texture coordinate (x0, y0) and depth value z, let P1 = (x1, y1) be the position of the pixel point in the new view angle image after the view angle changes, let f be the camera focal length, and let (bx, by) be the equivalent baseline offset corresponding to the view angle change. By similar triangles, the following relation is satisfied:

    (x1 − x0) / f = bx / z,    (y1 − y0) / f = by / z

therefore:

    x1 = x0 + f · bx / z,    y1 = y0 + f · by / z
Therefore, by the above method, the generation of the new view angle image can be treated as a texture-mapping rendering problem: texture sampling is performed in the fragment shader of a rendering pipeline such as OpenGL according to the above principle, the hole areas are filled from neighboring pixels, and the new view angle image can thus be generated in real time.
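A CPU-side sketch of this warp-and-fill step, assuming NumPy arrays and a horizontal baseline only (the GPU implementation in a fragment shader would follow the same relation). The hole-filling strategy here — propagating the last seen pixel along the row — is an illustrative simplification, not the patent's exact method.

```python
import numpy as np

def warp_to_new_view(image, depth, f, baseline_x):
    """Forward-map each pixel by the binocular-parallax relation
    x1 = x0 + f * baseline / z, then fill the holes (disoccluded
    areas) from the nearest already-filled pixel on the same row."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x0 in range(w):
            # per-pixel disparity: nearer pixels (small z) shift more
            x1 = int(round(x0 + f * baseline_x / depth[y, x0]))
            if 0 <= x1 < w:
                out[y, x1] = image[y, x0]
                filled[y, x1] = True
        # crude hole filling: propagate the last seen pixel rightwards
        last = image[y, 0]
        for x in range(w):
            if filled[y, x]:
                last = out[y, x]
            else:
                out[y, x] = last
    return out
```

Because the shift depends on 1/z, foreground and background pixels move by different amounts, which is exactly what produces the parallax impression when the view angle changes.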
In a specific implementation process, during the interaction through the original view angle image and the new view angle images generated in real time, more interactions can also be performed at the information level of the commodity object. For example, at any view angle, if a long press or similar operation is performed on the commodity object image, more information about the commodity object can be provided, for example including price attributes, selling point information (whether it is a hot-selling item, whether it participates in a certain promotional event, and so on), and the like.
Therefore, in the embodiment of the application, by using a commodity object image shot at one specific angle and acquiring the depth information in the image, the three-dimensional structure of the object can be restored from a simple two-dimensional image, and interaction with the user is thereby realized; specifically, the user can change the view angle by rotating the terminal device or the like, and the system can generate images at more view angles for the user. The embodiment of the application can therefore realize such interaction with the user at lower cost.
In particular, in order to avoid distortion of the generated new view angle image, the new view angle image may be generated within a range of view angle change values near the original view angle (for example, within 20 degrees or the like). That is, small-angle three-dimensional structure restoration can be achieved around the original view angle.
The specific view angle change range can also be configured by the user according to his or her needs; that is, the range of view angle change values can be determined according to the configuration information of the user. For example, a parallax intensity adjustment interface may be provided for the user, where the intensity is divided into levels such as "strong", "medium", and "weak"; the greater the intensity, the more obvious the parallax change.
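One way to realize this configuration is to map each intensity level to a permitted change range and clamp the measured change. The concrete degree values below are illustrative assumptions only:

```python
# Hypothetical mapping from the user-facing parallax-intensity setting to
# the permitted view-angle change range (degrees around the original view).
INTENSITY_TO_RANGE = {"weak": 8.0, "medium": 14.0, "strong": 20.0}

def clamp_view_change(raw_change, intensity="medium"):
    """Limit the view-angle change value to the configured range so the
    generated image does not distort too far from the original view."""
    limit = INTENSITY_TO_RANGE[intensity]
    return max(-limit, min(limit, raw_change))
```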
In addition, the visual effect during the view angle change can be altered by setting the focal position, which may for example be in front, behind, or in the middle. If the focus is in front, the background area of the image changes obviously during the view angle change; if the focus is behind, the foreground area changes obviously; and if the focus is in the middle, the foreground and background may change simultaneously, and so on. In particular, the setting of the focal position may be configured by the user; that is, the user may also be provided with an entry for configuring the focal position, through which a specific focal position can be selected according to his or her needs or preferences.
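This focal-position behavior follows directly from measuring parallax relative to a chosen focal plane: pixels at the focal depth do not move, and pixels far from it move the most. A one-line sketch (function name and parameters are assumptions for illustration):

```python
def parallax_shift(z, z_focus, f, baseline):
    """Per-pixel parallax relative to a chosen focal plane. Pixels at the
    focal depth stay put; with the focus in front (small z_focus) the
    background moves most, with the focus behind it is the foreground."""
    return f * baseline * (1.0 / z - 1.0 / z_focus)
```

For example, with the focus in front, a background pixel (large z) receives a larger absolute shift than a near-foreground pixel, matching the described "background changes obviously" effect.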
Example two
In the first embodiment, after an interaction request from a user is received, images at more view angles are generated in real time according to the motion data of the terminal device and displayed. In the second embodiment, images at more different view angles may instead be generated in advance from the original image and then directly displayed in the information page associated with the commodity object. In this way, a merchant user or the like only needs to provide the commodity object image at the original view angle, and the system can generate and display images at more view angles, supplementing the commodity image information. In addition, because multiple images at different view angles are displayed directly in the commodity object information page, the user can see images at more view angles while the requirements on the performance of the terminal device are reduced (the terminal device is not required to be equipped with hardware such as a gyroscope). Specifically, referring to fig. 4, the second embodiment provides a method for displaying information of a commodity object, and the method may specifically include:
S410: acquiring original image information of a commodity object, wherein the original image information is acquired by collecting a real object of the commodity object at an original view angle and comprises depth information;
S420: selecting a plurality of different view angles by offsetting the original view angle;
S430: generating images at the different view angles according to the view angle offsets of the different view angles relative to the original view angle and the depth information;
The specific view angle offset may be determined according to specific needs; of course, in order to avoid excessive distortion of the image, the view angle offset may be controlled within a preset offset range, for example within 20 degrees to the left or right of the original view angle, and so on.
S440: and providing the original view angle image and the images under the plurality of different view angles in the information page of the commodity object.
Specifically, when the original view angle image and more images at different view angles are provided in the commodity object information page, the images at the various view angles can each be displayed separately. Of course, in practical applications, in order to prevent too many images with similar content from occupying too much of the information page and to make comparison between different view angles easier, the original view angle image and the images at the multiple different view angles may also be combined into one display image shown in the information page.
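A trivial sketch of the combining step, assuming NumPy image arrays; side-by-side concatenation is one possible layout among many for the single display image:

```python
import numpy as np

def compose_views(images):
    """Concatenate the original-view image and the generated new-view
    images side by side into one display image for the info page."""
    h = min(im.shape[0] for im in images)       # align heights defensively
    return np.concatenate([im[:h] for im in images], axis=1)
```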
EXAMPLE III
In the first and second embodiments, a specific commodity object information display method is provided for a specific scene, namely a commodity object information service system, and in that method, three-dimensional structure restoration based on an original two-dimensional image can be realized. In practical applications, this scheme for three-dimensional structure restoration based on an original two-dimensional image can also be applied to other scenes, such as museums and scenic spots, and may even include medical scenes, entertainment scenes, and the like.
Specifically, a medical scene mainly involves online medical treatment. In the process of an online consultation with a doctor about a condition, the patient generally needs to submit some picture information, including pictures of the specific affected part, and the doctor makes a diagnosis by viewing those pictures. In the embodiment of the application, after the patient submits a picture, pictures at more view angles can be generated and sent to the doctor together, so that the doctor can view details of the affected part from more view angles. Alternatively, an interactive option can be provided at the doctor's client side: after the doctor clicks to view the picture provided by the patient, the picture can be displayed in full screen, and pictures at more view angles can be viewed by rotating a terminal device such as a mobile phone, so that the condition of the affected part can be understood more comprehensively, and so on.
As another example, an entertainment scene may involve photo processing tools whose basic function is to process photos taken by the user, including beautifying them, adding material, and so on. With the scheme provided by the embodiment of the application, interaction in which the view angle is changed to view more images can also be provided for the photos taken by the user. Specifically, after the user loads a photo into the photo processing tool, a specific interactive entry may be provided, through which the user can view images at more view angles by rotating the mobile phone or the like, and can also store the images at more view angles locally on the mobile phone, and so on.
In summary, in the third embodiment of the present application, a method for displaying scene information is further provided, with reference to fig. 5, where the method specifically includes:
s510: obtaining initial image information of a target scene, wherein the initial image information is obtained by collecting the target scene under an original visual angle and comprises depth information;
the initial image information of a specific scene can also be obtained in various ways, including taking a picture of a target scene from an initial viewing angle by using a binocular camera or a TOF depth camera, or taking a picture by using a common monocular camera, and estimating depth information therein by using a depth learning algorithm, and the like.
S520: in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
s530: determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
s540: and generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
The specific implementation manners of obtaining the device motion data, determining the view angle change value, generating a new view angle image, and the like may be the same as those in the first embodiment, and therefore, the implementation may be performed with reference to the first embodiment, and details are not described here.
Corresponding to the first embodiment, an embodiment of the present application further provides a merchandise object information display apparatus, referring to fig. 6, the apparatus may include:
a motion data obtaining unit 610, configured to obtain motion data of a related terminal device in a process of displaying original image information of a commodity object; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
a view change value determining unit 620, configured to determine a view change value of the current view of the user relative to the original view according to the motion data;
a new perspective image generating unit 630, configured to generate and display a new perspective image of the commodity object according to the perspective change value and the depth information.
Wherein the original image information is received from a client associated with a publisher user of the commodity object.
In a specific implementation, the apparatus may further include:
the operation option providing unit is used for providing thumbnail information of the original image related to the commodity object and operation options for interacting with the original image in an information page related to the commodity object;
and the interaction starting unit is used for displaying the original image in a full screen mode after receiving an operation request through the operation option and starting to acquire the motion data of the associated terminal equipment so as to generate and display a new visual angle image of the commodity object according to the visual angle change value determined in real time.
In a specific implementation, the apparatus may further include:
and the background blurring processing unit is used for performing background blurring processing on the original image according to the depth information.
The viewing angle change value determination unit may be specifically configured to:
and determining the view angle change value of the user according to the terminal equipment pose angle data output by the sensor equipped in the terminal equipment.
In addition, the apparatus may further include:
and the visual angle change value adjusting unit is used for adjusting the current visual angle change value by setting an attenuation item so as to weaken the visual angle change acceleration.
Wherein the decay term increases with increasing acceleration of the view angle change.
In addition, the apparatus may further include:
and the visual angle resetting unit is used for, if the device is kept static after rotating to a target visual angle, setting the attenuation term equal or equivalent to the visual angle change value of the target visual angle relative to the original visual angle, so that the visual angle change value approaches zero, the visual angle is reset to the original visual angle, and the original image is displayed.
The new perspective image generating unit may specifically include:
the position determining subunit is used for determining target position information of pixels in the original image in a new view angle image to be generated;
and the mapping subunit is used for mapping pixels according to the target position information to generate the new view angle image.
The position determining subunit may specifically include:
a key point pixel determination subunit, configured to perform texture sampling on the original image to obtain key point pixels in the original image;
a key point position determining subunit, configured to determine target position information of the key point pixel in the new perspective image to be generated;
and the other-point position determining subunit is used for determining the target position information of the other pixels in the new view angle image to be generated in a point propagation mode.
Wherein the keypoint pixels comprise: the corner points in the original image and the points of the foreground and background separated regions (the points of the regions with obvious depth information change).
In another specific implementation, the new viewing angle image may be generated within a range of viewing angle variation values around the original viewing angle.
The range of the view angle change value can be determined according to configuration information of a user.
In addition, the apparatus may further include:
and the visual effect control unit is used for determining the visual effect in the visual angle change process according to the focal position.
Wherein the focal position may be determined according to configuration information of a user.
Corresponding to the second embodiment, the embodiment of the present application further provides a device for displaying information of a commodity object, referring to fig. 7, where the device may specifically include:
an original image obtaining unit 701, configured to obtain original image information of a commodity object, where the original image information is obtained by collecting a real object of the commodity object at an original view angle, and includes depth information;
a view selecting unit 702, configured to select a plurality of different views by offsetting the original view;
an image generating unit 703, configured to generate images under different viewing angles according to viewing angle offsets of the different viewing angles with respect to the original viewing angle and the depth information;
an image displaying unit 704, configured to provide the original perspective image and the images at the multiple different perspectives in the information page of the commodity object.
Wherein the viewing angle offset is within a preset offset range.
Specifically, the image display unit may be specifically configured to:
and synthesizing the original view angle image and the images under the different view angles into the same display image for displaying in the information page.
Corresponding to the third embodiment, an embodiment of the present application further provides a scene information display apparatus; referring to fig. 8, the apparatus may include:
an initial image obtaining unit 810, configured to obtain initial image information of a target scene, where the initial image information is obtained by collecting the target scene at an original view angle and includes depth information;
a motion data obtaining unit 820, configured to obtain motion data of a related terminal device in a process of displaying the target scene information;
a view change value determining unit 830, configured to determine, according to the motion data, a view change value of a current view of the user relative to an original view;
and a new perspective image generating unit 840, configured to generate and display a new perspective image of the target scene according to the perspective change value and the depth information.
Furthermore, an embodiment of the present application further provides an electronic device, including:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
in the process of displaying the original image information of the commodity object, obtaining motion data of the associated terminal equipment; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new visual angle image of the commodity object according to the visual angle change value and the depth information and displaying the new visual angle image.
An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring original image information of a commodity object, wherein the original image information is acquired by acquiring a real object of the commodity object under an original visual angle and comprises depth information;
selecting a plurality of different visual angles in a mode of offsetting the original visual angle;
generating images under different visual angles according to visual angle offset of the different visual angles relative to the original visual angle and the depth information;
and providing the original view angle image and the images under the plurality of different view angles in the information page of the commodity object.
And another electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
obtaining initial image information of a target scene, wherein the initial image information is obtained by collecting the target scene under an original visual angle and comprises depth information;
in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
Fig. 9 exemplarily illustrates the architecture of such an electronic device. For example, the device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, an aircraft, and the like.
Referring to fig. 9, device 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls the overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods provided by the disclosed solution. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 906 provides power to the various components of the device 900. The power components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia components 908 include a screen that provides an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the device 900. For example, the sensor component 914 may detect the open/closed state of the device 900 and the relative positioning of components such as the display and keypad of the device 900; it may also detect a change in the position of the device 900 or of a component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and a change in the temperature of the device 900. The sensor component 914 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the device 900 and other devices in a wired or wireless manner. The device 900 may access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE, 5G, etc. mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the device 900 to perform the methods provided by the present disclosure is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are only illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The method, the device and the electronic device for displaying the commodity object information provided by the application are introduced in detail, specific examples are applied in the description to explain the principle and the implementation mode of the application, and the description of the embodiments is only used for helping to understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, the specific embodiments and the application range may be changed. In view of the above, the description should not be taken as limiting the application.

Claims (25)

1. A commodity object information display method is characterized by comprising the following steps:
in the process of displaying the original image information of the commodity object, obtaining motion data of the associated terminal equipment; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new visual angle image of the commodity object according to the visual angle change value and the depth information and displaying the new visual angle image.
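The generation step of claim 1 — producing a new-view image from the original image, its per-pixel depth, and a view angle change derived from device motion — can be illustrated with a minimal depth-parallax sketch. The inverse-depth disparity model, the `parallax_scale` parameter, and the far-to-near painting order are assumptions made for illustration, not the claimed implementation:

```python
import numpy as np

def render_new_view(image, depth, view_angle_change, parallax_scale=20.0):
    """Shift each pixel horizontally by a parallax proportional to its
    inverse depth (nearer pixels move more), approximating a small
    rotation of the viewpoint around the pictured object.

    image: (H, W, 3) float array, the original-view image.
    depth: (H, W) float array of per-pixel depth (larger = farther).
    view_angle_change: signed view angle change relative to the original view.
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    # Disparity: inverse depth, normalized so the nearest pixel has disparity 1.
    disparity = 1.0 / depth
    disparity /= disparity.max()
    shift = np.round(view_angle_change * parallax_scale * disparity).astype(int)
    ys, xs = np.mgrid[0:h, 0:w]
    new_x = np.clip(xs + shift, 0, w - 1)
    # Forward-map source pixels far-to-near so nearer pixels overwrite farther ones.
    order = np.argsort(depth, axis=None)[::-1]
    out[ys.ravel()[order], new_x.ravel()[order]] = image[ys.ravel()[order], xs.ravel()[order]]
    return out
```

Nearer pixels (small depth) receive a larger shift, which is what produces the motion-parallax impression as the view angle change value grows.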
2. The method of claim 1,
the raw image information is received from a client associated with a publisher user of the commodity object.
3. The method of claim 1, further comprising, prior to the method:
providing thumbnail information of the original image associated with the commodity object and operation options for interacting with the original image in an information page associated with the commodity object;
and after receiving an operation request through the operation option, displaying the original image in a full screen mode, and starting to acquire motion data of associated terminal equipment so as to generate and display a new visual angle image of the commodity object according to the visual angle change value determined in real time.
4. The method of claim 1, further comprising:
and performing background blurring processing on the original image according to the depth information.
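The background blurring of claim 4 keys the blur off the depth channel: pixels deeper than some threshold are treated as background while the foreground stays sharp. A minimal box-blur sketch; the `depth_threshold` and `blur_radius` parameters are illustrative, not values from the patent:

```python
import numpy as np

def blur_background(image, depth, depth_threshold, blur_radius=2):
    """Box-blur pixels whose depth exceeds depth_threshold (the background),
    leaving foreground pixels untouched."""
    h, w = image.shape[:2]
    padded = np.pad(image, ((blur_radius,) * 2, (blur_radius,) * 2, (0, 0)),
                    mode='edge')
    # Accumulate the (2r+1) x (2r+1) neighborhood of every pixel.
    blurred = np.zeros_like(image, dtype=float)
    k = 2 * blur_radius + 1
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    # Select blurred values only where the depth marks background.
    mask = (depth > depth_threshold)[..., None]
    return np.where(mask, blurred, image)
```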
5. The method of claim 1,
the determining of the view angle change value according to the motion data comprises:
determining the view angle change value of the user according to pose angle data of the terminal device output by a sensor with which the terminal device is equipped.
6. The method of claim 1, further comprising:
and adjusting the current view angle change value by setting an attenuation term so as to weaken the acceleration of the view angle change.
7. The method of claim 6,
the attenuation term increases as the acceleration of the view angle change increases.
8. The method of claim 7, further comprising:
and if the terminal device is kept static after being rotated to a target visual angle, setting the attenuation term to be equal or equivalent to the visual angle change value of the target visual angle relative to the original visual angle, so that the visual angle change value approaches zero, the visual angle is reset to the original visual angle, and the original image is displayed.
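Claims 6 to 8 describe an attenuation term that grows with the acceleration of the view angle change and, when the device is held static at a target angle, cancels the accumulated offset so the view drifts back to the original angle. A per-frame sketch under these assumptions; the linear damping law and the `damping_gain` value are illustrative:

```python
def damped_view_change(raw_change, prev_change, damping_gain=0.15, reset=False):
    """Return the view angle change value to display this frame.

    raw_change:  change value computed directly from the motion data.
    prev_change: value displayed on the previous frame.
    reset:       True when the device is held static at a target angle,
                 in which case the offset decays toward zero each frame.
    """
    if reset:
        # Attenuation cancels the accumulated offset; the view returns
        # gradually to the original angle.
        return prev_change * (1.0 - damping_gain)
    acceleration = raw_change - prev_change
    attenuation = damping_gain * acceleration  # grows with the acceleration
    return raw_change - attenuation
```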
9. The method of claim 1,
the generating the new view image comprises:
determining target position information of pixels in the original image in a new visual angle image to be generated;
and mapping pixels according to the target position information to generate the new visual angle image.
10. The method of claim 9,
the determining of the target position information of the pixel in the original image in the new view angle image to be generated includes:
obtaining key point pixels in the original image by performing texture sampling on the original image;
determining target position information of the key point pixels in a new visual angle image to be generated;
and determining the target position information of the other pixels in the new view angle image to be generated in a point propagation mode.
11. The method of claim 10,
the keypoint pixels include: corner points in the original image and points in the region where the foreground and the background separate.
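Claims 9 to 11 split new-view generation into two steps: determining target positions for a sparse set of key-point pixels (corner points and foreground/background separation points obtained by texture sampling), then propagating positions to the remaining pixels. A nearest-key-point propagation sketch is shown below; real point propagation would be more elaborate, and all names here are illustrative:

```python
import numpy as np

def propagate_offsets(keypoints, offsets, pixels):
    """Assign each remaining pixel the displacement of its nearest key point.

    keypoints: (K, 2) array of key-point coordinates in the original image.
    offsets:   (K, 2) array of their displacements to the new-view image.
    pixels:    (N, 2) array of the remaining pixel coordinates.
    Returns the (N, 2) target positions of `pixels` in the new-view image.
    """
    # Squared distance from every pixel to every key point.
    d2 = ((pixels[:, None, :] - keypoints[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return pixels + offsets[nearest]
```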
12. The method of claim 1,
and the new visual angle image is generated within a range of visual angle change values near the original visual angle.
13. The method of claim 12,
and the range of the view angle change value is determined according to the configuration information of the user.
14. The method of claim 1, further comprising:
and determining the visual effect in the visual angle change process according to the focal position.
15. The method of claim 14,
the focal position is determined based on configuration information of a user.
16. A commodity object information display method is characterized by comprising the following steps:
acquiring original image information of a commodity object, wherein the original image information is acquired by acquiring a real object of the commodity object under an original visual angle and comprises depth information;
selecting a plurality of different visual angles by offsetting the original visual angle;
generating images at the different visual angles according to visual angle offsets of the different visual angles relative to the original visual angle and the depth information;
and providing the original view angle image and the images under the plurality of different view angles in the information page of the commodity object.
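Claims 16 to 18 pre-generate images at several visual angles obtained by offsetting the original visual angle within a preset range, for display in the information page. The symmetric, evenly spaced offsets below are an illustrative choice, not mandated by the claims:

```python
def offset_view_angles(original_angle, max_offset, num_views):
    """Pick num_views visual angles by offsetting original_angle
    symmetrically within [-max_offset, +max_offset].

    Each returned angle would then be fed to the same depth-based
    renderer to produce one image for the information page.
    """
    step = 2.0 * max_offset / (num_views - 1)
    return [original_angle - max_offset + i * step for i in range(num_views)]
```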
17. The method of claim 16,
the visual angle offset is within a preset offset range.
18. The method of claim 16,
the providing the original perspective image and the images at the plurality of different perspectives comprises:
and synthesizing the original view angle image and the images under the different view angles into the same display image for displaying in the information page.
19. A scene information display method is characterized by comprising the following steps:
obtaining initial image information of a target scene, wherein the initial image information is obtained by collecting the target scene under an original visual angle and comprises depth information;
in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
20. An apparatus for displaying information on an object of merchandise, comprising:
the motion data acquisition unit is used for acquiring motion data of the associated terminal equipment in the process of displaying the original image information of the commodity object; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
the visual angle change value determining unit is used for determining a visual angle change value of the current visual angle of the user relative to the original visual angle according to the motion data;
and the new visual angle image generating unit is used for generating and displaying a new visual angle image of the commodity object according to the visual angle change value and the depth information.
21. An apparatus for displaying information on an object of merchandise, comprising:
the system comprises an original image obtaining unit, a depth information acquiring unit and a display unit, wherein the original image obtaining unit is used for obtaining original image information of a commodity object, and the original image information is obtained by collecting a real object of the commodity object under an original visual angle and comprises depth information;
the visual angle selecting unit is used for selecting a plurality of different visual angles by offsetting the original visual angle;
the image generating unit is used for generating images at the different visual angles according to visual angle offsets of the different visual angles relative to the original visual angle and the depth information;
and the image display unit is used for providing the original view angle image and the images under the different view angles in the information page of the commodity object.
22. A scene information presentation apparatus, comprising:
an initial image obtaining unit, configured to obtain initial image information of a target scene, where the initial image information is obtained by acquiring the target scene at an original view angle and includes depth information;
the motion data acquisition unit is used for acquiring motion data of the associated terminal equipment in the process of displaying the target scene information;
the visual angle change value determining unit is used for determining a visual angle change value of the current visual angle of the user relative to the original visual angle according to the motion data;
and the new visual angle image generating unit is used for generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
23. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
in the process of displaying the original image information of the commodity object, obtaining motion data of the associated terminal equipment; the original image information is obtained by collecting a real object of the commodity object under an original view angle, and comprises depth information;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating a new visual angle image of the commodity object according to the visual angle change value and the depth information and displaying the new visual angle image.
24. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
acquiring original image information of a commodity object, wherein the original image information is acquired by acquiring a real object of the commodity object under an original visual angle and comprises depth information;
selecting a plurality of different visual angles by offsetting the original visual angle;
generating images at the different visual angles according to visual angle offsets of the different visual angles relative to the original visual angle and the depth information;
and providing the original view angle image and the images under the plurality of different view angles in the information page of the commodity object.
25. An electronic device, comprising:
one or more processors; and
a memory associated with the one or more processors for storing program instructions that, when read and executed by the one or more processors, perform operations comprising:
obtaining initial image information of a target scene, wherein the initial image information is obtained by collecting the target scene under an original visual angle and comprises depth information;
in the process of displaying the target scene information, obtaining motion data of associated terminal equipment;
determining a view angle change value of the current view angle of the user relative to the original view angle according to the motion data;
and generating and displaying a new visual angle image of the target scene according to the visual angle change value and the depth information.
CN201910906733.4A 2019-09-24 2019-09-24 Commodity object information display method and device and electronic equipment Active CN112634339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906733.4A CN112634339B (en) 2019-09-24 2019-09-24 Commodity object information display method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112634339A true CN112634339A (en) 2021-04-09
CN112634339B CN112634339B (en) 2024-05-31

Family

ID=75282861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906733.4A Active CN112634339B (en) 2019-09-24 2019-09-24 Commodity object information display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112634339B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113891057A (en) * 2021-11-18 2022-01-04 北京字节跳动网络技术有限公司 Video processing method and device, electronic equipment and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006011153A2 (en) * 2004-07-30 2006-02-02 Extreme Reality Ltd. A system and method for 3d space-dimension based image processing
CN101271583A (en) * 2008-04-28 2008-09-24 清华大学 Fast image drafting method based on depth drawing
CN101631257A (en) * 2009-08-06 2010-01-20 中兴通讯股份有限公司 Method and device for realizing three-dimensional playing of two-dimensional video code stream
US20100066732A1 (en) * 2008-09-16 2010-03-18 Microsoft Corporation Image View Synthesis Using a Three-Dimensional Reference Model
CN102427547A (en) * 2011-11-15 2012-04-25 清华大学 Multi-angle stereo rendering apparatus
WO2013039470A1 (en) * 2011-09-12 2013-03-21 Intel Corporation Using motion parallax to create 3d perception from 2d images
US20150102995A1 (en) * 2013-10-15 2015-04-16 Microsoft Corporation Automatic view adjustment
CN105096180A (en) * 2015-07-20 2015-11-25 北京易讯理想科技有限公司 Commodity information display method and apparatus based augmented reality
CN107945282A (en) * 2017-12-05 2018-04-20 洛阳中科信息产业研究院(中科院计算技术研究所洛阳分所) The synthesis of quick multi-view angle three-dimensional and methods of exhibiting and device based on confrontation network
CN108198044A (en) * 2018-01-30 2018-06-22 北京京东金融科技控股有限公司 Methods of exhibiting, device, medium and the electronic equipment of merchandise news
CN108234985A (en) * 2018-03-21 2018-06-29 南阳师范学院 The filtering method under the dimension transformation space of processing is rendered for reversed depth map
CN109218706A (en) * 2018-11-06 2019-01-15 浙江大学 A method of 3 D visual image is generated by single image
US20190035157A1 (en) * 2017-07-26 2019-01-31 Samsung Electronics Co., Ltd. Head-up display apparatus and operating method thereof
CN109584340A (en) * 2018-12-11 2019-04-05 苏州中科广视文化科技有限公司 New Century Planned Textbook synthetic method based on depth convolutional neural networks


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhao Lin; Gao Xinbo; Tian Chunna: "A Survey of Frontal Face Image Synthesis Methods", Journal of Image and Graphics, no. 01, 16 January 2013 (2013-01-16) *
Wei Shanshan; Xie Wei; He Zhiqiang: "A Survey of Digital Video Stabilization Techniques", Journal of Computer Research and Development, no. 09, 15 September 2017 (2017-09-15) *



Similar Documents

Publication Publication Date Title
CN110602396B (en) Intelligent group photo method and device, electronic equipment and storage medium
KR102215166B1 (en) Providing apparatus, providing method and computer program
CN110321048B (en) Three-dimensional panoramic scene information processing and interacting method and device
JP2020507136A (en) VR object synthesizing method, apparatus, program, and recording medium
CN103945045A (en) Method and device for data processing
EP3221851A1 (en) Systems and methods for 3d capture of objects using multiple range cameras and multiple rgb cameras
TWI547901B (en) Simulating stereoscopic image display method and display device
CN109218630B (en) Multimedia information processing method and device, terminal and storage medium
TW201701051A (en) Panoramic stereoscopic image synthesis method, apparatus and mobile terminal
US20150326847A1 (en) Method and system for capturing a 3d image using single camera
CN112614228A (en) Method and device for simplifying three-dimensional grid, electronic equipment and storage medium
WO2016184285A1 (en) Article image processing method, apparatus and system
CN109308740B (en) 3D scene data processing method and device and electronic equipment
CN112634339B (en) Commodity object information display method and device and electronic equipment
CN112738399B (en) Image processing method and device and electronic equipment
CN107204026B (en) Method and device for displaying animation
CN112511815B (en) Image or video generation method and device
JP7296735B2 (en) Image processing device, image processing method and program
CN115379195B (en) Video generation method, device, electronic equipment and readable storage medium
CN116939275A (en) Live virtual resource display method and device, electronic equipment, server and medium
CN106713893B (en) Mobile phone 3D solid picture-taking methods
CN117201883A (en) Method, apparatus, device and storage medium for image editing
CN114143455B (en) Shooting method and device and electronic equipment
CN113721874A (en) Virtual reality picture display method and electronic equipment
CN113989424A (en) Three-dimensional virtual image generation method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant