WO2023202349A1 - Interactive presentation method, apparatus, device, medium and program product for three-dimensional labels - Google Patents


Info

Publication number: WO2023202349A1
Authority: WIPO (PCT)
Prior art keywords: three-dimensional, target, three-dimensional model, point, scene
Application number: PCT/CN2023/085213
Other languages: English (en), French (fr)
Inventor: 王怡丁
Original Assignee: 如你所视(北京)科技有限公司
Application filed by 如你所视(北京)科技有限公司
Publication of WO2023202349A1


Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 19/00: Manipulating 3D models or images for computer graphics
                    • G06T 19/003: Navigation within 3D models or images
                    • G06T 19/006: Mixed reality
                • G06T 2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
                    • G06T 2219/004: Annotating, labelling

Definitions

  • the present disclosure relates to the field of computer technology, and in particular to an interactive presentation method, apparatus, electronic device, storage medium and computer program product for three-dimensional labels.
  • Three-dimensional scenes can immerse users into the environment through interactive three-dimensional dynamic views that integrate multi-source information, allowing users to watch different content from different perspectives.
  • the information about the display objects can be presented in the three-dimensional scene in the form of labels.
  • Embodiments of the present disclosure provide an interactive presentation method, device, electronic device, storage medium, and computer program product for three-dimensional labels, which are used to improve the display effect of three-dimensional labels in a three-dimensional scene.
  • a method for interactive presentation of three-dimensional labels includes:
  • Based on the user's point position and viewing angle when browsing the three-dimensional scene, at least one target three-dimensional model is determined from the three-dimensional models visible to the user; the target three-dimensional model corresponds to one three-dimensional label to be marked, a plurality of reference points for marking the three-dimensional label, and the reference plane where each reference point is located.
  • the reference point that corresponds to the target three-dimensional model and is closest to the point position is determined as the mark point corresponding to the target three-dimensional model, and the reference plane where the mark point is located is determined as the display plane corresponding to the target three-dimensional model, thereby obtaining the mark point and display plane corresponding to the target three-dimensional model;
  • based on the mark point and display plane corresponding to the target three-dimensional model, the spatial pose of the three-dimensional label corresponding to the target three-dimensional model in the three-dimensional scene is determined, and the three-dimensional label corresponding to the target three-dimensional model is presented in the three-dimensional scene according to that spatial pose.
  • an interactive presentation device for three-dimensional labels includes a first determination unit configured to determine, based on the user's point position and viewing angle when browsing the three-dimensional scene, at least one target three-dimensional model from the three-dimensional models visible to the user; the target three-dimensional model corresponds to one three-dimensional label to be marked, a plurality of reference points for marking the three-dimensional label, and the reference plane where each of the reference points is located;
  • the second determination unit is configured to determine the reference point that corresponds to the target three-dimensional model and is closest to the point position as the mark point corresponding to the target three-dimensional model, and to determine the reference plane where the mark point is located as the display plane corresponding to the target three-dimensional model, thereby obtaining the mark point and display plane corresponding to the target three-dimensional model;
  • the pose determination unit is configured to determine the spatial pose of the three-dimensional label corresponding to the target three-dimensional model in the three-dimensional scene based on the marker point corresponding to the target three-dimensional model and the display plane;
  • the label presentation unit is configured to present, in the three-dimensional scene, the three-dimensional label corresponding to the target three-dimensional model based on the spatial pose of that three-dimensional label in the three-dimensional scene.
  • an electronic device including: a memory for storing a computer program product;
  • a processor configured to execute a computer program product stored in the memory, and when the computer program product is executed, implement the interactive presentation method of a three-dimensional label provided in any one of the above embodiments of the present disclosure.
  • a computer-readable storage medium having program code stored thereon, where the program code can be called and executed by a processor to implement the interactive presentation method for three-dimensional labels provided in any of the above embodiments of the present disclosure.
  • a computer program product including computer program instructions; when the computer program instructions are executed by a processor, the interactive presentation method for three-dimensional labels provided in any of the above embodiments of the present disclosure is implemented.
  • Based on the user's point position and viewing angle when browsing the 3D scene, the target 3D models visible to the user can be determined from the 3D models in the 3D scene; the reference point in each target 3D model that is closest to the point position is determined as the mark point, and the reference plane where the mark point is located is determined as the display plane; the spatial pose of the 3D label is then determined based on the mark point and display plane; finally, the 3D label corresponding to at least one target 3D model is presented in the 3D scene according to that spatial pose. This ensures that the spatial pose of the 3D label matches the point position, allowing users to obtain the information of the 3D model more intuitively and conveniently, thereby improving the display effect of 3D labels in the 3D scene.
  • Figure 1 is a schematic flow chart of one embodiment of the interactive presentation method of three-dimensional labels of the present disclosure
  • Figure 2 is a schematic diagram of a presentation method of three-dimensional labels in one embodiment of the interactive presentation method of three-dimensional labels of the present disclosure
  • Figure 3 is a schematic scene diagram of one embodiment of the interactive presentation method of three-dimensional labels of the present disclosure
  • Figure 4 is a schematic diagram of the marking position of the three-dimensional label in one embodiment of the interactive presentation method of the three-dimensional label of the present disclosure
  • Figure 5 is a schematic flowchart of another embodiment of the interactive presentation method of three-dimensional labels of the present disclosure.
  • Figure 6 is a schematic flowchart of another embodiment of the interactive presentation method of three-dimensional labels of the present disclosure.
  • Figure 7 is a schematic structural diagram of an embodiment of an interactive presentation device for three-dimensional labels of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure.
  • In the present disclosure, "plural" may refer to two or more, and "at least one" may refer to one, two, or more.
  • the term "and/or" in the present disclosure merely describes an association relationship between related objects, indicating that three relationships are possible.
  • For example, "A and/or B" can represent three cases: A alone exists, A and B exist simultaneously, and B alone exists.
  • the character "/" in this disclosure generally indicates that the related objects are in an "or" relationship.
  • Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general or special purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments and/or configurations suitable for use with electronic devices such as terminal devices, computer systems and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems.
  • Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system executable instructions (such as program modules) being executed by the computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., that perform specific tasks or implement specific abstract data types.
  • the computer system/server may be implemented in a distributed cloud computing environment where tasks are performed by remote processing devices linked through a communications network.
  • program modules may be located on local or remote computing system storage media including storage devices.
  • Figure 1 shows a flow chart of one embodiment of the interactive presentation method of three-dimensional labels of the present disclosure. As shown in Figure 1, the process includes the following steps:
  • Step 110: Based on the user's point position and viewing angle when browsing the three-dimensional scene, determine at least one target three-dimensional model from the three-dimensional models visible to the user.
  • the target three-dimensional model corresponds to a three-dimensional label to be marked, a plurality of reference points for marking the three-dimensional labels, and the reference plane where each reference point is located.
  • the target three-dimensional model represents a user-visible three-dimensional model to be marked.
  • When the user browses the three-dimensional scene, the user's visible area in the three-dimensional scene can be determined based on the point position and viewing angle.
  • the three-dimensional model to be marked located within the visible area is the target three-dimensional model.
  • the three-dimensional label to be marked represents a three-dimensional label that has not yet been presented in the three-dimensional scene; it is a label with three-dimensional characteristics (such as conforming to the rules of perspective) generated in advance based on the description information of the three-dimensional model, and is used to display that description information, which can include, for example, the name, size, material and price of the 3D model.
  • the reference point corresponding to the target 3D model and the reference plane where it is located can be determined in advance based on the spatial shape of the target 3D model and the pose information of the target 3D model in the 3D scene.
  • the reference point represents the position where the 3D label can be marked.
  • the reference plane is used to constrain the spatial posture of the three-dimensional label.
  • For example, the reference point can be represented by its spatial coordinates in the three-dimensional scene, and the reference plane can be represented by the spatial coordinates of its four corner points in the three-dimensional scene together with the normal vector of the reference plane.
  • the reference point may be located on the surface of the target three-dimensional model, and the reference plane is perpendicular or parallel to the plane constituting the bounding box of the target three-dimensional model.
  • each three-dimensional model may include 6 reference planes, each reference plane being parallel to the two planes constituting the bounding box.
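As a sketch of how the six reference planes and their reference points might be derived, the following assumes an axis-aligned bounding box; `makeReferences` and its one-point-per-face layout are illustrative, not taken from the patent:

```javascript
// Hypothetical helper: one reference point at the centre of each
// bounding-box face; the reference plane is that face, identified here
// by its outward normal. `min`/`max` are the box corners [x, y, z].
function makeReferences(min, max) {
  const [cx, cy, cz] = min.map((c, i) => (c + max[i]) / 2);
  return [
    { point: [min[0], cy, cz], normal: [-1, 0, 0] },
    { point: [max[0], cy, cz], normal: [1, 0, 0] },
    { point: [cx, min[1], cz], normal: [0, -1, 0] },
    { point: [cx, max[1], cz], normal: [0, 1, 0] },
    { point: [cx, cy, min[2]], normal: [0, 0, -1] },
    { point: [cx, cy, max[2]], normal: [0, 0, 1] },
  ];
}

// A 2 m x 1 m x 4 m box yields six candidate marking positions.
const refs = makeReferences([0, 0, 0], [2, 1, 4]);
```

In practice the reference points could be placed anywhere on the model surface; face centres are just a convenient default.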
  • service providers can use three-dimensional scenes (such as VR scenes) to simulate real scenes and display multiple objects (such as furniture and other items) to users in the form of scenes.
  • For example, the service provider can set multiple point positions as needed, construct a view frustum based on each point position and the camera parameters, and use the view frustum to determine the user's field of view when browsing the three-dimensional scene.
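A minimal stand-in for that frustum test, assuming a symmetric field of view; a real engine would also test the near/far planes and aspect ratio, and `isWithinFov` is a hypothetical name:

```javascript
// Returns true when the angle between the camera's forward vector and
// the vector from the camera to `target` is within half the field of
// view (fov given in radians).
function isWithinFov(camPos, camForward, target, fovRad) {
  const v = target.map((c, i) => c - camPos[i]);
  const len = Math.hypot(...v);
  const fLen = Math.hypot(...camForward);
  const dot = v.reduce((s, c, i) => s + c * camForward[i], 0);
  const angle = Math.acos(dot / (len * fLen));
  return angle <= fovRad / 2;
}
```

A model whose anchor passes this check would be a candidate target 3D model; one behind or beside the camera would not.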
  • the 3D models constructed based on the appearance and attributes of the display objects can be placed in the 3D scene according to preset poses, and the completed 3D scene can be serialized into a JSON file.
  • Users can obtain the JSON file of the three-dimensional scene from the service provider through an electronic device (such as a terminal computer or a smartphone), and parse the JSON file through three-dimensional application software pre-loaded on the electronic device to present the three-dimensional scene on the electronic device.
  • the execution subject may be an electronic device used by the user to browse the three-dimensional scene, such as a terminal computer or a smartphone.
  • the execution subject can determine the visible area in the three-dimensional scene based on the user's point and perspective, and then determine the three-dimensional model to be marked within the visible area as the target three-dimensional model.
  • step 110 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by the first determination unit run by the processor.
  • Step 120: Determine the reference point that corresponds to the target 3D model and is closest to the point position as the mark point corresponding to the target 3D model, and determine the reference plane where the mark point is located as the display plane corresponding to the target 3D model, thereby obtaining the mark point and display plane corresponding to the target 3D model.
  • the marking point represents the marking position of the three-dimensional label
  • the display plane is used to constrain the spatial posture of the three-dimensional label when it is marked to the marking point.
  • the execution subject can traverse the reference points corresponding to the target three-dimensional model, compute the distance between each reference point and the point position based on their spatial coordinates, determine the reference point with the smallest distance as the mark point corresponding to the target 3D model, and use the plane where the mark point is located as the display plane.
  • step 120 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a second determination unit run by the processor.
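Step 120 can be sketched as a nearest-point scan; `pickMarkPoint` is an illustrative name, and each reference is assumed to carry its plane's normal alongside the point, as in the description above:

```javascript
// Pick the reference point nearest the user's viewpoint as the mark
// point; the plane it lies in (represented by its normal here) becomes
// the display plane.
function pickMarkPoint(viewpoint, references) {
  let best = null;
  let bestD = Infinity;
  for (const ref of references) {
    const d = Math.hypot(...ref.point.map((c, i) => c - viewpoint[i]));
    if (d < bestD) {
      bestD = d;
      best = ref;
    }
  }
  return best; // { point: markPoint, normal: displayPlaneNormal }
}

const mark = pickMarkPoint([0, 0, 0], [
  { point: [1, 0, 0], normal: [1, 0, 0] },
  { point: [5, 0, 0], normal: [0, 1, 0] },
]);
```

The linear scan is fine here because each model has only a handful of reference points.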
  • Step 130: Determine the spatial pose of the three-dimensional label corresponding to the target three-dimensional model in the three-dimensional scene based on the mark point and display plane corresponding to the target three-dimensional model.
  • the execution subject may first determine the marking position of the three-dimensional label based on the mark point, for example, by aligning or coinciding the attachment point of the three-dimensional label with the mark point in a specific direction; and then determine the spatial posture of the three-dimensional label based on the display plane, for example, by making the length direction of the three-dimensional label parallel to the length or width direction of the display plane, making the plane on which the three-dimensional label displays label information coincide with the display plane, and determining the orientation of that plane based on the normal direction of the display plane, so that the three-dimensional label inherits the spatial characteristics of the display plane in the three-dimensional scene (for example, the perspective effect of appearing larger when near and smaller when far). In this way, the spatial pose of the three-dimensional label in the three-dimensional scene can be determined.
  • step 130 may be executed by the processor calling corresponding instructions stored in the memory, or may be executed by the posture determination unit run by the processor.
  • Step 140: Based on the spatial pose of the three-dimensional label corresponding to the target three-dimensional model in the three-dimensional scene, present the three-dimensional label corresponding to the target three-dimensional model in the three-dimensional scene.
  • For example, the execution subject can use the CSS (Cascading Style Sheets) transform property to determine the three-dimensional matrix corresponding to the target three-dimensional model based on the mark point corresponding to the target three-dimensional model and the spatial position of the display plane, and then call one or more of functions such as rotate3d(), translate3d() and perspective() to apply 3D rotation, translation or perspective to the three-dimensional label corresponding to the target three-dimensional model using the three-dimensional matrix.
  • the three-dimensional label is marked at the mark point with the spatial posture determined in step 130, thereby realizing the presentation of the three-dimensional label in the three-dimensional scene.
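A hedged sketch of that CSS-based presentation: the function below only assembles a transform string from the mark point and a yaw angle derived from the display plane; `labelTransform`, the scene-to-pixel scale and the 800px perspective distance are illustrative assumptions, not values from the patent:

```javascript
// Build a CSS transform for a DOM label element. `markPoint` is the
// scene-space mark point [x, y, z]; `yawDeg` would come from the
// display plane's normal; `sceneToScreenScale` converts scene units
// (e.g. metres) to pixels.
function labelTransform(markPoint, yawDeg, sceneToScreenScale = 100) {
  const [x, y, z] = markPoint.map((c) => c * sceneToScreenScale);
  return `perspective(800px) translate3d(${x}px, ${y}px, ${z}px) rotate3d(0, 1, 0, ${yawDeg}deg)`;
}

const t = labelTransform([1, 2, 3], 45);
// e.g. label.style.transform = labelTransform([1.2, 0.8, -2], 30);
```

perspective(), translate3d() and rotate3d() are standard CSS transform functions, so the browser handles the near-large/far-small foreshortening.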
  • the marking method of the three-dimensional label may be, for example, that the attachment point of the three-dimensional label coincides with the marking point, or the three-dimensional label may be aligned with the marking point along a preset direction and then linked to the target three-dimensional model through a connection line.
  • Different 3D models in the same 3D scene can be marked in a variety of ways.
  • the attachment point represents a point in the three-dimensional label used to locate the mark position of the three-dimensional label. For example, it may be a corner point or center point of the three-dimensional label or other representative key points.
  • step 140 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a label presentation unit run by the processor.
  • FIG. 2 shows the presentation effect in one embodiment of the interactive presentation method of three-dimensional labels of the present disclosure.
  • three-dimensional labels 111, 121, and 131 are used to represent the description information of three-dimensional models 110, 120, and 130 respectively.
  • the three-dimensional labels 111 and 121 are located above the marking points 112 and 122 respectively, and are linked to the corresponding three-dimensional model through lines, and the three-dimensional label 131 coincides with the marking point 132.
  • Figure 3 shows a schematic diagram of an application scenario of the interactive presentation method of three-dimensional tags of the present disclosure.
  • the three-dimensional scene shown in Figure 3(a) includes a three-dimensional model 310 and a three-dimensional model 320.
  • the three-dimensional model 310 corresponds to four reference points (311, 312, 313 and 314) and four reference planes (315, 316, 317 and 318);
  • the three-dimensional model 320 corresponds to four reference points (321, 322, 323 and 324) and four reference planes (325, 326, 327 and 328).
  • the three-dimensional model 310 (for example, it can be a three-dimensional model representing a sofa) is located within the user's perspective and is therefore visible to the user, while the three-dimensional model 320 is invisible, so the execution subject (for example, the user's smartphone) may determine the three-dimensional model 310 as the target three-dimensional model.
  • the execution subject can first determine the distances between the four reference points and the point 300, and then determine the reference point 311 with the smallest distance as the mark point.
  • the reference plane 315 is the display plane.
  • the execution subject can determine the spatial pose of the three-dimensional label based on the marker point 311 and the display plane 315, and present it in the three-dimensional scene.
  • the marked three-dimensional model 310 is shown in Figure 3(b), in which the three-dimensional label 330 coincides with the display plane 315, and the corner point of the three-dimensional label 330 coincides with the marked point 311.
  • When the three-dimensional model 320 is the target three-dimensional model, the reference point 321 and the reference plane 325 can be determined as the mark point and display plane respectively, thereby determining the spatial pose of the three-dimensional label corresponding to the three-dimensional model 320, which is then presented in the three-dimensional scene.
  • The interactive presentation method for 3D labels can determine at least one target 3D model from the 3D models visible to the user based on the user's point position and viewing angle when browsing the 3D scene; the reference point in the target 3D model that is closest to the point position is used as the mark point, and the reference plane where the mark point is located is determined as the display plane; the spatial pose of the 3D label is then determined based on the mark point and display plane; finally, the 3D label corresponding to at least one target 3D model is presented in the 3D scene according to that spatial pose. This ensures that the spatial pose of the 3D label matches the point position, allowing users to obtain the information of the 3D model more intuitively and conveniently, thereby improving the display effect of the 3D label in the 3D scene.
  • When determining the mark point through step 120, if two or more reference points corresponding to the target three-dimensional model are equally closest to the point position, the projections of the reference planes where these reference points are located within the viewing angle are determined respectively, and the reference point contained in the reference plane with the largest projected area is determined as the mark point.
  • the projection of the reference plane within the viewing angle can represent the presentation area of the reference plane within the viewing angle.
  • the larger the presentation area, the easier it is for the user to view the information in the reference plane.
  • When the user's line of sight is nearly parallel to the reference plane (that is, the angle between the normal vector of the reference plane and the line of sight approaches 90°), the projection area of the reference plane within the viewing angle becomes smaller; in the limit the projection area is 0, and if the plane on which the 3D label displays information lies in the reference plane, the user cannot view the information in the 3D label.
  • Conversely, when the angle between the normal vector of the reference plane and the user's line of sight is 0° or 180°, the reference plane faces the user, and its projected area within the viewing angle reaches a maximum; if the plane on which the three-dimensional label displays information lies in the reference plane, the user can directly view the information in the three-dimensional label.
  • the reference point with better presentation effect can be selected based on the projected area of the reference plane within the viewing angle. Used as marking points to improve the display effect of three-dimensional labels.
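Since a plane's projected area scales with the cosine of the angle between its normal and the direction to the viewer, the tie-break by projected area can be approximated by how directly each candidate plane faces the user; `pickByFacing` is a hypothetical helper, not the patent's exact procedure:

```javascript
// Among equally-near candidates, prefer the reference point whose
// plane faces the viewpoint most directly. `facing` is the cosine of
// the angle between the plane normal and the unit vector toward the
// user, so larger means more directly facing.
function pickByFacing(viewpoint, candidates) {
  let best = null;
  let bestFacing = -Infinity;
  for (const ref of candidates) {
    const toUser = viewpoint.map((c, i) => c - ref.point[i]);
    const len = Math.hypot(...toUser);
    const facing = ref.normal.reduce((s, n, i) => s + n * toUser[i], 0) / len;
    if (facing > bestFacing) {
      bestFacing = facing;
      best = ref;
    }
  }
  return best;
}

const chosen = pickByFacing([0, 0, 5], [
  { point: [0, 0, 0], normal: [0, 0, 1] }, // faces the user
  { point: [0, 0, 0], normal: [1, 0, 0] }, // edge-on to the user
]);
```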
  • the inventor found that when the three-dimensional label is positioned too high, its degree of spatial perspective becomes greater, and the visibility of the three-dimensional label and the readability of the text in it decrease.
  • Therefore, in some optional examples, the height of the three-dimensional label can be reduced along the display plane where the mark point is located.
  • Figure 4(a) and Figure 4(b) show two presentation methods respectively: Figure 4(a) is a schematic diagram of the three-dimensional label before adjusting the height, and Figure 4(b) is a schematic diagram of the three-dimensional label after adjusting the height.
  • When the height of the mark point is greater than the preset height (which can be, for example, 1.6 m), using the presentation method of Figure 4(a) results in a greater degree of spatial perspective for the three-dimensional label 420, and its visibility and the readability of its information are correspondingly lower.
  • Using the presentation method of Figure 4(b), the height of the three-dimensional label 420 in the three-dimensional scene can be reduced, correspondingly reducing its degree of spatial perspective and thereby achieving better visibility and readability.
  • In this way, when the height of the mark point corresponding to the target 3D model in the 3D scene is greater than the preset height, the information displayed by the 3D label can be placed in an area that is easier for the user's sight to observe, thereby obtaining better visibility and readability.
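Under the stated example of a 1.6 m preset height, the height adjustment reduces to a simple clamp; `adjustedLabelHeight` is an illustrative name:

```javascript
// Lower the label to the preset eye-level height when the mark point
// sits above it, so the label's perspective distortion stays small.
function adjustedLabelHeight(markY, presetHeight = 1.6) {
  return Math.min(markY, presetHeight);
}
```

A mark point at 2.4 m would thus present its label at 1.6 m, while one at 1.0 m is left unchanged.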
  • Figure 5 shows a flow chart of yet another embodiment of the interactive presentation method of three-dimensional labels of the present disclosure. As shown in Figure 5, after the above step 140, the method may also include the following steps:
  • Step 510: Determine the visible area in the three-dimensional scene based on the point position and viewing angle.
  • the visible area represents the area that the user can observe when browsing the three-dimensional scene from the current point position and viewing angle, that is, the area presented on the user's device.
  • the execution subject can construct a viewing cone of the three-dimensional scene based on the point position and perspective.
  • the area within the viewing cone space is the visible area of the three-dimensional scene.
  • step 510 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a region determination unit run by the processor.
  • Step 520: Obtain the projected image of the visible area on the user device.
  • the user device is an electronic device used by the user to browse the three-dimensional scene.
  • the projected image represents the image displayed on the screen of the user device; it is formed by the execution subject (which can be, for example, the user device itself) projecting the three-dimensional scene onto the screen in combination with the display parameters of the user device (such as screen resolution).
  • step 520 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by an image acquisition unit run by the processor.
  • Step 530: Determine the pixel distance between the mark points corresponding to the at least one target three-dimensional model in the projection image.
  • the marker points corresponding to the target three-dimensional model 1 and the target three-dimensional model 2 are marker point a and marker point b respectively.
  • the projections of marker point a and marker point b in the projection image are pixel point A and pixel point B respectively, then the length of line segment AB is the pixel distance between marker point a and marker point b.
  • step 530 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a distance determination unit run by the processor.
  • Step 540: If the pixel distance between the mark points corresponding to two target 3D models is less than the preset distance, hide the 3D label corresponding to one of the target 3D models.
  • For example, the execution subject can randomly hide the 3D label corresponding to one of the target 3D models, or it can hide the 3D label whose mark point is farther from the point position.
  • step 540 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a tag hiding unit run by the processor.
  • The embodiment shown in Figure 5 hides the 3D label corresponding to one of two target 3D models when the pixel distance between their mark points in the projected image of the user device is less than the preset distance. This avoids occlusion or overlap caused by three-dimensional labels that are too close together, which helps to further improve the presentation effect of three-dimensional labels.
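Steps 530 and 540 can be sketched as an on-screen distance check; the 80-pixel threshold below is an assumed example, not a value from the patent:

```javascript
// Euclidean distance between two projected mark points, in pixels.
function pixelDistance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// True when the two labels would sit too close on screen and one of
// them should be hidden.
function shouldHideOne(pxA, pxB, minDistPx = 80) {
  return pixelDistance(pxA, pxB) < minDistPx;
}
```

Note that the distance is measured in the projected image, so two models far apart in 3D can still collide on screen when viewed from certain angles.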
  • Figure 6 shows a flow chart of yet another embodiment of the interactive presentation method of three-dimensional labels of the present disclosure. As shown in Figure 6, on the basis of the process shown in Figure 1 or Figure 5, the method may further include the following steps after step 140:
  • Step 610: When the user changes the point position and/or viewing angle while browsing the three-dimensional scene, determine a new target three-dimensional model based on the new point position and/or new viewing angle.
  • the new target three-dimensional model represents the three-dimensional model to be marked that can be observed by the user after changing the point and/or perspective.
  • the new target three-dimensional model may include the old target three-dimensional model before changing the point position and/or perspective.
  • step 610 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by an update unit run by the processor.
  • Step 620: For the new target 3D model, perform again the operation of determining the mark point and display plane and the operation of determining the spatial pose of the 3D label in the 3D scene, to obtain the spatial pose of the 3D label corresponding to the new target 3D model in the 3D scene.
  • For example, the execution subject can perform the above steps 120 to 130 again for the new target 3D model, together with the optional implementations corresponding to each step, to determine the spatial pose of the 3D label corresponding to the new target 3D model in the 3D scene.
  • step 620 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by an iterative unit run by the processor.
  • Step 630: Based on the spatial pose of the three-dimensional label corresponding to the new target three-dimensional model in the three-dimensional scene, present the three-dimensional label corresponding to the new target three-dimensional model in the three-dimensional scene.
  • the execution subject can perform the above step 150 again to present the three-dimensional label corresponding to the new target three-dimensional model in the three-dimensional scene.
  • As another example, after performing step 150 again, the execution subject may also perform the above steps 510 to 540 again to further improve the display effect of the three-dimensional label.
  • Further illustrated with Figure 3: when the user moves from point 300 to point 340 while browsing, the execution subject can determine three-dimensional model 320 as the new target three-dimensional model and, after executing the above steps 120 to 140 again, present the three-dimensional label corresponding to three-dimensional model 320 in the three-dimensional scene.
  • step 630 may be performed by the processor calling corresponding instructions stored in the memory, or may be performed by a presentation unit run by the processor.
  • In the embodiment shown in Figure 6, when the user changes the point and/or perspective, the three-dimensional labels presented in the three-dimensional scene are updated synchronously according to the new point and/or new perspective, which further improves the intelligence of the interaction between the three-dimensional scene and the user, as well as the display effect.
  • In some optional implementations, before step 630, when the user changes the point and/or perspective while browsing the three-dimensional scene, the three-dimensional labels respectively corresponding to the at least one target three-dimensional model are hidden.
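The hide-then-re-present cycle described above can be sketched as one refresh function; `Label`, `refreshLabels`, and `present` are illustrative stand-ins (`present` abstracts steps 620 to 630), not an API defined by the patent.

```typescript
interface Label { modelId: string; visible: boolean }

// Sketch of the refresh cycle on a viewpoint change: hide every old label
// first (avoiding conflicts between old and new labels and reducing memory
// use), then present the labels of the new target models.
function refreshLabels(
  labels: Map<string, Label>,
  newTargets: string[],
  present: (modelId: string) => Label
): Map<string, Label> {
  for (const l of labels.values()) l.visible = false; // hide old labels
  const next = new Map<string, Label>();
  for (const id of newTargets) next.set(id, present(id));
  return next;
}
```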
  • Any interactive presentation method of three-dimensional labels provided by the embodiments of the present disclosure can be executed by any appropriate device with data processing capabilities, including but not limited to: terminal devices and servers.
  • Alternatively, any of the interactive presentation methods of three-dimensional labels provided by the embodiments of the present disclosure can be executed by a processor; for example, the processor executes the method by calling corresponding instructions stored in the memory. This will not be repeated below.
  • Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes ROM, RAM, magnetic disks, optical disks, and other media that can store program code.
  • FIG. 7 shows a schematic structural diagram of an embodiment of a three-dimensional label interactive presentation device of the present disclosure.
  • the device of this embodiment can be used to implement the above method embodiments of the present disclosure.
  • The device includes: a first determination unit 710, configured to determine at least one target three-dimensional model from the three-dimensional models visible to the user, based on the user's point and perspective when browsing the three-dimensional scene; the target three-dimensional model corresponds to a three-dimensional label to be marked, a plurality of reference points for marking the three-dimensional label, and the reference plane where each reference point is located;
  • a second determination unit 720, configured to determine the reference point of the target three-dimensional model closest to the point as the marker point corresponding to that model, and to determine the reference plane where the marker point is located as the display plane corresponding to that model, obtaining the marker point and display plane corresponding to the target three-dimensional model;
  • a pose determination unit 730, configured to determine, based on the marker point and display plane corresponding to the target three-dimensional model, the spatial pose in the three-dimensional scene of the three-dimensional label corresponding to that model;
  • a label presentation unit 740, configured to present, based on that spatial pose, the three-dimensional label corresponding to the three-dimensional model in the three-dimensional scene.
  • In one implementation, the device further includes: an area determination unit, configured to determine the visible area in the three-dimensional scene based on the point and perspective; an image acquisition unit, configured to acquire the projected image of the visible area on the user equipment, where the user equipment is the electronic device used by the user to browse the three-dimensional scene;
  • a distance determination unit, configured to determine, in the projected image, the pixel distances between the marker points respectively corresponding to the at least one target three-dimensional model;
  • a label hiding unit, configured to hide the three-dimensional label corresponding to one of two target three-dimensional models if the pixel distance between the marker points corresponding to those two models is less than a preset distance.
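The de-cluttering rule of the label hiding unit can be sketched as a pairwise screen-space check; the projection of marker points to pixels is assumed already done, and the choice of hiding the later point in the list is one of the options the description allows (it may also hide the point farther from the user's position).

```typescript
type Px = { x: number; y: number; id: string };

// Sketch: if two marker points project to pixels closer than `minPixelDist`,
// hide one of the two labels (here: the later one in the list).
function labelsToHide(projected: Px[], minPixelDist: number): Set<string> {
  const hidden = new Set<string>();
  for (let i = 0; i < projected.length; i++) {
    if (hidden.has(projected[i].id)) continue; // already hidden, skip as anchor
    for (let j = i + 1; j < projected.length; j++) {
      const dx = projected[i].x - projected[j].x;
      const dy = projected[i].y - projected[j].y;
      if (Math.hypot(dx, dy) < minPixelDist) hidden.add(projected[j].id);
    }
  }
  return hidden;
}
```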
  • In one implementation, the label presentation unit 740 further includes an adjustment module, configured to translate the three-dimensional label corresponding to the target three-dimensional model along the direction of decreasing height within the display plane where the marker point is located, if the height of that marker point in the three-dimensional scene is greater than a preset height, so as to lower the height of the label in the three-dimensional scene.
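The adjustment module's in-plane slide can be sketched as follows; treating `z` as height, naming the anchor `anchor`, and passing the in-plane direction of decreasing height as a unit vector `down` are all assumptions for illustration (the description gives 1.6 m as one example preset height).

```typescript
type Vec3 = { x: number; y: number; z: number };

// Sketch: if the marker point is above `maxHeight`, slide the label's anchor
// down within its display plane (along `down`, which has down.z < 0) until
// the anchor sits at `maxHeight`, reducing the label's perspective distortion.
function clampLabelHeight(anchor: Vec3, down: Vec3, maxHeight: number): Vec3 {
  if (anchor.z <= maxHeight) return anchor; // already low enough, no change
  const t = (anchor.z - maxHeight) / -down.z; // distance to travel along `down`
  return { x: anchor.x + t * down.x, y: anchor.y + t * down.y, z: maxHeight };
}
```

Because the motion stays inside the display plane, the label keeps the same attitude and still reads as attached to the model; only its height changes.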
  • In one implementation, the first determination unit 710 further includes a screening module, configured, if two or more of the reference points corresponding to the target three-dimensional model are equally closest to the point, to determine the projections within the viewing angle of the reference planes where those reference points are located, and to determine the reference point included in the reference plane with the largest projected area as the marker point.
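A sketch of the screening module's tie-break, under the assumption that a plane's projected (on-screen) area scales with the plane's area times the absolute cosine of the angle between its normal and the view direction; `Candidate`, `pickByProjectedArea`, and the field names are illustrative.

```typescript
type Vec3 = { x: number; y: number; z: number };

interface Candidate { pointId: string; planeNormal: Vec3; planeArea: number }

function dot(a: Vec3, b: Vec3): number { return a.x * b.x + a.y * b.y + a.z * b.z; }
function norm(a: Vec3): number { return Math.hypot(a.x, a.y, a.z); }

// Among equally-near reference points, keep the one whose reference plane
// presents the largest area to the viewer: projected area ~ area * |cos θ|,
// where θ is the angle between the plane normal and the view direction.
function pickByProjectedArea(cands: Candidate[], viewDir: Vec3): string {
  let best = cands[0];
  let bestArea = -1;
  for (const c of cands) {
    const cos = Math.abs(dot(c.planeNormal, viewDir)) / (norm(c.planeNormal) * norm(viewDir));
    const projArea = c.planeArea * cos;
    if (projArea > bestArea) { bestArea = projArea; best = c; }
  }
  return best.pointId;
}
```

This matches the perspective argument in the description: a plane whose normal is parallel or anti-parallel to the line of sight projects to (near) zero area, while a plane facing the viewer projects to its full area.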
  • In one implementation, the device further includes: an update unit, configured to determine a new target three-dimensional model based on the new point and/or new perspective when the user changes the point and/or perspective while browsing the three-dimensional scene; an iteration unit, configured to perform again, for the new target three-dimensional model, the operation of determining the marker point and display plane and the operation of determining the spatial pose of the three-dimensional label in the three-dimensional scene, obtaining the spatial pose in the three-dimensional scene of the three-dimensional label corresponding to the new target three-dimensional model;
  • and a presentation unit, configured to present the three-dimensional label corresponding to the new target three-dimensional model in the three-dimensional scene, based on that spatial pose.
  • the device further includes a hiding unit configured to hide the three-dimensional labels corresponding to at least one target three-dimensional model when the user changes points and/or perspectives while browsing the three-dimensional scene.
  • embodiments of the present disclosure also provide an electronic device, including:
  • a memory, used to store a computer program;
  • a processor configured to execute a computer program stored in the memory, and when the computer program is executed, implement the interactive presentation method of a three-dimensional label described in any of the above embodiments of the present disclosure.
  • embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored.
  • When the computer program instructions are executed by a processor, the interactive presentation method of three-dimensional labels according to any of the above embodiments can be implemented.
  • an embodiment of the present disclosure also provides a computer program product, which includes computer program instructions.
  • When the computer program instructions are executed by a processor, the interactive presentation method of three-dimensional labels according to any of the above embodiments of the present disclosure can be implemented.
  • FIG. 8 is a schematic structural diagram of an application embodiment of the electronic device of the present disclosure. Next, an electronic device according to an embodiment of the present disclosure is described with reference to FIG. 8 . As shown in Figure 8, the electronic device includes one or more processors and memory.
  • the processor may be a central processing unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
  • Memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache).
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, etc.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor may run the program instructions to implement the interactive presentation method of three-dimensional labels of the various embodiments of the present disclosure described above and/or other desired functionality.
  • the electronic device may further include an input device and an output device, and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • The input device may include, for example, a keyboard, a mouse, and the like.
  • the output device can output various information to the outside, including determined distance information, direction information, etc.
  • the output devices may include, for example, displays, speakers, printers, and communication networks and remote output devices to which they are connected, among others.
  • the electronic device may include any other suitable components depending on the specific application.
  • In addition to the above methods and devices, embodiments of the present disclosure may also be a computer program product, which includes computer program instructions that, when run by a processor, cause the processor to perform the steps of the interactive presentation method of three-dimensional labels according to the various embodiments of the present disclosure described above in this specification.
  • The computer program product may have program code for performing the operations of embodiments of the present disclosure written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • embodiments of the present disclosure may also be a computer-readable storage medium having computer program instructions stored thereon.
  • The computer program instructions, when run by a processor, cause the processor to perform the steps of the interactive presentation method of three-dimensional labels according to the various embodiments of the present disclosure described above in this specification.
  • the computer-readable storage medium may be any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may include, for example, but is not limited to, electrical, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes ROM, RAM, magnetic disks, optical disks, and other media that can store program code.
  • the methods and apparatus of the present disclosure may be implemented in many ways.
  • the methods and devices of the present disclosure may be implemented through software, hardware, firmware, or any combination of software, hardware, and firmware.
  • the above order for the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated.
  • the present disclosure may also be implemented as programs recorded in recording media, and these programs include machine-readable instructions for implementing methods according to the present disclosure.
  • the present disclosure also covers recording media storing programs for executing methods according to the present disclosure.
  • each component or each step can be decomposed and/or recombined. These decompositions and/or recombinations should be considered equivalent versions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure disclose an interactive presentation method and apparatus for three-dimensional labels, an electronic device, a storage medium, and a computer program product. The method includes: determining at least one target three-dimensional model from the three-dimensional models visible to the user, based on the user's point and perspective when browsing a three-dimensional scene; determining the reference point of the target three-dimensional model closest to the point as the marker point corresponding to that model, and determining the reference plane where the marker point is located as the display plane corresponding to that model, obtaining the marker point and display plane corresponding to the target three-dimensional model; determining, based on the marker point and display plane, the spatial pose in the three-dimensional scene of the three-dimensional label corresponding to the target three-dimensional model; and presenting, based on that spatial pose, the three-dimensional label corresponding to the model in the three-dimensional scene.

Description

三维标签的交互呈现方法、装置、设备、介质和程序产品
本公开要求在2022年04月22日提交中国专利局、申请号为CN202210427645.8、发明名称为“三维标签的交互呈现方法、装置、设备、介质和程序产品”的中国专利申请的优先权,其全部内容通过引用结合在本公开。
技术领域
本公开涉及计算机技术领域,尤其涉及一种三维标签的交互呈现方法、装置、电子设备、存储介质和计算机程序产品。
背景技术
随着计算机技术的发展,尤其是虚拟现实(Virtual Reality,VR)的迅速进步,使得三维场景在各个领域的应用越来越广泛。三维场景可以通过多源信息融合的、交互式的三维动态视景使用户沉浸到该环境中,使用户可以在不同的视角可以观看到不同的内容。
相关技术中,为了便于用户直观地获取三维场景中的展示对象的信息,可以将展示对象的信息以标签的形式呈现在三维场景中。
发明内容
本公开实施例提供了一种三维标签的交互呈现方法、装置、电子设备、存储介质和计算机程序产品,用于提高三维标签在三维场景中的展示效果。
根据本公开实施例的一个方面,提供了一种三维标签的交互呈现方法,所述方法包括:
基于用户浏览三维场景时用户的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型,所述目标三维模型对应有一个待标记的三维标签和多个用于标记三维标签的参考点以及每个所述参考点所在的参考平面;
将所述目标三维模型对应的、距离所述点位最近的参考点确定为所述目标三维模型对应的标记点,并将该标记点所在的参考平面确定为所述目标三维模型对应的展示平面,得到所述目标三维模型对应的标记点和展示平面;
基于所述目标三维模型对应的标记点和展示平面,确定所述目标三维模型对应的三维标签在所述三维场景中的空间位姿;
基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述目标三维模型对应的三维标签。
根据本公开实施例的另一个方面,提供了一种三维标签的交互呈现装置,所述装置包括第一确定单元,被配置成基于用户浏览三维场景时用户的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型,所述目标三维模型对应有一个待标记的三维标签和多个用于标记三维标签的参考点以及每个所述参考点所在的参考平面;
第二确定单元,被配置成将所述目标三维模型对应的、距离所述点位最近的参考点确定为所述目标三维模型对应的标记点,并将该标记点所在的参考平面确定为所述目标三维模型对应的展示平面,得到所述目标三维模型对应的标记点和展示平面;
位姿确定单元,被配置成基于所述目标三维模型对应的标记点和展示平面,确定所述目标三维模型对应的三维标签在所述三维场景中的空间位姿;
标签呈现单元,被配置成基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述三维模型对应的三维标签。
根据本公开实施例的又一个方面,提供了一种电子设备,包括:存储器,用于存储计算机程序产品;
处理器,用于执行所述存储器中存储的计算机程序产品,且所述计算机程序产品被执行时,实现上述本公开实施例中任意一项提供的三维标签的交互呈现方法。
根据本公开实施例的再一个方面,提供了一种计算机可读存储介质,其上存储有程序代码,所述程序代码可被处理器调用执行以实现本公开上述实施例中任意一项提供的三维标签的交互呈现方法。
根据本公开实施例的再一个方面,提供了一种计算机程序产品,包括计算机程序指令,该计算机程序指令被处理器执行时,实现本公开上述实施例中任意一项提供的三维标签的交互呈现方法。
本公开实施例提供的方案中,可以根据用户浏览三维场景时的点位和视角,从三维场景中的三维模型中确定出用户可见的目标三维模型,将目标三维模型中距离点位最近的参考点作为标记点,将标记点所在的参考平面确定为展示平面;然后根据标记点和展示平面确定三维标签的空间位姿,最后根据三维标签的空间位姿,在三维场景中呈现至少一个目标三维模型对应的三维标签。可以确保三维标签的空间位姿与点位的匹配程度,使用户可以更直观、更便捷地获取三维模型的信息,从而提高三维标签在三维场景中的展示效果。
下面通过附图和实施例,对本公开的技术方案做进一步的详细描述。
附图说明
构成说明书的一部分的附图描述了本公开的实施例,并且连同描述一起用于解释本公开的原理。
参照附图,根据下面的详细描述,可以更加清楚地理解本公开。显而易见地,下面描述中的附图仅仅是本公开的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其它的附图:
图1为本公开的三维标签的交互呈现方法的一个实施例的流程示意图;
图2为本公开的三维标签的交互呈现方法的一个实施例中三维标签的呈现方式的示意图;
图3为本公开的三维标签的交互呈现方法的一个实施例的场景示意图;
图4为本公开的三维标签的交互呈现方法的一个实施例中的三维标签的标记位置的示意图;
图5为本公开的三维标签的交互呈现方法的又一个实施例的流程示意图;
图6为本公开的三维标签的交互呈现方法的又一个实施例的流程示意图;
图7为本公开的三维标签的交互呈现装置的一个实施例的结构示意图;
图8为本公开电子设备一个应用实施例的结构示意图。
具体实施方式
现在将参照附图来详细描述本公开的各种示例性实施例。应注意到:除非另外说明,否则在这些实施例中阐述的部件和步骤的相对布置、数字表达式和数值不限制本公开的范围。
还应理解,在本公开实施例中,“多个”可以指两个或两个以上,“至少一个”可以指一个、两个或两个以上。
本领域技术人员可以理解,本公开实施例中的“第一”、“第二”等术语仅用于区别不同步骤、设备或模块等,既不代表任何特定技术含义,也不表示它们之间的必然逻辑顺序。
还应理解,对于本公开实施例中提及的任一部件、数据或结构,在没有明确限定或者 在前后文给出相反启示的情况下,一般可以理解为一个或多个。
还应理解,本公开对各个实施例的描述着重强调各个实施例之间的不同之处,其相同或相似之处可以相互参考,为了简洁,不再一一赘述。
以下对至少一个示例性实施例的描述实际上仅仅是说明性的,决不作为对本公开及其应用或使用的任何限制。
对于相关领域普通技术人员已知的技术、方法和设备可能不作详细讨论,但在适当情况下,所述技术、方法和设备应当被视为说明书的一部分。
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步讨论。
另外,公开中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本公开中字符“/”,一般表示前后关联对象是一种“或”的关系。
本公开实施例可以应用于终端设备、计算机系统、服务器等电子设备,其可与众多其它通用或专用计算系统环境或配置一起操作。适于与终端设备、计算机系统、服务器等电子设备一起使用的众所周知的终端设备、计算系统、环境和/或配置的例子包括但不限于:个人计算机系统、服务器计算机系统、瘦客户机、厚客户机、手持或膝上设备、基于微处理器的系统、机顶盒、可编程消费电子产品、网络个人电脑、小型计算机系统﹑大型计算机系统和包括上述任何系统的分布式云计算技术环境,等等。
终端设备、计算机系统、服务器等电子设备可以在由计算机系统执行的计算机系统可执行指令(诸如程序模块)的一般语境下描述。通常,程序模块可以包括例程、程序、目标程序、组件、逻辑、数据结构等等,它们执行特定的任务或者实现特定的抽象数据类型。计算机系统/服务器可以在分布式云计算环境中实施,分布式云计算环境中,任务是由通过通信网络链接的远程处理设备执行的。在分布式云计算环境中,程序模块可以位于包括存储设备的本地或远程计算系统存储介质上。
为了使本公开实施例中的技术方案及优点更加清楚明白,以下结合附图对本公开的示例性实施例进一步的说明,显然,所描述的实施例仅是本公开的一部分实施例,而不是所有实施例的穷举。需要说明的是,在不冲突的情况下,本公开中的实施例及实施例中的特征可以相互组合。
下面结合图1对本公开的三维标签的交互呈现方法进行示例性说明。图1示出了本公开的三维标签的交互呈现方法的一个实施例的流程图,如图1所示,该流程包括以下步骤:
步骤110、基于用户浏览三维场景时用户的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型。
其中,目标三维模型对应有一个待标记的三维标签和多个用于标记三维标签的参考点以及每个参考点所在的参考平面。
在本实施中,目标三维模型表示用户可见的、待标记的三维模型。在构建三维场景时,可以根据需求在三维场景中选取部分或全部三维模型作为待标记的三维模型,并为每个待标记的三维模型生成三维标签。当用户浏览三维场景时,可以根据点位和视角确定出用户在三维场景中的可视区域,位于可视区域内的、待标记的三维模型即为目标三维模型。
待标记的三维标签表示尚未在三维场景中呈现的三维标签,是预先根据三维模型的描述信息生成的具备三维特性(例如符合透视规律)的标签,用于展示三维模型的描述信息,例如可以包括三维模型的名称、尺寸、材质、价格等。
目标三维模型对应的参考点及其所在的参考平面,可以预先根据目标三维模型的空间形状以及目标三维模型在三维场景中的位姿信息确定,其中,参考点表示可以标记三维标签的位置,参考平面则用于约束三维标签的空间姿态。作为示例,可以采用三维场景中的空间坐标表征参考点,同时可以利用参考平面的四个角点在三维场景中的空间坐标以及参 考平面的法向量表征参考平面。
在本实施例的一个可选的实施方式中,参考点可以位于目标三维模型的表面,参考平面与构成目标三维模型的包围盒的平面垂直或平行。例如,每个三维模型可以包括6个参考平面,每个参考平面分别与构成包围盒的两个平面平行。
实践中,服务商可以利用三维场景(例如VR场景)模拟真实场景,以场景的形式向用户展示多个对象(例如可以是家具等物品)。通常,在构建三维场景时,可以根据需求设置多个点位,然后基于点位和相机参数构建视锥,通过视锥确定用户在浏览三维场景时的视野。之后可以将根据展示对象的外观和属性构建的三维模型按照预设位姿放置在三维场景中,并将构建完成的三维场景形成Json文件。用户可以通过电子设备(例如终端电脑或智能手机等)从服务商获取三维场景的Json文件,并通过电子设备中预先装载的三维应用软件对Json文件进行解析,以在电子设备上呈现三维场景。
在一个具体的示例中,执行主体可以是用户浏览三维场景时所使用的电子设备,例如可以是终端电脑或智能手机等。用户在利用电子设备浏览三维场景时,执行主体可以根据用户的点位和视角,确定出三维场景中的可视区域,然后将可视区域内的、待标记的三维模型确定为目标三维模型。
在一个可选示例中,该步骤110可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第一确定单元执行。
步骤120、将目标三维模型对应的、距离点位最近的参考点确定为该目标三维模型对应的标记点,并将该标记点所在的参考平面确定为该目标三维模型对应的展示平面,得到目标三维模型对应的标记点和展示平面。
在本实施例中,标记点表示三维标签的标记位置,展示平面则用于约束三维标签标记到标记点时所呈现的空间姿态。
作为示例,执行主体可以分别遍历目标三维模型对应的至少一个参考点,根据参考点的空间坐标与点位的空间坐标,确定出至少一个参考点与点位之间的距离,然后从至少一个参考点中确定出距离最小的参考点,作为该目标三维模型对应的标记点,并将标记点所在的平面作为展示平面。通过对至少一个目标三维模型执行以上操作,确定出每个目标三维模型对应的标记点和展示平面。
在一个可选示例中,该步骤120可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的第二确定单元执行。
步骤130、基于目标三维模型对应的标记点和展示平面,确定目标三维模型对应的三维标签在三维场景中的空间位姿。
作为示例,执行主体可以首先根据标记点确定三维标签的标记位置,例如可以是将三维标签的附着点与标记点沿特定方向对齐或重合;然后,根据展示平面确定三维标签的空间姿态,例如可以将三维标签的长度方向与展示平面的长度方向或宽度方向平行,将三维标签用于展示标签信息的平面与展示平面重合,并根据展示平面的法线方向确定三维标签用于展示标签信息的平面的朝向,以继承展示平面在三维场景中的空间特性,例如可以是近大远小的透视特性。如此一来,可以确定三维标签在三维场景中的空间位姿。
在一个可选示例中,该步骤130可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的位姿确定单元执行。
步骤140、基于目标三维模型对应的三维标签在三维场景中的空间位姿,在三维场景中呈现三维模型对应的三维标签。
作为示例,执行主体可以利用CSS(Cascading Style Sheet,层叠样式表)中的transform应用,根据目标三维模型对应的标记点和展示平面的空间位置确定目标三维模型对应的三维矩阵;然后调用以下一个或多个函数:Rotate3d()、translate3d()、perspective()等,利用三维矩阵对目标三维模型对应的三维标签进行3D旋转、平移或透 视等组合操作,将三维标签以步骤130中确定出的空间姿态标记在标记点,实现三维标签在三维场景中的呈现。
三维标签的标记方式例如可以是将三维标签的附着点与标记点重合,或者,三维标签还可以沿预设方向与标记点对齐,然后通过连线链接至目标三维模型。同一个三维场景中的不同三维模型可以采用多种标记方式。附着点表示三维标签中用于定位三维标签的标记位置的点,例如可以是三维标签的角点或中心点或其他具有代表性的关键点。
在一个可选示例中,该步骤140可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的标签呈现单元执行。
进一步结合图2对三维标签的标记方式进行示例性说明。图2示出了本公开的三维标签的交互呈现方法的一个实施例中的呈现效果,如图2所示,三维标签111、121、131分别用于表示三维模型110、120、130的描述信息,其中,三维标签111、121分别位于标记点112、122的上方,并通过连线链接至对应的三维模型上,三维标签131则与标记点132重合。
下面进一步结合图3对本实施例中的三维标签的交互呈现方法进行示例性说明。图3示出了本公开的三维标签的交互呈现方法的一个应用场景示意图,如图3(a)所示的三维场景中包括三维模型310和三维模型320,其中,三维模型310对应有4个参考点(分别是311、312、313、314)和4个参考平面(分别是315、316、317、318),三维模型320对应有4个参考点(分别是321、322、323、324)和4个参考平面(分别是325、326、327、328)。
当用户采用点位300浏览三维场景时,三维模型310(例如可以是表示沙发的三维模型)位于用户的视角内,因而对于用户是可见的,三维模型320则是不可见的,因此执行主体(例如可以是用户的智能手机)可以将三维模型310确定为目标三维模型。执行主体可以首先确定4个参考点与点位300之间的距离,然后将距离最小的参考点311确定为标记点,相应的,参考平面315为展示平面。之后,执行主体可以根据标记点311和展示平面315确定三维标签的空间位姿,并以此呈现在三维场景中。标记后的三维模型310如图3(b)所示,其中三维标签330与展示平面315重合,且三维标签330的角点与标记点311重合。
同理,当用户采用点位340浏览三维场景时,三维模型320则为目标三维模型,经过同样的处理过程,可以将参考点321和参考平面325分别确定为标记点和展示平面,以此确定三维模型320对应的三维标签的空间位姿并呈现在三维场景中。
本实施例提供的三维标签的交互呈现方法,可以根据用户浏览三维场景时的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型,将目标三维模型中距离点位最近的参考点作为标记点,将标记点所在的参考平面确定为展示平面;然后根据标记点和展示平面确定三维标签的空间位姿,最后根据三维标签的空间位姿,在三维场景中呈现至少一个目标三维模型对应的三维标签。可以确保三维标签的空间位姿与点位的匹配程度,使用户可以更直观、更便捷地获取三维模型的信息,从而提高三维标签在三维场景中的展示效果。
在本实施例的一些可选的实施方式中,通过步骤120确定标记点时,若目标三维模型对应的参考点中同时存在两个或两个以上参考点距离点位最近,分别确定两个或两个以上参考点所在的参考平面在视角内的投影,并将投影面积最大的参考平面中包括的参考点确定为标记点。
在本实施方式中,参考平面在视角内的投影可以表征参考平面在视角内的呈现区域,呈现区域越大,则用户越容易查参考平面中的信息。根据透视原理可知,当参考平面的法向量与用户视线的夹角较小或较大时,均会导致参考平面在视角内的投影面积较小,即参考平面在视角内的呈现区域较小。例如,当参考平面的法向量与用户视线的夹角为0°或 180°时,参考平面在视角内的投影面积为0,若三维标签展示信息的平面位于该参考平面中,则用户无法查看到三维标签中的信息。
反之,当参考平面的法向量垂直于用户视线时,参考平面面对用户,此时参考平面在视角内的投影面积达到极大值,若三维标签展示信息的平面位于该参考平面中,则用户可以直接查看到三维标签中的信息。
在本实施方式中,当目标三维模型对应的参考点中同时存在两个或两个以上参考点距离点位最近时,通过参考平面在视角内的投影面积,可以选取呈现效果更佳的参考点作为标记点,从而提升三维标签的展示效果。
在实现本公开的过程中,发明人发现,当三维标签的高度过高时,空间透视程度也会较大,此时三维标签的可视性和其中文字的可读性均会下降。
考虑到这种情况,在本实施例的一些可选的实施方式中,若目标三维模型对应的标记点在三维场景中的高度大于预设高度,在该标记点所在的展示平面内沿高度减小的方向平移目标三维模型对应的三维标签,以降低目标三维模型对应的三维标签在三维场景中的高度。
下面结合图4对本实施方式中的三维标签的呈现方式进行示例性说明,图4(a)和图4(b)分别示出了两种呈现方式,图4(a)为调整高度前的三维标签示意图,图4(b)为调整高度后的三维标签示意图。当三维模型410对应的标记点411高于预设高度(例如可以是1.6m)时,采用图4(a)的呈现方式会导致三维标签420的空间透视程度较大,其可视性和信息的可读性也相应地较低。采用图4(b)的呈现方式,则可以降低三维标签420在三维场景中的高度,相应地降低了其空间透视程度,从而可以获得更好的可视性和可读性。
在本实施方式中,目标三维模型对应的标记点在三维场景中的高度大于预设高度,通过降低三维标签的高度,使得三维标签展示的信息位于用户视线更容易观察的区域内,从而获得更好的可视性和可读性。
接着参考图5,图5示出了本公开的三维标签的交互呈现方法的又一个实施例的流程图,如图5所示,在上述步骤140之后,该方法还可以包括以下步骤:
步骤510、基于点位和视角,确定三维场景中的可视区域。
在本实施例中,可视区域表示用户在当前所处的点位以当前视角浏览三维场景时,可以观察到的区域,即呈现至用户设备的区域。
作为示例,执行主体可以根据点位和视角,构建三维场景的视锥,位于视锥空间内的区域即为三维场景的可视区域。
在一个可选示例中,该步骤510可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的区域确定单元执行。
步骤520、获取可视区域在用户设备中的投影图像。
其中,用户设备为用户浏览三维场景所使用的电子设备。
在本实施例中,投影图像表示用户设备的屏幕中显示的图像。是由执行主体(例如可以是用户设备)结合用户设备的显示参数(例如屏幕分辨率),将三维场景投影到用户设备的屏幕,形成的投影图像。
在一个可选示例中,该步骤520可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的图像获取确定单元执行。
步骤530、在投影图像中确定至少一个目标三维模型分别对应的标记点之间的像素距离。
作为示例,目标三维模型1和目标三维模型2对应的标记点分别为标记点a和标记点b。标记点a和标记点b在投影图像中的投影分别是像素点A和像素点B,则线段AB的长度即为标记点a和标记点b的像素距离。
在一个可选示例中,该步骤530可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的距离确定单元执行。
步骤540、若两个目标三维模型分别对应的标记点之间的像素距离小于预设距离,则隐藏其中一个目标三维模型对应的三维标签。
例如,两个目标三维模型分别对应的标记点之间的像素距离小于预设距离时,执行主体可以随机隐藏其中一个目标三维模型对应的三维标签。或者,执行主体可以将距离点位更远的标记点隐藏。
在一个可选示例中,该步骤540可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的标签隐藏单元执行。
图5所示的实施例体现了:当两个目标三维模型分别对应的标记点在用户设备的投影图像中的像素距离小于预设距离时,隐藏其中一个目标三维模型对应的三维标签的。可以避免距离过近的三维标签造成遮挡或重叠,有助于进一步提高三维标签的呈现效果。
接着参考图6,图6示出了本公开的三维标签的交互呈现方法的又一个实施例的流程图,如图6所示,在图2或图5所示的流程的基础上,步骤140之后,该方法还可以进一步包括以下步骤:
步骤610、当用户在浏览三维场景的过程中更换点位和/或视角时,基于新点位和/或新视角,确定用新目标三维模型。
通常,用户在浏览三维场景时,为了更全面地查看三维场景,可以更换点位和视角,以便从多个位置和多个角度获取三维场景中的信息。当用户更换点位和/或视角时,三维场景中的可视区域以及可见的三维模型通常也会随之变换。
在本实施例中,新目标三维模型表示用户更换点位和/或视角后,可以观察到的、待标记的三维模型。
可以理解的是,新目标三维模型可以包括更换点位和/或视角之前的旧目标三维模型。
在一个可选示例中,该步骤610可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的更新单元执行。
步骤620、针对新目标三维模型,再次执行确定标记点和展示平面的操作以及确定三维标签在三维场景中的空间位姿的操作,得到新目标三维模型对应的三维标签在三维场景中的空间位姿。
例如,执行主体可以针对新目标三维模型,再次执行上述步骤120至步骤130,以及每个步骤对应的可选的实施方式,以确定出行目标三维模型对应的三维标签在三维场景中的空间位姿。
在一个可选示例中,该步骤620可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的迭代单元执行。
步骤630、基于新目标三维模型对应的三维标签在三维场景中的空间位姿,在三维场景中呈现新目标三维模型对应的三维标签。
例如,执行主体可以再次执行上述步骤150在三维场景中呈现新目标三维模型对应的三维标签。
再例如,执行主体再次执行步骤150之后,还可以再次执行上述步骤510至步骤540,以进一步提高三维标签的展示效果。
进一步结合图3进行示例性说明,当用户在浏览过程中从点位300更换到点位340时,执行主体可以将三维模型320确定为新目标三维模型,再次执行上述步骤120至步骤140之后,可以在三维场景中呈现三维模型320对应的三维标签。
在一个可选示例中,该步骤630可以由处理器调用存储器存储的相应指令执行,也可以由被处理器运行的呈现单元执行。
在图6所示的实施例中,当用户更换点位和/或视角时,可以根据新点位和/或新视角, 同步更新三维场景中呈现的三维标签,可以进一步提高三维场景与用户的交互智能程度和展示效果。
在本实施例的一些可选的实施方式中,在步骤630之前,当用户在浏览三维场景的过程中更换点位和/或视角时,隐藏至少一个目标三维模型分别对应的三维标签。
通过隐藏旧三维标签,一方面可以降低浏览三维场景时的内存消耗,另一方面,可以避免旧三维标签与新三维标签的冲突。
本公开实施例提供的任一种三维标签的交互呈现方法可以由任意适当的具有数据处理能力的设备执行,包括但不限于:终端设备和服务器等。或者,本公开实施例提供的任一种三维标签的交互呈现方法可以由处理器执行,如处理器通过调用存储器存储的相应指令来执行本公开实施例提及的三维标签的交互呈现方法。下文不再赘述。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
下面参考图7,图7示出了本公开的一种三维标签的交互呈现装置的一个实施例的结构示意图。该实施例的装置可用于实现本公开上述各方法实施例。如图7所示,该装置包括:第一确定单元710,被配置成基于用户浏览三维场景时用户的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型,目标三维场景三维模型对应有一个待标记的三维标签和多个用于标记三维标签的参考点以及每个参考点所在的参考平面;第二确定单元720,被配置成将目标三维模型中距离点位最近的参考点确定为该目标三维模型对应的标记点,并将该标记点所在的参考平面确定为该目标三维模型对应的展示平面,得到目标三维模型对应的标记点和展示平面;位姿确定单元730,被配置成基于目标三维模型对应的标记点和展示平面,确定目标三维模型对应的三维标签在三维场景中的空间位姿;标签呈现单元740,被配置成基于目标三维模型对应的三维标签在三维场景中的空间位姿,在三维场景中呈现三维模型对应的三维标签。
在其中一个实施方式中,该装置还包括:区域确定单元,被配置成基于点位和视角,确定三维场景中的可视区域;图像获取单元,被配置成获取可视区域在用户设备中的投影图像,用户设备为用户浏览三维场景所使用的电子设备;距离确定单元,被配置成在投影图像中确定至少一个目标三维模型分别对应的标记点之间的像素距离;标签隐藏单元,被配置成若两个目标三维模型分别对应的标记点之间的像素距离小于预设距离,则隐藏其中一个目标三维模型对应的三维标签。
在其中一个实施方式中,标签呈现单元740还包括调整模块,被配置成若目标三维模型对应的标记点在三维场景中的高度大于预设高度,在该标记点所在的展示平面内沿高度减小的方向平移目标三维模型对应的三维标签,以降低目标三维模型对应的三维标签在三维场景中的高度。
在其中一个实施方式中,第一确定单元710还包括筛选模块,被配置成若目标三维模型对应的参考点中同时存在两个或两个以上参考点距离点位最近,分别确定两个或两个以上参考点所在的参考平面在视角内的投影,并将投影面积最大的参考平面中包括的参考点确定为标记点。
在其中一个实施方式中,该装置还包括:更新单元,被配置成当用户在浏览三维场景的过程中更换点位和/或视角时,基于新点位和/或新视角,确定新目标三维模型;迭代单元,被配置成针对新目标三维模型,再次执行确定标记点和展示平面的操作以及确定三维标签在三维场景中的空间位姿的操作,得到新目标三维模型对应的三维标签在三维场景中的空间位姿;呈现单元,被配置成基于新目标三维模型对应的三维标签在三维场景中的空间位姿,在三维场景中呈现新目标三维模型对应的三维标签。
在其中一个实施方式中,该装置还包括隐藏单元,被配置成当用户在浏览三维场景的过程中更换点位和/或视角时,隐藏至少一个目标三维模型分别对应的三维标签。
另外,本公开实施例还提供了一种电子设备,包括:
存储器,用于存储计算机程序;
处理器,用于执行所述存储器中存储的计算机程序,且所述计算机程序被执行时,实现本公开上述任一实施例所述的三维标签的交互呈现方法。
另外,本公开实施例还提供了一种计算机可读存储介质,其上存储有计算机程序指令,该计算机程序指令被处理器执行时,可以实现上述任一实施例的三维标签的交互呈现方法。
另外,本公开实施例还提供了一种计算机程序产品,包括计算机程序指令,该计算机程序指令被处理器执行时,可以实现本公开上述任一实施例的三维标签的交互呈现方法。
图8为本公开电子设备一个应用实施例的结构示意图。下面,参考图8来描述根据本公开实施例的电子设备。如图8所示,电子设备包括一个或多个处理器和存储器。
处理器可以是中央处理单元(CPU)或者具有数据处理能力和/或指令执行能力的其他形式的处理单元,并且可以控制电子设备中的其他组件以执行期望的功能。
存储器可以包括一个或多个计算机程序产品,所述计算机程序产品可以包括各种形式的计算机可读存储介质,例如易失性存储器和/或非易失性存储器。所述易失性存储器例如可以包括随机存取存储器(RAM)和/或高速缓冲存储器(cache)等。所述非易失性存储器例如可以包括只读存储器(ROM)、硬盘、闪存等。在所述计算机可读存储介质上可以存储一个或多个计算机程序指令,处理器可以运行所述程序指令,以实现上文所述的本公开的各个实施例的三维标签的交互呈现方法以及/或者其他期望的功能。
在一个示例中,电子设备还可以包括:输入装置和输出装置,这些组件通过总线系统和/或其他形式的连接机构(未示出)互连。
此外,该输入设备还可以包括例如键盘、鼠标等等。
该输出装置可以向外部输出各种信息,包括确定出的距离信息、方向信息等。该输出设备可以包括例如显示器、扬声器、打印机、以及通信网络及其所连接的远程输出设备等等。
当然,为了简化,图8中仅示出了该电子设备中与本公开有关的组件中的一些,省略了诸如总线、输入/输出接口等等的组件。除此之外,根据具体应用情况,电子设备还可以包括任何其他适当的组件。
除了上述方法和设备以外,本公开的实施例还可以是计算机程序产品,其包括计算机程序指令,所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述部分中描述的根据本公开各种实施例的三维标签的交互呈现方法中的步骤。
所述计算机程序产品可以以一种或多种程序设计语言的任意组合来编写用于执行本公开实施例操作的程序代码,所述程序设计语言包括面向对象的程序设计语言,诸如Java、C++等,还包括常规的过程式程序设计语言,诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算设备上执行、部分地在用户设备上执行、作为一个独立的软件包执行、部分在用户计算设备上部分在远程计算设备上执行、或者完全在远程计算设备或服务器上执行。
此外,本公开的实施例还可以是计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令在被处理器运行时使得所述处理器执行本说明书上述部分中描述的根据本公开各种实施例的三维标签的交互呈现方法中的步骤。
所述计算机可读存储介质可以采用一个或多个可读介质的任意组合。可读介质可以是可读信号介质或者可读存储介质。可读存储介质例如可以包括但不限于电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式盘、硬盘、随机存取 存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上结合具体实施例描述了本公开的基本原理,但是,需要指出的是,在本公开中提及的优点、优势、效果等仅是示例而非限制,不能认为这些优点、优势、效果等是本公开的各个实施例必须具备的。另外,上述公开的具体细节仅是为了示例的作用和便于理解的作用,而非限制,上述细节并不限制本公开为必须采用上述具体的细节来实现。
本说明书中各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其它实施例的不同之处,各个实施例之间相同或相似的部分相互参见即可。对于系统实施例而言,由于其与方法实施例基本对应,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本公开中涉及的器件、装置、设备、系统的方框图仅作为例示性的例子并且不意图要求或暗示必须按照方框图示出的方式进行连接、布置、配置。如本领域技术人员将认识到的,可以按任意方式连接、布置、配置这些器件、装置、设备、系统。诸如“包括”、“包含”、“具有”等等的词语是开放性词汇,指“包括但不限于”,且可与其互换使用。这里所使用的词汇“或”和“和”指词汇“和/或”,且可与其互换使用,除非上下文明确指示不是如此。这里所使用的词汇“诸如”指词组“诸如但不限于”,且可与其互换使用。
可能以许多方式来实现本公开的方法和装置。例如,可通过软件、硬件、固件或者软件、硬件、固件的任何组合来实现本公开的方法和装置。用于所述方法的步骤的上述顺序仅是为了进行说明,本公开的方法的步骤不限于以上具体描述的顺序,除非以其它方式特别说明。此外,在一些实施例中,还可将本公开实施为记录在记录介质中的程序,这些程序包括用于实现根据本公开的方法的机器可读指令。因而,本公开还覆盖存储用于执行根据本公开的方法的程序的记录介质。
还需要指出的是,在本公开的装置、设备和方法中,各部件或各步骤是可以分解和/或重新组合的。这些分解和/或重新组合应视为本公开的等效方案。
提供所公开的方面的以上描述以使本领域的任何技术人员能够做出或者使用本公开。对这些方面的各种修改对于本领域技术人员而言是非常显而易见的,并且在此定义的一般原理可以应用于其他方面而不脱离本公开的范围。因此,本公开不意图被限制到在此示出的方面,而是按照与在此公开的原理和新颖的特征一致的最宽范围。
为了例示和描述的目的已经给出了以上描述。此外,此描述不意图将本公开的实施例限制到在此公开的形式。尽管以上已经讨论了多个示例方面和实施例,但是本领域技术人员将认识到其某些变型、修改、改变、添加和子组合。

Claims (15)

  1. 一种三维标签的交互呈现方法,其特征在于,包括:
    基于用户浏览三维场景时用户的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型,所述目标三维模型对应有一个待标记的三维标签和多个用于标记三维标签的参考点以及每个所述参考点所在的参考平面;
    将所述目标三维模型对应的、距离所述点位最近的参考点确定为所述目标三维模型对应的标记点,并将该标记点所在的参考平面确定为所述目标三维模型对应的展示平面,得到所述目标三维模型对应的标记点和展示平面;
    基于所述目标三维模型对应的标记点和展示平面,确定所述目标三维模型对应的三维标签在所述三维场景中的空间位姿;
    基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述目标三维模型对应的三维标签。
  2. 根据权利要求1所述的方法,其特征在于,基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述目标三维模型对应的三维标签之后,所述方法还包括:
    基于所述点位和所述视角,确定所述三维场景中的可视区域;
    获取所述可视区域在用户设备中的投影图像,所述用户设备为用户浏览所述三维场景所使用的电子设备;
    在所述投影图像中确定至少一个所述目标三维模型分别对应的标记点之间的像素距离;
    若两个所述目标三维模型分别对应的标记点之间的像素距离小于预设距离,则隐藏其中一个所述目标三维模型对应的三维标签。
  3. 根据权利要求1所述的方法,其特征在于,基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述目标三维模型对应的三维标签,包括:
    若所述目标三维模型对应的标记点在所述三维场景中的高度大于预设高度,在该标记点所在的展示平面内沿高度减小的方向平移所述目标三维模型对应的三维标签,以降低所述目标三维模型对应的三维标签在所述三维场景中的高度。
  4. 根据权利要求1所述的方法,其特征在于,将所述目标三维模型对应的、距离所述点位最近的参考点确定为所述目标三维模型对应的标记点,包括:
    若所述目标三维模型对应的参考点中同时存在两个或两个以上参考点距离所述点位最近,分别确定所述两个或两个以上参考点所在的参考平面在所述视角内的投影,并将投影面积最大的参考平面中包括的参考点确定为标记点。
  5. 根据权利要求1至4之一所述的方法,其特征在于,基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述目标三维模型对应的三维标签之后,所述方法还包括:
    当用户在浏览所述三维场景的过程中更换点位和/或视角时,基于新点位和/或新视角,确定新目标三维模型;
    针对所述新目标三维模型,再次执行所述确定标记点和展示平面的操作以及所述确定三维标签在所述三维场景中的空间位姿的操作,得到所述新目标三维模型对应的三维标签在所述三维场景中的空间位姿;
    基于所述新目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述新目标三维模型对应的三维标签。
  6. 根据权利要求5所述的方法,其特征在于,基于所述新目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述新三维模型对应的三维标签之前,所述方法还包括:
    当用户在浏览所述三维场景的过程中更换点位和/或视角时,隐藏至少一个所述目标三维模型分别对应的三维标签。
  7. 一种三维标签的交互呈现装置,其特征在于,包括:
    第一确定单元,被配置成基于用户浏览三维场景时用户的点位和视角,从用户可见的三维模型中确定出至少一个目标三维模型,所述目标三维模型对应有一个待标记的三维标签和多个用于标记三维标签的参考点以及每个所述参考点所在的参考平面;
    第二确定单元,被配置成将所述目标三维模型对应的、距离所述点位最近的参考点确定为所述目标三维模型对应的标记点,并将该标记点所在的参考平面确定为所述目标三维模型对应的展示平面,得到所述目标三维模型对应的标记点和展示平面;
    位姿确定单元,被配置成基于所述目标三维模型对应的标记点和展示平面,确定所述目标三维模型对应的三维标签在所述三维场景中的空间位姿;
    标签呈现单元,被配置成基于所述目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述三维模型对应的三维标签。
  8. 根据权利要求7所述的装置,其特征在于,所述装置还包括:
    区域确定单元,被配置成基于所述点位和所述视角,确定所述三维场景中的可视区域;
    图像获取单元,被配置成获取所述可视区域在用户设备中的投影图像,所述用户设备为用户浏览所述三维场景所使用的电子设备;
    距离确定单元,被配置成在所述投影图像中确定至少一个所述目标三维模型分别对应的标记点之间的像素距离;
    标签隐藏单元,被配置成若两个所述目标三维模型分别对应的标记点之间的像素距离小于预设距离,则隐藏其中一个所述目标三维模型对应的三维标签。
  9. 根据权利要求7所述的装置,其特征在于,所述标签呈现单元还包括调整模块,被配置成:若所述目标三维模型对应的标记点在所述三维场景中的高度大于预设高度,在该标记点所在的展示平面内沿高度减小的方向平移所述目标三维模型对应的三维标签,以降低所述目标三维模型对应的三维标签在所述三维场景中的高度。
  10. 根据权利要求7所述的装置,其特征在于,所述第一确定单元还包括筛选模块,被配置成:若所述目标三维模型对应的参考点中同时存在两个或两个以上参考点距离所述点位最近,分别确定所述两个或两个以上参考点所在的参考平面在所述视角内的投影,并将投影面积最大的参考平面中包括的参考点确定为标记点。
  11. 根据权利要求7至10之一所述的装置,其特征在于,所述装置还包括:更新单元,被配置成:当用户在浏览所述三维场景的过程中更换点位和/或视角时,基于新点位和/或新视角,确定新目标三维模型;针对所述新目标三维模型,再次执行所述确定标记点和展示平面的操作以及所述确定三维标签在所述三维场景中的空间位姿的操作,得到所述新目标三 维模型对应的三维标签在所述三维场景中的空间位姿;基于所述新目标三维模型对应的三维标签在所述三维场景中的空间位姿,在所述三维场景中呈现所述新目标三维模型对应的三维标签。
  12. 根据权利要求11所述的装置,其特征在于,所述装置还包括隐藏单元,被配置成:当用户在浏览所述三维场景的过程中更换点位和/或视角时,隐藏至少一个所述目标三维模型分别对应的三维标签。
  13. 一种电子设备,其特征在于,包括:
    存储器,用于存储计算机程序产品;
    处理器,用于执行所述存储器中存储的计算机程序产品,且所述计算机程序产品被执行时,实现上述权利要求1-6任一所述的方法。
  14. 一种计算机可读存储介质,其上存储有计算机程序指令,其特征在于,该计算机程序指令被处理器执行时,实现上述权利要求1-6任一所述的方法。
  15. 一种计算机程序产品,包括计算机程序指令,其特征在于,该计算机程序指令被处理器执行时,实现上述权利要求1-6之一所述的方法。
PCT/CN2023/085213 2022-04-22 2023-03-30 三维标签的交互呈现方法、装置、设备、介质和程序产品 WO2023202349A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210427645.8A CN114842175B (zh) 2022-04-22 2022-04-22 三维标签的交互呈现方法、装置、设备和介质
CN202210427645.8 2022-04-22

Publications (1)

Publication Number Publication Date
WO2023202349A1 true WO2023202349A1 (zh) 2023-10-26

Family

ID=82565794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/085213 WO2023202349A1 (zh) 2022-04-22 2023-03-30 三维标签的交互呈现方法、装置、设备、介质和程序产品

Country Status (2)

Country Link
CN (1) CN114842175B (zh)
WO (1) WO2023202349A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114842175B (zh) * 2022-04-22 2023-03-24 如你所视(北京)科技有限公司 Interactive presentation method, apparatus, device and medium for three-dimensional label
CN115761122B (zh) * 2022-11-11 2023-07-14 贝壳找房(北京)科技有限公司 Method, apparatus, device and medium for implementing a three-dimensional auxiliary ruler

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825551A (zh) * 2016-03-11 2016-08-03 广州视睿电子科技有限公司 Three-dimensional label implementation method and apparatus
US20180143756A1 (en) * 2012-06-22 2018-05-24 Matterport, Inc. Defining, displaying and interacting with tags in a three-dimensional model
US20180342088A1 (en) * 2017-05-24 2018-11-29 Diehl Aerospace Gmbh Method for producing a 2d image of a 3d surface
CN110321048A (zh) * 2018-03-30 2019-10-11 阿里巴巴集团控股有限公司 Three-dimensional panoramic scene information processing and interaction method and apparatus
CN113610993A (zh) * 2021-08-05 2021-11-05 南京师范大学 3D map building labeling method based on candidate label evaluation
CN114140528A (zh) * 2021-11-23 2022-03-04 北京市商汤科技开发有限公司 Data annotation method, apparatus, computer device and storage medium
CN114842175A (zh) * 2022-04-22 2022-08-02 如你所视(北京)科技有限公司 Interactive presentation method, apparatus, device, medium and program product for three-dimensional label

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521852B (zh) * 2011-11-24 2015-03-25 中国船舶重工集团公司第七0九研究所 Target label presentation method independent of three-dimensional scene space
US9530239B2 (en) * 2013-11-14 2016-12-27 Microsoft Technology Licensing, Llc Maintaining 3D labels as stable objects in 3D world
US10751548B2 (en) * 2017-07-28 2020-08-25 Elekta, Inc. Automated image segmentation using DCNN such as for radiation therapy
CN111047717A (zh) * 2019-12-24 2020-04-21 北京法之运科技有限公司 Method for text annotation of a three-dimensional model
CN113781628B (zh) * 2020-11-26 2024-10-18 北京沃东天骏信息技术有限公司 Three-dimensional scene construction method and apparatus
CN112907760B (zh) * 2021-02-09 2023-03-24 浙江商汤科技开发有限公司 Annotation method and apparatus for three-dimensional object, tool, electronic device and storage medium
CN113048980B (zh) * 2021-03-11 2023-03-14 浙江商汤科技开发有限公司 Pose optimization method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN114842175A (zh) 2022-08-02
CN114842175B (zh) 2023-03-24

Similar Documents

Publication Publication Date Title
US12120471B2 (en) System and method for interactive projection
US10755485B2 (en) Augmented reality product preview
WO2023202349A1 (zh) Interactive presentation method, apparatus, device, medium and program product for three-dimensional label
US11636660B2 (en) Object creation with physical manipulation
US10825234B2 (en) Previewing 3D content using incomplete original model data
CN107357503B (zh) Adaptive display method and system for three-dimensional model of industrial equipment
US10424009B1 (en) Shopping experience using multiple computing devices
CN114758075B (zh) Method, apparatus and storage medium for generating three-dimensional label
WO2023241065A1 (zh) Method, apparatus, device and medium for inverse rendering of image
WO2023246189A1 (zh) Image information display method and apparatus
US20230175858A1 (en) Three-dimensional path display method, device, readable storage medium and electronic apparatus
CN111562845B (zh) Method, apparatus and device for realizing interaction in a three-dimensional spatial scene
WO2023197657A1 (zh) Method, apparatus and computer program product for processing a VR scene
WO2023098915A1 (zh) Content display method and apparatus in a three-dimensional house model
CN115063564B (zh) Method, apparatus and medium for displaying item labels in a two-dimensional display image
CN115512046B (zh) Panorama display method and apparatus, device, and medium for point positions outside a model
CN115454255B (zh) Switching method and apparatus for item display, electronic device, storage medium
CN115455552A (zh) Model editing method and apparatus, electronic device, storage medium, product
CN116594531A (zh) Object display method and apparatus, electronic device and storage medium
CN117611763A (zh) Generation method, apparatus, medium and device for a building cluster model
US20170186233A1 (en) Method and electronic device for implementing page scrolling special effect
Colubri et al. Drawing in VR

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23791006

Country of ref document: EP

Kind code of ref document: A1