
CN111640193A: Word processing method, word processing device, computer equipment and storage medium

Info

Publication number: CN111640193A
Application number: CN202010508207.5A
Authority: CN (China)
Prior art keywords: virtual character, preset, character animation, target, entity
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Other languages: Chinese (zh)
Inventors: 潘思霁, 李炳泽, 武明飞
Applicant and current assignee: Zhejiang Sensetime Technology Development Co., Ltd. (also rendered Zhejiang Shangtang Technology Development Co., Ltd.; the listed assignee may be inaccurate)
Priority: CN202010508207.5A

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06F ELECTRIC DIGITAL DATA PROCESSING
                • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
                    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
                        • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
            • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
                • G06Q 50/00 ICT specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
                    • G06Q 50/10 Services
                        • G06Q 50/14 Travel agencies
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 13/00 Animation
                    • G06T 13/20 3D [Three Dimensional] animation
                        • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
                    • G06T 13/80 2D [Two Dimensional] animation, e.g. using sprites
                • G06T 15/00 3D [Three Dimensional] image rendering
                    • G06T 15/005 General purpose rendering architectures
                • G06T 19/00 Manipulating 3D models or images for computer graphics
                    • G06T 19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a word processing method, apparatus, computer device and storage medium. The method comprises: acquiring a real scene picture captured by an augmented reality (AR) device; identifying, based on the real scene picture, entity characters displayed in the picture and the display position of those characters; acquiring a virtual character animation of the virtual character described by the entity characters; and presenting, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in a display area associated with the display position.

Description

Word processing method, word processing device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer vision, and in particular to a word processing method and apparatus, a computer device, and a storage medium.
Background
In recent years, with the rapid development of the cultural tourism industry, more and more people visit exhibitions, museums, scenic spots and the like. At present, each item on display is usually accompanied by a textual description, but some groups of users find such descriptions hard to understand; for younger visitors, for example, the text may exceed their comprehension, so the displayed description fails to achieve its intended purpose.
Disclosure of Invention
Embodiments of the present disclosure provide at least a word processing method, a word processing apparatus, a computer device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a word processing method, comprising:
acquiring a real scene picture captured by an augmented reality (AR) device;
identifying, based on the real scene picture, entity characters displayed in the real scene picture and a display position of the entity characters;
acquiring a virtual character animation of the virtual character described by the entity characters;
and presenting, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in a display area associated with the display position.
In the embodiments of the present disclosure, by identifying the entity characters presented in the real scene picture and their display position, the virtual character described by those characters and the corresponding virtual character animation can be determined, and an AR effect combining the real scene picture with the virtual character animation can then be presented on the AR device, with the display area of the animation associated with the display position of the characters. This gives the user the AR visual effect of the virtual character described by the entity characters 'coming alive' in the real scene. Such AR-based word processing breaks through the limits of traditional two-dimensional and closed three-dimensional display: it enriches the forms in which textual content can be presented, improves the user's visual experience, and helps the user better understand the written introduction provided for a display item and, in turn, the content that the item presents.
In some embodiments of the disclosure, the acquiring of the virtual character animation of the virtual character described by the entity characters comprises:
matching the entity characters against a plurality of preset texts in a preset text library;
and, upon detecting that a target preset text matching the entity characters exists among the plurality of preset texts, acquiring the virtual character animation of the virtual character corresponding to the target preset text.
In this embodiment, preset texts related to the display items can be stored in advance in the preset text library. When a user holding an AR device captures a real scene picture, whether entity characters matching one of the preset texts appear in the picture determines whether the corresponding AR effect is triggered. This strengthens the interaction between the display process and the user and improves the user experience.
In some embodiments of the present disclosure, each preset text among the plurality of preset texts corresponds to a first identifier, and the acquiring of the virtual character animation of the virtual character corresponding to the target preset text comprises:
determining a target first identifier of the target preset text;
acquiring target virtual character animation information corresponding to the target first identifier from a preset virtual character animation library, wherein the preset virtual character animation library comprises multiple pieces of virtual character animation information and each piece corresponds to one first identifier;
and determining the virtual character animation based on the target virtual character animation information.
In this embodiment, a binding relationship may be established between the virtual character animation information and the first identifier of a preset text, so that the first identifiers, the preset texts, and the corresponding animation information can be configured according to the display requirements of the actual display items; for example, they can be configured per display theme for different display items. The corresponding virtual character animation information can then be obtained based on how the currently recognized entity characters match the preset texts.
In some embodiments of the present disclosure, the method further comprises: determining, as the display area associated with the display position, a region within a set range centered on the display position; and/or acquiring preset position information pre-bound to the display position, and determining the display area associated with the display position based on the preset position information.
In this embodiment, the display area of the virtual character animation is determined from the display position of the entity characters. Displaying the animation within a set range centered on that position, or within a preset display area, gives the user the visual effect of the virtual character described by the entity characters 'coming alive'.
In some embodiments of the present disclosure, the identifying, based on the real scene picture, of the entity characters displayed in the real scene picture and the display position of the entity characters comprises:
detecting, on the real scene picture, the region where the entity characters are located by using a region proposal network to obtain a target candidate box for that region, and determining the display position of the entity characters based on the position information of the target candidate box;
and performing feature extraction on the image region corresponding to the target candidate box by using a semantic recognition network, and determining the entity characters within the target candidate box based on the extracted text features.
In this embodiment, the pre-trained region proposal network can accurately and quickly locate the target candidate box around the text region, and the semantic recognition network then recognizes the entity characters within it, improving both the accuracy and the efficiency of character recognition.
In a second aspect, an embodiment of the present disclosure further provides a word processing apparatus, comprising:
a first obtaining module, configured to acquire a real scene picture captured by an augmented reality (AR) device;
an identifying module, configured to identify, based on the real scene picture, the entity characters displayed in the picture and the display position of the entity characters;
a second obtaining module, configured to acquire a virtual character animation of the virtual character described by the entity characters;
and a display module, configured to present, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in a display area associated with the display position.
In some embodiments of the disclosure, the second obtaining module, when acquiring the virtual character animation of the virtual character described by the entity characters, is specifically configured to:
match the entity characters against a plurality of preset texts in a preset text library;
and, upon detecting that a target preset text matching the entity characters exists among the plurality of preset texts, acquire the virtual character animation of the virtual character corresponding to the target preset text.
In some embodiments of the present disclosure, each preset text among the plurality of preset texts corresponds to a first identifier, and the second obtaining module, when acquiring the virtual character animation of the virtual character corresponding to the target preset text, is specifically configured to:
determine a target first identifier of the target preset text;
acquire target virtual character animation information corresponding to the target first identifier from a preset virtual character animation library, wherein the preset virtual character animation library comprises multiple pieces of virtual character animation information and each piece corresponds to one first identifier;
and determine the virtual character animation based on the target virtual character animation information.
In some embodiments of the present disclosure, the apparatus further comprises:
a display area determining module, configured to determine, as the display area associated with the display position, a region within a set range centered on the display position; and/or to acquire preset position information pre-bound to the display position and determine the display area associated with the display position based on that preset position information.
In some embodiments of the present disclosure, the identifying module, when identifying the entity characters displayed in the real scene picture and their display position based on the real scene picture, is specifically configured to:
detect, on the real scene picture, the region where the entity characters are located by using a region proposal network to obtain a target candidate box for that region, and determine the display position of the entity characters based on the position information of the target candidate box;
and perform feature extraction on the image region corresponding to the target candidate box by using a semantic recognition network, and determine the entity characters within the target candidate box based on the extracted text features.
In a third aspect, the present disclosure further provides a computer device comprising a processor and a memory, where the memory stores machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, they perform the steps in the first aspect or in any of its possible implementations.
In a fourth aspect, the present disclosure further provides a computer-readable storage medium having a computer program stored thereon; when the computer program is executed, it performs the steps in the first aspect or in any of its possible implementations.
According to the method, apparatus, computer device and storage medium provided by the embodiments of the present disclosure, by identifying the entity characters presented in the real scene picture and their display position, the virtual character described by those characters and the corresponding virtual character animation can be determined, and an AR effect combining the real scene picture with the virtual character animation can then be presented on the AR device, with the display area of the animation associated with the display position of the characters. This gives the user the AR visual effect of the virtual character described by the entity characters 'coming alive' in the real scene. Such AR-based word processing breaks through the limits of traditional two-dimensional and closed three-dimensional display: it enriches the forms in which textual content can be presented, improves the user's visual experience, and helps the user better understand the written introduction provided for a display item and, in turn, the content that the item presents.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings needed in the embodiments are briefly described below. The drawings, which are incorporated in and form part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive further related drawings from them without inventive effort.
FIG. 1 is a flowchart of a word processing method provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of an example of a word processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a word processing apparatus provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated in the figures, can be arranged and designed in a wide variety of configurations. Therefore, the following detailed description of the embodiments is not intended to limit the scope of the claimed disclosure but merely represents selected embodiments. All other embodiments obtained by those skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the disclosure.
Augmented Reality (AR) technology superimposes simulated entity information (visual information, sound, touch, etc.) onto the real world, so that the real environment and virtual objects are presented in the same picture or space in real time.
The embodiments of the present disclosure may be applied to any computer device supporting AR technology (such as a mobile phone, a tablet, or AR glasses), to a server, or to a combination of the two. Where the disclosure is applied to a server, the server may be connected to other computer devices that have a communication function and a camera; the connection may be wired or wireless, and the wireless connection may be, for example, Bluetooth or wireless fidelity (Wi-Fi).
A detailed description will be given below of a word processing method according to an embodiment of the present disclosure.
Referring to fig. 1, a flow chart of a word processing method provided in the embodiment of the present disclosure is schematically illustrated, which includes the following steps:
s101, acquiring a real scene picture of the AR device.
In the embodiment of the disclosure, the presentation method can be applied to an AR device or a server. When the presentation method is applied to the AR device, an image acquisition device (such as a camera) in the AR device may be used to acquire a real scene picture in a real scene, and the real scene picture of a single frame may be acquired by shooting an image, or the real scene picture of consecutive frames may be acquired by shooting a video. When the presentation method is applied to a server, the AR device or other computer devices with image acquisition functions can send acquired real scene pictures of a single frame or continuous multiple frames to the server. The present disclosure is not limited to the specific manner of image acquisition and the number of frames of the acquired image.
For example, a user standing in an exhibition hall may capture pictures of the items on display in real time and view each item's display content with an AR effect in which the virtual animation has been superimposed.
S102, identifying the entity characters displayed in the real scene picture and the display positions of the entity characters based on the real scene picture.
A real scene picture in the embodiments of the present disclosure is an image of a real scene captured by an AR device or another computer device, and can include at least one entity object in that scene. The entity object may include, but is not limited to, entity characters; for a real scene picture taken in an exhibition hall, for example, the entity object may be the written introduction accompanying at least one item on display.
In the embodiments of the present disclosure, the entity object is taken to be entity characters, and the carrier and display form of the entity characters in the real scene are not limited. The carrier bearing the entity characters may be paper, for example paper on which the description of a displayed item is written, or an electronic display screen on which the description is shown, or the characters may be written directly on a wall or on a plaque.
For example, the textual introduction presented in the real scene may be the name of a display item or its specific display content, and it may be a short keyword, a long sentence, or a paragraph; the disclosure does not limit this.
For example, a textual introduction presented in a real scene may describe a virtual character, for instance one associated with the theme of the display item. If the display item is an exhibition hall themed on "Journey to the West", the entity characters displayed in the hall can describe a character from that story, for example the entity characters "Sun Wukong".
Where the presentation method is applied to the AR device, the device may complete the character recognition and positioning locally, or it may upload the real scene picture to a cloud server, let the server complete recognition and positioning, and receive the results from the server. Where the method is applied to a server, the server completes the recognition and positioning itself.
In the embodiment of the present disclosure, a text region in a real scene picture may be identified and located by means of an Optical Character Recognition (OCR) technology.
In some embodiments, a region proposal network may be used to detect the region where the entity characters are located in the real scene picture, yielding a target candidate box around that region.
In one aspect, the display position of the entity characters may be determined from the position information of the target candidate box, for example the coordinate information of the box's boundary vertices. The positions given by those vertex coordinates may be taken directly as the display position, or the center of those positions may be used instead.
On the other hand, a semantic recognition network can extract features from the image region corresponding to the target candidate box and determine the entity characters within it from the extracted text features. For example, the semantic recognition network may be a Convolutional Recurrent Neural Network (CRNN), which recognizes context features of the character sequence in that image region, classifies based on those features, and derives the final detection result from the classification. After the character sequence is obtained, it can further be segmented semantically to identify keywords, which are then taken as the entity characters in the target candidate box.
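Putting these pieces together, the detect-then-recognize flow might be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: `detect_regions` and `recognize_text` are hypothetical stand-ins for the pre-trained region proposal network and the CRNN-style semantic recognition network.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

import numpy as np

Box = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) of a target candidate box

@dataclass
class DetectedText:
    text: str                      # recognized entity characters (or extracted keyword)
    box: Box                       # target candidate box around the text region
    position: Tuple[float, float]  # display position derived from the box vertices

def locate_and_recognize(
    frame: np.ndarray,
    detect_regions: Callable[[np.ndarray], List[Box]],  # region proposal network (assumed)
    recognize_text: Callable[[np.ndarray], str],        # semantic recognition network (assumed)
) -> List[DetectedText]:
    results: List[DetectedText] = []
    for (x0, y0, x1, y1) in detect_regions(frame):
        crop = frame[y0:y1, x0:x1]                      # image region of the candidate box
        text = recognize_text(crop)                     # CRNN-style sequence recognition
        center = ((x0 + x1) / 2.0, (y0 + y1) / 2.0)     # center of the vertex coordinates
        results.append(DetectedText(text, (x0, y0, x1, y1), center))
    return results
```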
The region proposal network and the semantic recognition network can be obtained by training on preset image samples annotated with reference labels. The two networks may be trained separately, for example by training the region proposal network on image samples annotated with reference candidate boxes and then training the semantic recognition network on image samples annotated with text labels, or they may be trained jointly; the present disclosure does not limit this.
In addition, before the entity characters in the real scene picture are recognized and positioned, the picture can be preprocessed, and the preprocessed image is then fed into the pre-trained region proposal network and semantic recognition network for positioning and recognition. The preprocessing may include, but is not limited to, at least one of: graying, binarization, tilt correction, normalization, and image smoothing.
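As an illustration of such a preprocessing chain, the sketch below uses OpenCV; the concrete steps, ordering and parameters are editorial choices, not ones fixed by the disclosure.

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # graying
    smooth = cv2.GaussianBlur(gray, (3, 3), 0)                # image smoothing
    _, binary = cv2.threshold(                                # binarization (Otsu)
        smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )
    # Tilt correction would fit here (estimate the text skew, rotate the
    # frame back); it is left out of this sketch for brevity.
    return binary.astype(np.float32) / 255.0                  # normalization to [0, 1]
```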
Alternatively, matched characters can be determined from the extracted character features by feature matching. Illustratively, the extracted features are matched against the character features of preset characters in an existing feature library, and the character most similar to the character to be recognized is selected from the library, yielding entity characters composed of one or more characters. Many feature matching methods exist, such as Euclidean-space comparison, relaxation comparison, and dynamic programming comparison.
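A toy rendering of that feature-matching alternative, assuming a small in-memory feature library keyed by character; a real system would index the library rather than scan it linearly.

```python
from typing import Dict

import numpy as np

def match_character(feature: np.ndarray, library: Dict[str, np.ndarray]) -> str:
    # Pick the preset character whose stored feature vector lies closest to
    # the extracted feature in Euclidean space (i.e. has highest similarity).
    return min(library, key=lambda ch: np.linalg.norm(library[ch] - feature))
```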
In the embodiments provided by the disclosure, the pre-trained region proposal network can accurately and quickly locate the target candidate box around the text region, and the semantic recognition network then recognizes the entity characters within it, improving both the accuracy and the efficiency of character recognition. Moreover, before the real scene picture is fed into the character recognition model, a series of preprocessing operations can be applied to remove unnecessary interference noise from the picture, further improving recognition accuracy and efficiency.
S103, acquiring the virtual character animation of the virtual character described by the entity characters.
For example, the virtual character described by the entity text can be determined by recognizing the semantic meaning of the entity text, and the corresponding virtual character animation can be further obtained.
When acquiring the virtual character animation, the virtual character animation may be acquired directly, or the virtual character animation information may be acquired first, and then the virtual character animation may be generated through a series of processes.
Illustratively, the virtual character animation may be a virtual character animation video rendered by a rendering tool. The virtual character animation information may be the rendering parameters required to generate such a video, or the two-dimensional or three-dimensional model parameters of the virtual character in various poses; with those model parameters, the animation effects of the character in different poses can be rendered. For example, the model parameters in the virtual character animation information may include the character's facial key points, limb key points, and the like. The present disclosure does not limit the specific rendering manner.
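One way to picture "virtual character animation information" when it consists of model parameters rather than finished video frames is sketched below; the type and field names are illustrative assumptions, not names taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class CharacterAnimationInfo:
    first_id: str                                                # binds the info to a preset text
    face_keypoints: List[Point3D] = field(default_factory=list)  # facial key points
    limb_keypoints: List[Point3D] = field(default_factory=list)  # limb key points
    pose_clips: List[str] = field(default_factory=list)          # named pose animations
```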
The embodiments of the present disclosure do not limit the content presented by the virtual character animation. Illustratively, animation effects of different poses of the virtual character may be presented, and the character may be two-dimensional or three-dimensional; the disclosure does not limit this. Following the example in step S102, assuming the display item is an exhibition hall themed on "Journey to the West" and the entity characters recognized in the real scene picture are "Sun Wukong", the virtual character described by the characters is Sun Wukong, and the acquired virtual character animation may be an animation video composed of multiple frames related to that character.
Where the presentation method is applied to the AR device, the virtual character animation may be acquired locally or from the cloud, the animation or the information for generating it being stored accordingly. Where the method is applied to a server, the server can directly look up the corresponding animation or animation information in the virtual character animation library that stores it.
S104, presenting, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in the display area associated with the display position.
Presenting the AR effect in the AR device can be understood as presenting a virtual character animation merged into the real scene. This may be done either by directly rendering the virtual character's content against the real scene, or by fusing the virtual character's content with the real scene picture and displaying the fused frame. Which manner is chosen depends on the device type and the picture presentation technology used. In general, since the real scene itself (rather than an imaged picture of it) is directly visible through AR glasses, AR glasses can directly render the virtual character's picture; for mobile terminals such as mobile phones and tablets, which display an imaged picture of the real scene, the AR effect is shown by fusing the real scene picture with the virtual character's content.
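For the fused-frame path used on mobile terminals, the per-frame compositing step might look like the alpha-blending sketch below; the function and its calling convention are assumptions made for illustration.

```python
from typing import Tuple

import numpy as np

def compose_ar_frame(
    scene: np.ndarray,          # H x W x 3 real scene picture
    sprite: np.ndarray,         # h x w x 4 RGBA frame of the virtual character animation
    top_left: Tuple[int, int],  # (row, col) where the display area begins in the scene
) -> np.ndarray:
    # Assumes the sprite lies fully inside the scene frame.
    y, x = top_left
    h, w = sprite.shape[:2]
    out = scene.copy()
    alpha = sprite[:, :, 3:4].astype(np.float32) / 255.0      # per-pixel opacity
    region = out[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * sprite[:, :, :3].astype(np.float32) + (1.0 - alpha) * region
    out[y:y + h, x:x + w] = blended.astype(scene.dtype)       # write back the display area
    return out
```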
Illustratively, the virtual character animation may be displayed in a display area associated with the display position. The display area can be determined in various ways; two exemplary embodiments follow:
In the first embodiment, a region within a set range centered on the display position is determined as the display area associated with the display position.
The set range may lie in the two-dimensional plane of the real scene picture or in the three-dimensional space of the real scene; correspondingly, the display position may be a coordinate point in a two-dimensional plane coordinate system or in a three-dimensional space coordinate system, and the determined display area may be a planar region or a spatial region, in which a two-dimensional or three-dimensional virtual character animation can be displayed. For example, a boundary size of the set range may be preset, and the display area is then the region whose center is the display position and whose side length or radius equals the set boundary size.
In the second embodiment, preset position information pre-bound to the display position is acquired, and the display area associated with the display position is determined based on that preset position information.
In this embodiment, display rules may be configured in advance; for example, preset position information is set for the entity characters shown with each display item, and the corresponding display area is derived from it. If the preset position information specifies a point directly in front of the display position at a set distance, the plane at that point may serve as the display area; if it specifies the region directly in front of the display position within a set distance, the three-dimensional space satisfying it may serve as the display area. Of course, the relative position need not be limited to directly in front; it could equally be directly above or below, and the disclosure does not limit this.
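Both embodiments reduce to simple geometry. The sketch below shows one plausible reading, with the extent and offset values as assumed defaults rather than values from the disclosure.

```python
from typing import Optional, Tuple

Rect = Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)

def display_area(
    position: Tuple[int, int],
    half_extent: int = 100,                          # embodiment 1: the set range
    bound_offset: Optional[Tuple[int, int]] = None,  # embodiment 2: pre-bound preset position
) -> Rect:
    x, y = position
    if bound_offset is not None:
        # Embodiment 2: shift to the preset position pre-bound to the display
        # position, e.g. a fixed distance directly above the recognized text.
        dx, dy = bound_offset
        x, y = x + dx, y + dy
    # A region within a set range centered on the (possibly shifted) position.
    return (x - half_extent, y - half_extent, x + half_extent, y + half_extent)
```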
In the embodiments of the present disclosure, by identifying the entity characters presented in the real scene picture and their display position, the virtual character described by those characters and the corresponding virtual character animation can be determined, and an AR effect combining the real scene picture with the virtual character animation can then be presented on the AR device, with the display area of the animation associated with the display position of the characters. This gives the user the AR visual effect of the virtual character described by the entity characters 'coming alive' in the real scene. Such AR-based word processing breaks through the limits of traditional two-dimensional and closed three-dimensional display: it enriches the forms in which textual content can be presented, improves the user's visual experience, and helps the user better understand the written introduction provided for a display item and, in turn, the content that the item presents.
Based on the foregoing embodiments, the embodiments of the present disclosure further provide an exemplary description of a word processing method. Fig. 2 shows its specific execution flow, which includes the following steps:
s201, acquiring a real scene picture of the AR device.
S202, identifying the entity characters displayed in the real scene picture and the display positions of the entity characters based on the real scene picture.
And S203, matching the entity characters with a plurality of preset texts in a preset text library.
In the embodiment of the present disclosure, multiple preset texts may be preset in the preset text library. For example, preset texts with different themes may be set according to the display themes of different display items, respectively, for representing virtual characters under different themes.
For example, the entity characters can be matched against the preset texts in the preset text library by computing the similarity between the entity characters and each preset text. The preset text with the highest similarity may be determined as the target preset text, or, after the most similar preset text is found, it may be taken as the target preset text only if its similarity exceeds a set threshold. The virtual character represented by the target preset text is then taken as the virtual character described by the entity characters.
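A minimal sketch of this matching step follows; difflib's ratio is only a stand-in for whatever similarity measure an implementation would actually use, and the threshold is an assumed value.

```python
from difflib import SequenceMatcher
from typing import List, Optional

def match_preset_text(entity_text: str, presets: List[str],
                      threshold: float = 0.8) -> Optional[str]:
    # Score every preset text against the recognized entity characters and
    # keep the most similar one; below the threshold, no AR effect triggers.
    scored = [(SequenceMatcher(None, entity_text, p).ratio(), p) for p in presets]
    best_score, best = max(scored)
    return best if best_score >= threshold else None
```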
By storing preset texts related to the display items in the preset text library in advance, when a user holding an AR device captures a real scene picture, the subsequent word processing flow is triggered only if entity characters appear in the picture and match a target preset text in the library. This strengthens the interaction between the display process and the user and improves the user experience.
S204, upon detecting that a target preset text matching the entity characters exists among the plurality of preset texts, acquiring the virtual character animation of the virtual character corresponding to the target preset text.
Specifically, each preset text among the plurality of preset texts corresponds to one first identifier. To acquire the virtual character animation of the virtual character corresponding to the target preset text, the target first identifier of the target preset text is determined first, the target virtual character animation information corresponding to that identifier is then obtained from a preset virtual character animation library, and the virtual character animation is finally determined based on the target virtual character animation information.
The preset virtual character animation library comprises multiple pieces of virtual character animation information, each corresponding to one first identifier.
The virtual character animation information may be a set of rendered video frames constituting the virtual character animation, or the various rendering parameters required to render it; the disclosure does not limit this. When the animation information consists of rendering parameters, determining the virtual character animation based on the target virtual character animation information may mean generating the animation by rendering with those parameters.
In the embodiments of the present disclosure, each preset text may have a corresponding first identifier that identifies the virtual character the text represents; the identifier may take the form of characters, digits, letters, or any other symbols. The target first identifier serves as an index number of the target preset text, and the target virtual character animation information corresponding to that index can be looked up in the preset virtual character animation library. The first identifier thus associates the virtual character animation information with the preset texts in the preset text library, so that the animation information corresponding to the target first identifier can be found.
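The identifier indirection amounts to two lookups, as in the sketch below; the identifiers, file names and clip names are hypothetical examples that reuse the characters mentioned in this description.

```python
from typing import Dict

PRESET_TEXT_IDS: Dict[str, str] = {          # preset text -> first identifier
    "Sun Wukong": "id_001",
    "Hanwudi": "id_002",
}
ANIMATION_LIBRARY: Dict[str, dict] = {       # first identifier -> animation information
    "id_001": {"model": "sun_wukong.glb", "clips": ["greet", "somersault"]},
    "id_002": {"model": "han_wudi.glb", "clips": ["narrate"]},
}

def animation_info_for(target_preset_text: str) -> dict:
    first_id = PRESET_TEXT_IDS[target_preset_text]  # target first identifier
    return ANIMATION_LIBRARY[first_id]              # target virtual character animation info
```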
For example, the target first identifier, the target virtual character animation information, and the target preset text may all belong to the same display theme. In a specific implementation, the associated animation information and preset text can share a display theme, and one preset text may be associated with several pieces of virtual character animation information; that is, a preset text fitting a given display theme can correspond to several animations fitting that theme, which may be presented in sequence or at random, the presentation manner being set by preset presentation rules.
In this embodiment, a binding relationship may be established between the virtual character animation information and the first identifier of a preset text, so that the first identifiers, the preset texts, and the corresponding animation information can be configured according to the display requirements of the actual display items; for example, they can be configured per display theme for different display items, and the corresponding animation information is then obtained based on how the currently recognized entity characters match the preset texts. This diversifies the word processing modes and further improves the user experience.
S205, presenting, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in the display area associated with the display position.
For the features involved in the above exemplary description, reference may be made to the explanations of the related features in the preceding embodiments; those explanations are not repeated here.
The following is an illustration of a specific application scenario of the disclosed embodiments.
First, a preset text library can be established in the cloud or locally. The library records multiple preset texts, for example keywords, and each preset text can also correspond to a first identifier.
Then the scene is scanned. A mobile portable device, such as a mobile phone with a camera, scans the real scene where the AR effect is wanted and sends the video frames captured by the camera to the cloud. On receiving the video frame data, the server recognizes the entity characters in any text regions that appear in it, matches them against the preset texts in the previously established library, and, once a match succeeds, returns the first identifier of the matched preset text to the client.
Further, after the client receives the first identifier of the preset text, it can download the corresponding virtual character animation information from the cloud or read it locally, and use that information to display the AR effect with the corresponding virtual character animation superimposed.
For example, if the entity characters appearing in the real scene are "Hanwudi" (Emperor Wu of Han), scanning them with a mobile phone can display, on the phone screen, the AR effect of a virtual animation of Emperor Wu of Han appearing in the real scene.
By performing semantic recognition and keyword extraction on the text, determining the virtual character it describes, selecting the corresponding virtual character animation, and displaying on a mobile portable device the AR effect of the real scene picture overlaid with that animation, the method breaks through the limits of traditional two-dimensional and closed three-dimensional display and gives the user the AR visual effect of the described virtual character appearing in the real scene. This greatly improves the user experience, makes introductions to historical figures more engaging, and helps users learn about them.
Those skilled in the art will understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, embodiments of the present disclosure also provide a word processing apparatus corresponding to the word processing method. Since the principle by which the apparatus solves the problem is similar to that of the word processing method, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 3, a schematic diagram of a word processing apparatus provided by an embodiment of the present disclosure is shown. The apparatus includes a first obtaining module 31, an identifying module 32, a second obtaining module 33, and a display module 34, and in some embodiments may further include a display area determining module 35.
a first obtaining module 31, configured to acquire a real scene picture captured by an augmented reality (AR) device;
an identifying module 32, configured to identify, based on the real scene picture, the entity characters displayed in the picture and the display position of the entity characters;
a second obtaining module 33, configured to acquire a virtual character animation of the virtual character described by the entity characters;
and a display module 34, configured to present, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in the display area associated with the display position.
In some embodiments of the disclosure, the second obtaining module 33, when acquiring the virtual character animation of the virtual character described by the entity characters, is specifically configured to:
match the entity characters against a plurality of preset texts in a preset text library;
and, upon detecting that a target preset text matching the entity characters exists among the plurality of preset texts, acquire the virtual character animation of the virtual character corresponding to the target preset text.
In some embodiments of the present disclosure, each preset text among the plurality of preset texts corresponds to a first identifier, and the second obtaining module 33, when acquiring the virtual character animation of the virtual character corresponding to the target preset text, is specifically configured to:
determine a target first identifier of the target preset text;
acquire target virtual character animation information corresponding to the target first identifier from a preset virtual character animation library, wherein the preset virtual character animation library comprises multiple pieces of virtual character animation information and each piece corresponds to one first identifier;
and determine the virtual character animation based on the target virtual character animation information.
In some embodiments of the present disclosure, the apparatus further comprises:
a display area determining module 35, configured to determine, as the display area associated with the display position, a region within a set range centered on the display position; and/or to acquire preset position information pre-bound to the display position and determine the display area associated with the display position based on that preset position information.
In some embodiments of the present disclosure, the identifying module 32, when identifying the entity characters displayed in the real scene picture and their display position based on the real scene picture, is specifically configured to:
detect, on the real scene picture, the region where the entity characters are located by using a region proposal network to obtain a target candidate box for that region, and determine the display position of the entity characters based on the position information of the target candidate box;
and perform feature extraction on the image region corresponding to the target candidate box by using a semantic recognition network, and determine the entity characters within the target candidate box based on the extracted text features.
In some embodiments, the functions of the apparatus provided by the embodiments of the present disclosure, or the modules it contains, may be used to execute the methods described in the method embodiments above; for their specific implementation, refer to the description of those embodiments. For brevity, details are not repeated here.
Based on the same technical concept, an embodiment of the present disclosure also provides a computer device. Referring to fig. 4, a schematic structural diagram of a computer device provided by an embodiment of the present disclosure is shown. The device includes a processor 11 and a memory 12; the memory 12 stores machine-readable instructions executable by the processor 11, and when the computer device runs, the machine-readable instructions are executed by the processor 11 to perform the following steps:
acquiring a real scene picture captured by the augmented reality (AR) device; identifying, based on the real scene picture, entity characters displayed in the picture and a display position of the entity characters; acquiring a virtual character animation of the virtual character described by the entity characters; and presenting, on the AR device, an AR effect combining the real scene picture with the virtual character animation, wherein the virtual character animation is displayed in a display area associated with the display position.
For the specific execution process of the instructions, refer to the steps of the word processing method described in the embodiments of the present disclosure; details are not repeated here.
In addition, embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the word processing method in the above method embodiments.
The computer program product of the word processing method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions in the program code may be used to execute the steps of the word processing method described in the above method embodiments, to which reference may be made. Details are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive of within the technical scope of the present disclosure shall be covered by its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A word processing method, comprising:
acquiring a real scene image captured by an augmented reality (AR) device;
identifying, based on the real scene image, physical text displayed in the real scene image and a display position of the physical text;
acquiring a virtual character animation of a virtual character described by the physical text; and
presenting, on the AR device, an AR effect combining the real scene image with the virtual character animation, wherein the virtual character animation is displayed in a display area associated with the display position.
2. The method according to claim 1, further comprising:
determining an area within a set range centered on the display position as the display area associated with the display position; and/or
acquiring preset position information pre-bound to the display position, and determining the display area associated with the display position based on the preset position information.
3. The method according to claim 1 or 2, wherein the acquiring of the virtual character animation of the virtual character described by the physical text comprises:
matching the physical text against a plurality of preset texts in a preset text library; and
after detecting that a target preset text matching the physical text exists among the plurality of preset texts, acquiring the virtual character animation of the virtual character corresponding to the target preset text.
4. The method according to claim 3, wherein each preset text in the plurality of preset texts corresponds to a first identifier; and
the acquiring of the virtual character animation of the virtual character corresponding to the target preset text comprises:
determining a target first identifier of the target preset text;
acquiring target virtual character animation information corresponding to the target first identifier from a preset virtual character animation library, wherein the preset virtual character animation library comprises a plurality of types of virtual character animation information, and each type of virtual character animation information corresponds to one first identifier; and
determining the virtual character animation based on the target virtual character animation information.
5. The method according to any one of claims 1 to 4, wherein the identifying, based on the real scene image, of the physical text displayed in the real scene image and the display position of the physical text comprises:
detecting, using a region proposal network, the area of the real scene image in which the physical text is located, to obtain a target candidate box for that area, and determining the display position of the physical text based on position information of the target candidate box; and
performing, using a semantic recognition network, feature extraction on the image area corresponding to the target candidate box, and determining the physical text within the target candidate box based on the extracted text features.
6. A word processing device, comprising:
a first acquisition module, configured to acquire a real scene image captured by an augmented reality (AR) device;
an identification module, configured to identify, based on the real scene image, physical text displayed in the real scene image and a display position of the physical text;
a second acquisition module, configured to acquire a virtual character animation of a virtual character described by the physical text; and
a presentation module, configured to present, on the AR device, an AR effect combining the real scene image with the virtual character animation, wherein the virtual character animation is displayed in a display area associated with the display position.
7. The device according to claim 6, wherein, when acquiring the virtual character animation of the virtual character described by the physical text, the second acquisition module is specifically configured to:
match the physical text against a plurality of preset texts in a preset text library; and
after detecting that a target preset text matching the physical text exists among the plurality of preset texts, acquire the virtual character animation of the virtual character corresponding to the target preset text.
8. The device according to claim 7, wherein each preset text in the plurality of preset texts corresponds to a first identifier; and
when acquiring the virtual character animation of the virtual character corresponding to the target preset text, the second acquisition module is specifically configured to:
determine a target first identifier of the target preset text;
acquire target virtual character animation information corresponding to the target first identifier from a preset virtual character animation library, wherein the preset virtual character animation library comprises a plurality of types of virtual character animation information, and each type of virtual character animation information corresponds to one first identifier; and
determine the virtual character animation based on the target virtual character animation information.
9. A computer device, comprising a processor and a memory storing machine-readable instructions executable by the processor, wherein, when the machine-readable instructions are executed by the processor, the processor performs the steps of the word processing method according to any one of claims 1 to 5.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a computer device, performs the steps of the word processing method according to any one of claims 1 to 5.
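To make the two-stage identification of claim 5 concrete, here is a hedged Python sketch in which both networks are replaced by stubs; the class names, the canned candidate box, and the decoded text are invented for illustration and do not describe the actual networks used in the embodiments.

```python
# Hypothetical sketch in the spirit of claim 5: a region proposal stage
# yields candidate boxes for text regions, and a semantic recognition stage
# decodes the text inside each box. Both stages are canned stubs here.

from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

class RegionProposalStub:
    """Stands in for the region proposal network: frame -> candidate boxes."""
    def propose(self, frame) -> List[Box]:
        return [(40, 30, 160, 32)]  # canned candidate box for illustration

class SemanticRecognitionStub:
    """Stands in for the semantic recognition network: extracts features from
    the image area under a candidate box and decodes the text it contains."""
    def recognize(self, frame, box: Box) -> str:
        return "West Lake"  # canned decoded text for illustration

def identify_physical_text(frame):
    proposer, recognizer = RegionProposalStub(), SemanticRecognitionStub()
    results = []
    for box in proposer.propose(frame):
        text = recognizer.recognize(frame, box)
        # The display position of the text is derived from the box position.
        results.append({"text": text, "display_position": box[:2]})
    return results

print(identify_physical_text(frame=None))
# [{'text': 'West Lake', 'display_position': (40, 30)}]
```

In a real implementation both stubs would be trained networks; the detection/recognition split shown here simply mirrors the division recited in the claim.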
CN202010508207.5A 2020-06-05 2020-06-05 Word processing method, word processing device, computer equipment and storage medium Pending CN111640193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010508207.5A CN111640193A (en) 2020-06-05 2020-06-05 Word processing method, word processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111640193A 2020-09-08

Family

ID=72330703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508207.5A Pending CN111640193A (en) 2020-06-05 2020-06-05 Word processing method, word processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111640193A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016167691A2 (en) * 2015-04-16 2016-10-20 Общество с ограниченной ответственностью "Лаборатория 24" Teaching method and means for the implementation thereof
CN105022487A (en) * 2015-07-20 2015-11-04 北京易讯理想科技有限公司 Reading method and apparatus based on augmented reality
CN105807917A (en) * 2016-02-29 2016-07-27 广东小天才科技有限公司 Method and device for assisting user in learning characters
CN109840947A (en) * 2017-11-28 2019-06-04 广州腾讯科技有限公司 Implementation method, device, equipment and the storage medium of augmented reality scene
CN109766801A (en) * 2018-12-28 2019-05-17 深圳市掌网科技股份有限公司 Aid reading method, apparatus, readable storage medium storing program for executing and mixed reality equipment
CN110716645A (en) * 2019-10-15 2020-01-21 北京市商汤科技开发有限公司 Augmented reality data presentation method and device, electronic equipment and storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022055421A1 (en) * 2020-09-09 2022-03-17 脸萌有限公司 Augmented reality-based display method, device, and storage medium
US11587280B2 (en) 2020-09-09 2023-02-21 Beijing Zitiao Network Technology Co., Ltd. Augmented reality-based display method and device, and storage medium
RU2801917C1 (en) * 2020-09-09 2023-08-18 Бейджин Цзытяо Нетворк Текнолоджи Ко., Лтд. Method and device for displaying images based on augmented reality and medium for storing information
CN112053450A (en) * 2020-09-10 2020-12-08 脸萌有限公司 Character display method and device, electronic equipment and storage medium
US11836437B2 (en) 2020-09-10 2023-12-05 Lemon Inc. Character display method and apparatus, electronic device, and storage medium
WO2022132033A1 (en) * 2020-12-18 2022-06-23 脸萌有限公司 Display method and apparatus based on augmented reality, and device and storage medium
CN114330353A (en) * 2022-01-06 2022-04-12 腾讯科技(深圳)有限公司 Entity identification method, device, equipment, medium and program product of virtual scene
CN114330353B (en) * 2022-01-06 2023-06-13 腾讯科技(深圳)有限公司 Entity identification method, device, equipment, medium and program product of virtual scene
CN115619912A (en) * 2022-10-27 2023-01-17 深圳市诸葛瓜科技有限公司 Cartoon character display system and method based on virtual reality technology
CN115619912B (en) * 2022-10-27 2023-06-13 深圳市诸葛瓜科技有限公司 Cartoon figure display system and method based on virtual reality technology

Similar Documents

Publication Publication Date Title
CN111640193A (en) Word processing method, word processing device, computer equipment and storage medium
US10032072B1 (en) Text recognition and localization with deep learning
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN108073910B (en) Method and device for generating human face features
US9098888B1 (en) Collaborative text detection and recognition
US9256795B1 (en) Text entity recognition
CN109034069B (en) Method and apparatus for generating information
CN111191067A (en) Picture book identification method, terminal device and computer readable storage medium
US11681409B2 (en) Systems and methods for augmented or mixed reality writing
WO2014024197A1 (en) A method and system for linking printed objects with electronic content
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
KR20200020305A (en) Method and Apparatus for character recognition
CN112991555B (en) Data display method, device, equipment and storage medium
CN112150349A (en) Image processing method and device, computer equipment and storage medium
CN111651049B (en) Interaction method, device, computer equipment and storage medium
CN111638792A (en) AR effect presentation method and device, computer equipment and storage medium
JP7027524B2 (en) Processing of visual input
CN112328088A (en) Image presenting method and device
Beglov Object information based on marker recognition
CN111582281B (en) Picture display optimization method and device, electronic equipment and storage medium
CN111986332A (en) Method and device for displaying message board, electronic equipment and storage medium
US10528852B2 (en) Information processing apparatus, method and computer program product
Li et al. A platform for creating Smartphone apps to enhance Chinese learning using augmented reality
CN111881338A (en) Printed matter content retrieval method based on social software light application applet
CN112070092A (en) Verification code parameter acquisition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200908)