CN108986191B - Character action generation method and device and terminal equipment - Google Patents

Character action generation method and device and terminal equipment

Info

Publication number
CN108986191B
CN108986191B (application CN201810720342.9A)
Authority
CN
China
Prior art keywords
object model
person
keywords related
keyword
keywords
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810720342.9A
Other languages
Chinese (zh)
Other versions
CN108986191A
Inventor
乔慧
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201810720342.9A
Publication of CN108986191A
Application granted
Publication of CN108986191B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a character action generation method, apparatus and terminal device. The method is used for virtual reality and/or augmented reality and comprises the following steps: collecting expression information input by a user, wherein the expression information comprises keywords related to a person and the keywords related to the person comprise keywords related to the person's limb actions; extracting the keywords related to the person from the expression information; obtaining an object model corresponding to the keywords from a pre-stored object model library; and processing the object model according to the expression information to obtain limb action information of the object model. The character action generation method, apparatus and terminal device provided by the embodiment of the invention improve the efficiency of acquiring changes in a character's limb actions in a three-dimensional scene while realizing automatic construction of the three-dimensional scene.

Description

Character action generation method and device and terminal equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for generating a character action, and a terminal device.
Background
With the continuous development of virtual reality and/or augmented reality technology, more and more shared three-dimensional models are available, and three-dimensional scenes built from these shared models are widely applied in various fields. To a great extent this provides users with richer visual experiences and improves the user experience.
In the prior art, when a three-dimensional scene is constructed from existing three-dimensional models, a professional needs to acquire the three-dimensional models of all objects in the scene and then combine them manually to generate the corresponding three-dimensional scene. For example, when a character model in the three-dimensional scene goes through a series of limb-action changes, the professional must first manually acquire a character model for each limb action and then manually combine these models to obtain the complete set of limb-action changes of the character model. As a result, with the existing approach, the efficiency of acquiring character limb-action changes in a three-dimensional scene is low.
Disclosure of Invention
The invention provides a character action generation method, device and terminal equipment, which improve the efficiency of acquiring character limb-action changes in a three-dimensional scene on the basis of realizing automatic construction of the three-dimensional scene.
In a first aspect, an embodiment of the present invention provides a method for generating a character action, where the method is used for virtual reality and/or augmented reality, and the method includes:
collecting expression information input by a user; wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person;
extracting the keywords related to the person from the expression information;
obtaining an object model corresponding to the keyword from a pre-stored object model library;
and processing the object model according to the expression information to obtain limb action information of the object model.
In one possible implementation manner, before the object model corresponding to the keyword is acquired from the pre-stored object model library, the method further includes:
collecting keywords related to the person, and collecting object models corresponding to the keywords;
and establishing an object model library, wherein the object model library comprises association relations between keywords and object models.
In one possible implementation manner, the collecting the object model corresponding to the keyword includes:
collecting a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword;
and training and clustering the plurality of different object models as training samples to obtain the object model corresponding to the keyword.
In one possible implementation manner, the collecting the expression information input by the user includes:
collecting text information input by a user;
correspondingly, the extracting the keywords related to the person from the expression information comprises the following steps:
performing word segmentation on the text information according to a semantic model to obtain phrases;
and extracting the keywords related to the person from the phrases.
In one possible implementation manner, the collecting the expression information input by the user includes:
collecting voice information input by a user;
correspondingly, the extracting the keywords related to the person from the expression information comprises the following steps:
performing voice recognition on the voice information to obtain text information;
performing word segmentation on the text information according to the semantic model to obtain phrases;
and extracting the keywords related to the person from the phrases.
In a second aspect, an embodiment of the present invention provides a device for generating a character action, where the device is used for virtual reality and/or augmented reality, and the device includes:
a collecting unit, used for collecting the expression information input by the user; wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person;
an obtaining unit, configured to extract the keywords related to the person from the expression information;
the obtaining unit is further used for obtaining an object model corresponding to the keyword from a pre-stored object model library;
and the processing unit is used for processing the object model according to the expression information to obtain limb action information of the object model.
In a possible implementation manner, the device for generating character actions further comprises an establishing unit;
the collecting unit is further used for collecting keywords related to the person and collecting object models corresponding to the keywords;
the establishing unit is used for establishing an object model library, and the object model library contains the association relation between the keywords and the object model.
In one possible implementation manner, the collecting unit is specifically configured to collect a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword; and training and clustering the plurality of different object models as training samples to obtain the object model corresponding to the keyword.
In one possible implementation manner, the collecting unit is specifically configured to collect text information input by a user;
correspondingly, the obtaining unit is specifically used for performing word segmentation on the text information according to the semantic model to obtain phrases, and extracting the keywords related to the person from the phrases.
In one possible implementation manner, the collecting unit is specifically configured to collect voice information input by a user;
correspondingly, the obtaining unit is specifically used for performing voice recognition on the voice information to obtain text information; performing word segmentation on the text information according to the semantic model to obtain phrases; and extracting the keywords related to the person from the phrases.
In a third aspect, embodiments of the present invention also provide a terminal device, which may include a processor and a memory, wherein,
the memory is used for storing program instructions;
the processor is configured to read the program instruction in the memory, and execute the method for generating the character action according to any one of the first aspect according to the program instruction in the memory.
In a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium,
wherein the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the method for generating a character action according to any one of the first aspect.
The character action generating method, the character action generating device and the terminal equipment provided by the embodiment of the invention collect the expression information input by the user; wherein, the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person; extracting keywords related to the person from the expression information; obtaining an object model corresponding to the keyword from a pre-stored object model library; and then, processing the object model according to the expression information, thereby obtaining limb action information of the object model. Therefore, after the object model corresponding to the keyword is obtained, the object model can be directly processed according to the expression information, so that limb motion information of the object model is obtained.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present invention;
FIG. 2 is a flowchart of a method for generating a character action according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another method for generating a character action according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a device for generating a character action according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another device for generating character actions according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Specific embodiments of the present disclosure have been shown by way of the above drawings and will be described in more detail below. These drawings and the written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the disclosed concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method for generating the character action provided by the embodiment of the invention can be applied to audio novels. For example, referring to fig. 1, fig. 1 is a schematic diagram of an application scenario provided by an embodiment of the present invention. When a user A listens to an audio novel through a terminal device (such as a mobile phone), in order to improve the reading experience of user A, the three-dimensional scene of the novel can be displayed on the mobile phone synchronously while user A listens. For example, when the mobile phone collects the text information "a 10-year-old girl with a height of 145 cm and a weight of 40 kg, wearing a small skirt, walks around, suddenly sees her mother, and then, holding a gift, runs to her mother", the mobile phone may construct a three-dimensional scene including the girl, and the scene may include other information such as the girl's limb-action change information (from walking to running). In order to display character limb-action changes in the three-dimensional scene and improve the efficiency of acquiring them, in the embodiment of the invention the expression information input by the user can be collected first; wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person; the keywords related to the person are extracted from the expression information; an object model corresponding to the keywords is obtained from a pre-stored object model library; and then the object model is processed according to the expression information, thereby obtaining limb action information of the object model. Compared with the prior art, in which the character models corresponding to individual limb actions must be combined manually to obtain a complete set of limb-action changes of a character model, this improves the efficiency of acquiring character limb-action changes in a three-dimensional scene on the basis of realizing automatic construction of the three-dimensional scene.
The following describes the technical scheme of the present invention and how the technical scheme of the present invention solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes will not be described in detail in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
Fig. 2 is a flow chart of a method for generating a character action according to an embodiment of the present invention, where the method for generating a character action may be used for virtual reality and/or augmented reality, and the method for generating a character action may be performed by a device for generating a character action, and the device for generating a character action may be provided independently or may be integrated in a processor. Referring to fig. 2, the method for generating the character action may include:
S201, collecting expression information input by a user.
Wherein, the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person.
The expression information input by the user may be input as text, i.e. text information; of course, it may also be input by voice, i.e. voice information. For example, when the terminal device collects text information input by the user, it can do so through the screen of the terminal device; when the terminal device collects voice information input by the user, it can do so through the microphone of the terminal device. The expression information comprises at least one keyword related to the person, and the keywords related to the person comprise at least one keyword related to the limb actions of the person.
It should be noted that, the expression information in the embodiment of the present invention may be a sentence, or may be a paragraph composed of multiple sentences, or may be a complete text composed of multiple sentences.
S202, extracting keywords related to the person from the expression information.
After the terminal device collects the expression information input by the user through S201, the keyword related to the person in the expression information may be extracted. By way of example, the keywords related to the person may be words representing the person's age, height, weight, arm movements, leg movements, etc.
S203, acquiring an object model corresponding to the keyword from a pre-stored object model library.
The object model corresponding to the keyword may be a three-dimensional model.
Before obtaining the object model corresponding to the keyword, it is necessary to build an object model library in advance, where a plurality of keywords and object models corresponding to the keywords are stored. For a keyword, it may correspond to one or more object models. For example, for the keyword "running," the corresponding object model may be a running male character model, a running female character model, or a character model running in a different posture. Of course, a plurality of keywords may correspond to one object model. The more keywords related to the characters are acquired, the higher the accuracy of the corresponding object model acquired in the object model library.
After extracting the keywords related to the person from the expression information in S202, the object model corresponding to the keywords may be searched in the object model library established in advance, so as to obtain the object model corresponding to the keywords.
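As an illustrative sketch only (not part of the patent text), the lookup in S203 can be thought of as a mapping from keywords to candidate models; the names ObjectModel, model_library and lookup_models below are assumptions introduced for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    """Hypothetical stand-in for a pre-stored three-dimensional character model."""
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"age": 10, "height_cm": 145}

# Hypothetical pre-stored object model library: one keyword may map to several models.
model_library: dict[str, list[ObjectModel]] = {
    "walking": [ObjectModel("girl_walking"), ObjectModel("boy_walking")],
    "running": [ObjectModel("girl_running"), ObjectModel("boy_running")],
}

def lookup_models(keywords: list[str]) -> dict[str, list[ObjectModel]]:
    """Return the candidate object models stored for each extracted keyword."""
    return {kw: model_library[kw] for kw in keywords if kw in model_library}
```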
S204, processing the object model according to the expression information to obtain limb action information of the object model.
After the object model corresponding to the keyword is acquired from the pre-stored object model library through S203, the object model may be combined with the expression information, thereby obtaining the limb motion information of the object model. For example, if the expression information includes "a person walks around and suddenly starts running", then after the walking character model and the running character model are respectively acquired, the two models can be combined in sequence according to the expression information, so as to obtain the limb motion information of the object model described by the expression information. Compared with the prior art, in which the character models corresponding to individual limb actions must be combined manually to obtain a complete set of limb-action changes of a character model, this improves the efficiency of acquiring character limb-action changes in a three-dimensional scene on the basis of realizing automatic construction of the three-dimensional scene.
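A minimal sketch of the sequence-combination idea described above, assuming each keyword has already been resolved to a single object model; combine_in_sequence is a hypothetical helper, not an API defined by the patent.

```python
def combine_in_sequence(ordered_action_keywords, keyword_to_model):
    """Concatenate object models in the order their action keywords occur in the
    expression information, giving a rough timeline of limb-action changes."""
    timeline = []
    for kw in ordered_action_keywords:
        model = keyword_to_model.get(kw)
        if model is not None:
            timeline.append((kw, model))
    return timeline

# e.g. combine_in_sequence(["walking", "running"], models) ->
# [("walking", <walking model>), ("running", <running model>)]
```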
The method for generating the character action provided by the embodiment of the invention collects the expression information input by the user, wherein the expression information comprises keywords related to the person and the keywords related to the person comprise keywords related to the limb actions of the person; extracts the keywords related to the person from the expression information; obtains an object model corresponding to the keywords from a pre-stored object model library; and then processes the object model according to the expression information, thereby obtaining limb action information of the object model. Therefore, according to the character action generation method provided by the embodiment of the invention, after the object model corresponding to the keyword is obtained, the object model can be processed directly according to the expression information to obtain the limb motion information of the object model. Compared with the prior art, in which the character models corresponding to individual limb actions must be combined manually, a complete set of limb-action changes of the character model can be obtained, and the efficiency of acquiring character limb-action changes in a three-dimensional scene is improved on the basis of realizing automatic construction of the three-dimensional scene.
In order to more clearly illustrate the method for generating a character action according to the embodiment of the present invention, referring to fig. 3, fig. 3 is a flowchart of another method for generating a character action according to the embodiment of the present invention, and in the embodiment shown in fig. 3, taking the expression information input by the user as text information as an example, the method for generating a character action may further include:
s301, collecting text information input by a user.
Wherein, the expression information input by the user comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person.
Similarly, the text information in the embodiment of the invention can be a sentence, a paragraph composed of multiple sentences, or a complete text composed of multiple sentences. The text information includes at least one keyword related to the person, and the keyword related to the person includes at least one keyword related to the action of the limb of the person.
Alternatively, the terminal device may collect the text information input by the user through the screen of the terminal device, and of course, may collect the text information input by the user in other manners, where the embodiment of the present invention is only described by taking the text information input by the user through the screen of the terminal device as an example, but the embodiment of the present invention is not limited thereto.
After the terminal device collects the text information input by the user, the terminal device may extract keywords related to the person in the text information, and optionally, in the embodiment of the present invention, the extracting of the keywords related to the person in the text information may be implemented through the following S302-S303:
S302, performing word segmentation on the text information according to a semantic model to obtain phrases.
After the text information input by the user is collected in S301, word segmentation may be performed on the text information according to the semantic model to obtain phrases. It should be noted that the method for performing word segmentation on text information through a semantic model may be any method disclosed in the prior art, and is not described again in the embodiments of the present invention.
For example, referring to fig. 1, after the terminal device acquires the text information "a 10-year-old girl with a height of 145 cm and a weight of 40 kg, wearing a small skirt, walks around, suddenly sees her mother, and then, holding a gift, runs to her mother", word segmentation is performed on the text information through the semantic model to obtain a plurality of phrases, where the plurality of phrases at least includes: height, 145 cm, weight, 40 kg, 10 years old, girl, wearing, skirt, walking, suddenly, seeing, her own, mother, holding, gift, running, etc.
S303, extracting the keywords related to the person from the phrases.
After word segmentation is performed on the text information according to the semantic model to obtain the phrases, the keywords related to the person can be extracted from the obtained phrases.
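For illustration, word segmentation of Chinese text can be done with an off-the-shelf segmenter; the sketch below uses the open-source jieba library as one possible segmenter and a hypothetical whitelist of person-related keywords for the extraction in S303. The whitelist contents are assumptions, not part of the patent.

```python
import jieba  # open-source Chinese word-segmentation library

# Hypothetical whitelist of person-related keywords (age, build, clothing, limb actions, ...).
PERSON_KEYWORDS = {"身高", "体重", "女孩", "裙子", "走", "跑", "抱", "礼物"}

def extract_person_keywords(text: str) -> list[str]:
    """Segment the text information into phrases, then keep only person-related keywords."""
    phrases = jieba.lcut(text)  # word segmentation -> list of phrases
    return [p for p in phrases if p in PERSON_KEYWORDS]
```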
It should be noted that S301-S303 above take the expression information as text information and describe how to extract the keywords related to the person from it. Of course, the expression information may also be voice information. When the expression information is voice information, voice recognition may first be performed on the voice information to obtain the corresponding text information, so that the voice information input by the user is converted into text information; the keywords related to the person are then extracted from the text information in the same manner as in S302-S303 above, which is not repeated here.
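When the expression information is voice information, any speech-recognition engine can produce the intermediate text information; the sketch below uses the third-party SpeechRecognition package purely as an example, and the choice of recognition backend and language is an assumption.

```python
import speech_recognition as sr  # third-party SpeechRecognition package

def speech_to_text(wav_path: str) -> str:
    """Convert collected voice information into text information before word segmentation."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    # Backend and language are illustrative assumptions; the patent names no specific engine.
    return recognizer.recognize_google(audio, language="zh-CN")
```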
For example, after the word segmentation in S302 produces phrases such as height, 145 cm, weight, 40 kg, 10 years old, girl, wearing, skirt, walking, suddenly, seeing, her own, mother, holding, gift, running, etc., the keywords related to the person can be extracted from these phrases. The keywords related to the person are: height, 145 cm, weight, 40 kg, 10 years old, girl, wearing, skirt, walking, seeing, holding, gift, running.
S304, collecting keywords related to people, and collecting object models corresponding to the keywords.
Before acquiring the limb motion information of the object model corresponding to the keywords, the keywords related to the person need to be collected first, together with the object models corresponding to the keywords. When the object models corresponding to the keywords are collected, one keyword may correspond to a plurality of object models, and of course a plurality of keywords may also correspond to one object model.
Alternatively, the attributes related to the person may include height, weight, age, clothing, arm motion, leg motion, etc., and the words used to represent these attributes may be understood as keywords related to the person. After the keywords related to the person are determined, the object models corresponding to the keywords may be further collected. Optionally, in an embodiment of the present invention, collecting the object models corresponding to the keywords may include: collecting a plurality of different object models obtained when a plurality of users perform the limb action corresponding to the same keyword; and training and clustering the plurality of different object models as training samples to obtain the object model corresponding to the keyword. For example, for the keyword "standing", the limb actions of standing may differ from user to user, so when determining the object model corresponding to the keyword "standing", the different object models corresponding to the plurality of users may be trained and clustered, thereby obtaining the object model corresponding to the keyword "standing".
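One plausible reading of the train-and-cluster step, sketched under the assumption that each collected object model can be represented by a pose feature vector: cluster the vectors and keep the centroid of the most populated cluster as the representative model for the keyword. The feature representation and the use of scikit-learn's KMeans are assumptions, not the patent's stated algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_pose(pose_features: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """pose_features: (n_samples, n_dims) array, one row per collected object model.
    Returns the centroid of the largest cluster as the keyword's representative pose."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pose_features)
    largest_cluster = np.bincount(kmeans.labels_).argmax()
    return kmeans.cluster_centers_[largest_cluster]
```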
S305, establishing an object model library.
The object model library comprises association relations between keywords and object models.
After the keywords related to the person and the object models corresponding to the keywords are collected through S304, an object model library may be built according to the association relationships between the keywords and the object models.
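The association relationship between keywords and object models could, for example, be persisted as a many-to-many table; the SQLite schema below is only an assumed illustration of such a library, not a structure specified by the patent.

```python
import sqlite3

def build_model_library(db_path: str, associations: list[tuple[str, str]]) -> None:
    """associations: (keyword, model_id) pairs; one keyword may map to several models
    and several keywords may map to one model (many-to-many association)."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS keyword_model ("
        "keyword TEXT NOT NULL, model_id TEXT NOT NULL, "
        "PRIMARY KEY (keyword, model_id))"
    )
    con.executemany("INSERT OR IGNORE INTO keyword_model VALUES (?, ?)", associations)
    con.commit()
    con.close()
```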
It should be noted that there is no fixed order between S301-S303 and S304-S305: S301-S303 may be executed first and then S304-S305; S304-S305 may be executed first and then S301-S303; or the two groups may be executed simultaneously. The embodiment of the present invention is described by executing S301-S303 first and S304-S305 second, but the embodiment of the present invention is not limited thereto. In general, S304-S305 may be performed first, that is, the keywords related to the person and the object models corresponding to the keywords may be collected and the object model library may be built in advance. S304-S305 need not be performed every time limb motion information of an object model is acquired; instead, the object model library may be built when limb motion information is acquired for the first time, and then, when new keywords and corresponding object models appear, they may be added to the object model library so that the library is updated.
S306, acquiring an object model corresponding to the keyword from a pre-stored object model library.
When the object model corresponding to a keyword is obtained from the pre-established object model library, if one keyword corresponds to at least two object models in the pre-stored object model library, one of the at least two object models may be selected arbitrarily as the object model corresponding to the keyword, or the at least two object models may be averaged to obtain the object model corresponding to the keyword. Of course, if the text information places further restrictions on the object model, the object model matching the text information may be selected from the at least two object models.
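A hedged sketch of selecting among several candidate models when the text information adds further restrictions: score each candidate by how many attribute constraints it satisfies and keep the best match. pick_model and the attributes dictionary are hypothetical, reusing the ObjectModel sketch above.

```python
def pick_model(candidates, constraints):
    """candidates: list of ObjectModel; constraints: dict such as {"age": 10, "height_cm": 145}.
    Returns the candidate satisfying the most constraints (first one on ties), or None."""
    def score(model):
        return sum(1 for k, v in constraints.items() if model.attributes.get(k) == v)
    return max(candidates, key=score) if candidates else None
```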
For example, after the keywords related to the person are extracted in S303 (height, 145 cm, weight, 40 kg, 10 years old, girl, wearing, skirt, walking, seeing, holding, gift, running) and the object model library is established in S305, two models may be searched for and obtained in the pre-established object model library according to these keywords: a model of a 10-year-old girl, 145 cm tall and 40 kg, wearing a skirt and walking, and a model of the same girl holding a gift and running. After the two models are obtained separately, they may be processed in combination with the text information to obtain the limb motion information of the girl model; see S307 below:
S307, processing the object model according to the expression information to obtain limb action information of the object model.
After the object model corresponding to the keyword is acquired from the pre-stored object model library through S306, the object model may be combined with the expression information, thereby obtaining the limb motion information of the object model. Compared with the prior art, in which the character models corresponding to individual limb actions must be combined manually to obtain a complete set of limb-action changes of a character model, this improves the efficiency of acquiring character limb-action changes in a three-dimensional scene on the basis of realizing automatic construction of the three-dimensional scene.
For example, after the model of the girl wearing a skirt and walking and the model of the girl holding a gift and running are found and obtained in the pre-stored object model library, it can be determined from the text information ("walks around, suddenly sees her mother, and then, holding a gift, runs to her mother") that the girl first performs a walking action and then performs a running action. The two models can therefore be combined in sequence, thereby obtaining the limb-action change information of the girl model. Compared with the prior art, in which a complete set of limb-action changes of the girl could only be obtained by manually combining the girl models corresponding to each limb action, this improves the efficiency of acquiring the girl's limb-action changes in the three-dimensional scene on the basis of realizing automatic construction of the three-dimensional scene.
Fig. 4 is a schematic structural diagram of a device 40 for generating character actions according to an embodiment of the present invention. Referring to fig. 4, the device 40 for generating character actions may be applied to virtual reality and/or augmented reality, and may include:
the collecting unit 401, used for collecting the expression information input by the user; wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person.
An obtaining unit 402, configured to extract the keywords related to the person from the expression information.
The obtaining unit 402 is further configured to obtain an object model corresponding to the keyword from a pre-stored object model library.
The processing unit 403 is configured to process the object model according to the expression information, so as to obtain limb motion information of the object model.
Optionally, the device 40 for generating character actions may further include an establishing unit 404, as shown in fig. 5; fig. 5 is a schematic structural diagram of another device for generating character actions according to an embodiment of the present invention.
The collecting unit 401 is further configured to collect keywords related to the person, and collect object models corresponding to the keywords.
And the establishing unit 404 is configured to establish an object model library, where the object model library includes an association relationship between the keyword and the object model.
Optionally, the collecting unit 401 is specifically configured to collect a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword; and training and clustering a plurality of different object models serving as training samples to obtain the object model corresponding to the keyword.
Optionally, the collecting unit 401 is specifically configured to collect text information input by a user.
Correspondingly, the obtaining unit 402 is specifically configured to perform word segmentation on the text information according to the semantic model to obtain phrases, and to extract the keywords related to the person from the phrases.
Optionally, the collecting unit 401 is specifically configured to collect voice information input by a user.
Correspondingly, the obtaining unit 402 is specifically configured to perform voice recognition on the voice information to obtain text information, perform word segmentation on the text information according to the semantic model to obtain phrases, and extract the keywords related to the person from the phrases.
The device 40 for generating a character action according to the embodiment of the present invention may execute the technical scheme of the method for generating a character action according to any of the embodiments described above, and its implementation principle and beneficial effects are similar, and will not be described herein.
Fig. 6 is a schematic structural diagram of a terminal device 60 according to an embodiment of the present invention. Referring to fig. 6, the terminal device 60 may include a processor 601 and a memory 602, wherein:
the memory 602 is used to store program instructions.
The processor 601 is configured to read the program instructions in the memory 602, and execute the method for generating the character action according to any of the above embodiments according to the program instructions in the memory 602.
The terminal device 60 in the embodiment of the present invention may execute the technical scheme of the method for generating the action of the person in any of the embodiments described above, and its implementation principle and beneficial effects are similar, and will not be described herein.
The embodiment of the present invention also provides a computer readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method for generating the action of the person shown in any of the foregoing embodiments is executed, and its implementation principle and beneficial effects are similar, and will not be repeated here.
The processor in the above embodiments may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor, etc. The steps of the methods disclosed in connection with the embodiments of the present invention may be embodied directly as being executed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory or electrically erasable programmable memory, registers, and the like. The storage medium is located in the memory, and the processor reads the instructions from the memory and, in combination with its hardware, performs the steps of the methods described above.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method of generating a character action, the method for virtual reality and/or augmented reality, the method comprising:
acquiring expression information input by a user, wherein the expression information is text information; wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person;
performing word segmentation on the text information according to a semantic model to obtain phrases;
extracting the keywords related to the person from the phrases;
obtaining an object model corresponding to the keyword from a pre-stored object model library;
processing the object model corresponding to the keyword according to the expression information to obtain limb action information of the character;
the obtaining the object model corresponding to the keyword in the pre-stored object model library comprises the following steps:
obtaining object models corresponding to keywords related to the limb actions of the characters from a pre-stored object model library;
the processing the object model corresponding to the keyword according to the expression information to obtain the limb action information of the character comprises the following steps:
and combining object models corresponding to the keywords related to the limb actions of the person to obtain the limb action change information of the person.
2. The method according to claim 1, wherein before the object model corresponding to the keyword is acquired from the pre-stored object model library, the method further comprises:
collecting keywords related to the person, and collecting object models corresponding to the keywords;
and establishing an object model library, wherein the object model library comprises association relations between keywords and object models.
3. The method of claim 2, wherein the collecting object models corresponding to the keywords comprises:
collecting a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword;
and training and clustering the plurality of different object models as training samples to obtain the object model corresponding to the keyword.
4. A method according to any one of claims 1 to 3, wherein the obtaining the expression information input by the user comprises:
and collecting the text information input by the user.
5. A method according to any one of claims 1 to 3, wherein the obtaining the expression information input by the user comprises:
collecting voice information input by a user;
and carrying out voice recognition on the voice information to obtain the text information.
6. A generation apparatus of character actions, wherein the apparatus is for virtual reality and/or augmented reality, the apparatus comprising:
the acquisition unit is used for acquiring expression information input by a user, wherein the expression information is text information; wherein the expression information comprises keywords related to the person, and the keywords related to the person comprise keywords related to the limb actions of the person;
the obtaining unit is used for performing word segmentation on the text information according to the semantic model to obtain phrases, and extracting the keywords related to the person from the phrases;
the obtaining unit is further used for obtaining an object model corresponding to the keyword from a pre-stored object model library;
the processing unit is used for processing the object model corresponding to the keyword according to the expression information to obtain limb action information of the character;
the obtaining unit is specifically further used for obtaining, from a pre-stored object model library, each object model corresponding to the keywords related to the limb actions of the person;
the processing unit is specifically configured to combine object models corresponding to keywords related to the limb actions of the person to obtain limb action change information of the person.
7. The apparatus of claim 6, further comprising a setup unit;
the acquisition unit is further used for collecting keywords related to the person and collecting object models corresponding to the keywords;
the establishing unit is used for establishing an object model library, and the object model library contains the association relation between the keywords and the object model.
8. The apparatus of claim 7, wherein,
the acquisition unit is specifically used for collecting a plurality of different object models when a plurality of users express limb actions corresponding to the same keyword; and training and clustering the plurality of different object models as training samples to obtain the object model corresponding to the keyword.
9. The device according to any one of claims 6 to 8, wherein,
the acquisition unit is specifically used for acquiring the text information input by the user.
10. The device according to any one of claims 6 to 8, wherein,
the acquisition unit is specifically used for acquiring voice information input by a user;
correspondingly, the obtaining unit is specifically configured to perform voice recognition on the voice information to obtain the text information.
11. A terminal device comprising a processor and a memory, wherein,
the memory is used for storing program instructions;
the processor is configured to read the program instructions in the memory, and execute the method for generating the character action according to any one of claims 1 to 5 according to the program instructions in the memory.
12. A computer-readable storage medium, wherein
the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the method for generating a character action according to any one of claims 1 to 5.
CN201810720342.9A 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment Active CN108986191B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810720342.9A CN108986191B (en) 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810720342.9A CN108986191B (en) 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment

Publications (2)

Publication Number Publication Date
CN108986191A CN108986191A (en) 2018-12-11
CN108986191B true CN108986191B (en) 2023-06-27

Family

ID=64536039

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810720342.9A Active CN108986191B (en) 2018-07-03 2018-07-03 Character action generation method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN108986191B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918509B (en) * 2019-03-12 2021-07-23 明白四达(海南经济特区)科技有限公司 Scene generation method based on information extraction and storage medium of scene generation system
CN113313792A (en) * 2021-05-21 2021-08-27 广州幻境科技有限公司 Animation video production method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010105216A2 (en) * 2009-03-13 2010-09-16 Invention Machine Corporation System and method for automatic semantic labeling of natural language texts
CN104268166A (en) * 2014-09-09 2015-01-07 北京搜狗科技发展有限公司 Input method, device and electronic device
CN104317389A (en) * 2014-09-23 2015-01-28 广东小天才科技有限公司 Method and device for identifying character role through action
CN106710590A (en) * 2017-02-24 2017-05-24 广州幻境科技有限公司 Voice interaction system with emotional function based on virtual reality environment and method
CN106951881A (en) * 2017-03-30 2017-07-14 成都创想空间文化传播有限公司 A kind of three-dimensional scenic rendering method, apparatus and system
CN107272884A (en) * 2017-05-09 2017-10-20 聂懋远 A kind of control method and its control system based on virtual reality technology
CN108170278A (en) * 2018-01-09 2018-06-15 三星电子(中国)研发中心 Link up householder method and device

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2828572A1 (en) * 2001-08-13 2003-02-14 Olivier Cordoleani Method for creating a virtual three-dimensional person representing a real person in which a database of geometries, textures, expression, etc. is created with a motor then used to manage movement and expressions of the 3-D person
CN101482976B (en) * 2009-01-19 2010-10-27 腾讯科技(深圳)有限公司 Method for driving change of lip shape by voice, method and apparatus for acquiring lip cartoon
CN102903142A (en) * 2012-10-18 2013-01-30 天津戛唛影视动漫文化传播有限公司 Method for realizing three-dimensional augmented reality
CN103646425A (en) * 2013-11-20 2014-03-19 深圳先进技术研究院 A method and a system for body feeling interaction
CN104461215A (en) * 2014-11-12 2015-03-25 深圳市东信时代信息技术有限公司 Augmented reality system and method based on virtual augmentation technology
CN104866308A (en) * 2015-05-18 2015-08-26 百度在线网络技术(北京)有限公司 Scenario image generation method and apparatus
CN105551084B (en) * 2016-01-28 2018-06-08 北京航空航天大学 A kind of outdoor three-dimensional scenic combination construction method of image content-based parsing
CN205721630U (en) * 2016-04-26 2016-11-23 西安智道科技有限责任公司 A kind of new media promotes the man-machine interaction structure of machine
CN107316343B (en) * 2016-04-26 2020-04-07 腾讯科技(深圳)有限公司 Model processing method and device based on data driving
US10579940B2 (en) * 2016-08-18 2020-03-03 International Business Machines Corporation Joint embedding of corpus pairs for domain mapping
CN106951095A (en) * 2017-04-07 2017-07-14 胡轩阁 Virtual reality interactive approach and system based on 3-D scanning technology

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010105216A2 (en) * 2009-03-13 2010-09-16 Invention Machine Corporation System and method for automatic semantic labeling of natural language texts
CN104268166A (en) * 2014-09-09 2015-01-07 北京搜狗科技发展有限公司 Input method, device and electronic device
CN104317389A (en) * 2014-09-23 2015-01-28 广东小天才科技有限公司 Method and device for identifying character role through action
CN106710590A (en) * 2017-02-24 2017-05-24 广州幻境科技有限公司 Voice interaction system with emotional function based on virtual reality environment and method
CN106951881A (en) * 2017-03-30 2017-07-14 成都创想空间文化传播有限公司 A kind of three-dimensional scenic rendering method, apparatus and system
CN107272884A (en) * 2017-05-09 2017-10-20 聂懋远 A kind of control method and its control system based on virtual reality technology
CN108170278A (en) * 2018-01-09 2018-06-15 三星电子(中国)研发中心 Link up householder method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
一种自然语言关键词的人机交互方法 [A human-computer interaction method using natural language keywords]; 赵宇婧; 许鑫泽; 朱齐丹; 张智; 应用科技 (Applied Science and Technology); Vol. 43, No. 6; pp. 1-6 *

Also Published As

Publication number Publication date
CN108986191A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
US11790589B1 (en) System and method for creating avatars or animated sequences using human body features extracted from a still image
CN111833418B (en) Animation interaction method, device, equipment and storage medium
CN111680562A (en) Human body posture identification method and device based on skeleton key points, storage medium and terminal
CN108491486B (en) Method, device, terminal equipment and storage medium for simulating patient inquiry dialogue
CN108345385A (en) Virtual accompany runs the method and device that personage establishes and interacts
CN109815776B (en) Action prompting method and device, storage medium and electronic device
CN111414506B (en) Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium
CN106527695B (en) A kind of information output method and device
CN111652987A (en) Method and device for generating AR group photo image
CN114241558B (en) Model training method, video generating method and device, equipment and medium
CN109343695A (en) Exchange method and system based on visual human's behavioral standard
CN108986191B (en) Character action generation method and device and terminal equipment
CN111191503A (en) Pedestrian attribute identification method and device, storage medium and terminal
CN110314344A (en) Move based reminding method, apparatus and system
CN111447379B (en) Method and device for generating information
WO2024066549A1 (en) Data processing method and related device
CN108961431A (en) Generation method, device and the terminal device of facial expression
CN108932336A (en) Information recommendation method, electric terminal and computer readable storage medium message
CN114510942A (en) Method for acquiring entity words, and method, device and equipment for training model
CN115116085A (en) Image identification method, device and equipment for target attribute and storage medium
CN112333464B (en) Interactive data generation method and device and computer storage medium
KR102660366B1 (en) Sign language assembly device and operation method thereof
CN116152900B (en) Expression information acquisition method and device, computer equipment and storage medium
CN116955835B (en) Resource screening method, device, computer equipment and storage medium
CN115705628A (en) Image processing method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant