CN116882372A - Text generation method, device, electronic equipment and storage medium - Google Patents
- Publication number
- CN116882372A (application number CN202310876300.5A)
- Authority
- CN
- China
- Prior art keywords
- text
- candidate
- subject
- abstract
- knowledge base
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F40/166 — Handling natural language data; text processing; editing, e.g. inserting or deleting
- G06F16/31 — Information retrieval of unstructured textual data; indexing; data structures and storage structures therefor
- G06F16/3329 — Querying; natural language query formulation or dialogue systems
- G06F16/3346 — Query execution using probabilistic model
- G06F40/284 — Lexical analysis, e.g. tokenisation or collocates
- G06F40/30 — Semantic analysis
Abstract
The disclosure provides a text generation method and apparatus, an electronic device, and a storage medium, relating to the technical field of artificial intelligence, and in particular to natural language processing, deep learning, and large language models. The text generation method comprises the following implementation scheme: segmenting a question text into a plurality of subject words according to the semantic relatedness between adjacent words in the question text; obtaining reference texts corresponding to the plurality of subject words by querying a target knowledge base with the subject words, wherein the target knowledge base is constructed from historical viewpoint content published by a target object; and generating, from the question text and the reference texts, a reply text for replying to the question text.
Description
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to natural language processing, deep learning, and large language models, and specifically to a text generation method, a text generation apparatus, an electronic device, and a storage medium.
Background
An LLM (Large Language Model) is a generative model based on machine learning and natural language processing techniques that acquires language understanding and generation abilities serving humans by pre-training on large amounts of text data.
For different application scenarios, a large language model can be fine-tuned, or its prompt input adjusted, so that the model acquires the natural-language understanding and generation capabilities required by that scenario.
Disclosure of Invention
The disclosure provides a text generation method, a text generation device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a text generation method including: segmenting a question text into a plurality of subject words according to the semantic relatedness between adjacent words in the question text; obtaining reference texts corresponding to the plurality of subject words by querying a target knowledge base with the subject words, wherein the target knowledge base is constructed from historical viewpoint content published by a target object; and generating, from the question text and the reference texts, a reply text for replying to the question text.
According to another aspect of the present disclosure, there is provided a text generating apparatus including: a word segmentation module, a query module, and a first generation module. The word segmentation module is configured to segment the question text into a plurality of subject words according to the semantic relatedness between adjacent words in the question text. The query module is configured to obtain reference texts corresponding to the plurality of subject words by querying a target knowledge base with the subject words, wherein the target knowledge base is constructed from historical viewpoint content published by a target object. The first generation module is configured to generate, from the question text and the reference texts, a reply text for replying to the question text.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method as above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as above.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 schematically illustrates an exemplary system architecture to which text generation methods and apparatus may be applied, according to embodiments of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a text generation method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of a text generation method according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of knowledge extraction from historical perspective content of a target object in accordance with an embodiment of the disclosure;
FIG. 5 schematically illustrates a schematic diagram of a knowledge search based on subject matter, in accordance with an embodiment of the present disclosure;
FIG. 6 schematically illustrates a schematic diagram of a text generation method according to another embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of a text generating apparatus according to an embodiment of the present disclosure; and
fig. 8 schematically illustrates a block diagram of an electronic device adapted to implement a text generation method according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
On a public network platform, users can post questions in the comment area of content published by a creator. When the number of questions awaiting reply in the comment area is large, manually replying to each comment is inefficient for the creator.

To improve comment-reply efficiency, related technologies call a generative large language model (LLM) and use its question-answering capability to reply to questions in the comment area automatically.

However, reply content generated directly by a large language model is generic: it lacks the creator's personalized style and cannot clearly express the creator's personal views, so it can hardly satisfy the user's need to interact with the creator.
In view of this, an embodiment of the present disclosure provides a text generation method that obtains reference text corresponding to a question by querying a target knowledge base and generates a reply text from the question text and the reference text. Because the target knowledge base is constructed from the historical viewpoint content published by the creator, it aggregates the creator's personal views; reply texts in the creator's personalized style can therefore be generated, satisfying the user's need to interact with the creator and improving the user experience.
Fig. 1 schematically illustrates an exemplary system architecture to which text generation methods and apparatus may be applied, according to embodiments of the present disclosure.
It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios. For example, in another embodiment, an exemplary system architecture to which the text generation method and apparatus may be applied may include a terminal device, but the terminal device may implement the text generation method and apparatus provided by the embodiments of the present disclosure without interacting with a server.
As shown in fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as a knowledge reading class application, a web browser application, a search class application, an instant messaging tool, a mailbox client and/or social platform software, etc. (as examples only).
The terminal devices 101, 102, 103 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for content browsed by the user using the terminal devices 101, 102, 103. The background management server may analyze and process the received data such as the user request, and feed back the processing result (e.g., the web page, information, or data obtained or generated according to the user request) to the terminal device.
Note that, the text generation method provided by the embodiment of the present disclosure may be generally performed by the terminal device 101, 102, or 103. Accordingly, the text generating apparatus provided by the embodiments of the present disclosure may also be provided in the terminal device 101, 102, or 103.
Alternatively, the text generation method provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the text generating apparatus provided by the embodiments of the present disclosure may be generally provided in the server 105. The text generation method provided by the embodiments of the present disclosure may also be performed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the text generating apparatus provided by the embodiments of the present disclosure may also be provided in a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
For example, when a user posts a question in a comment area, the terminal devices 101, 102, 103 may acquire the question text of the comment area and the creator information of the comment area, then send the acquired question text and creator information to the server 105. The server 105 segments the question text to obtain a plurality of subject words; determines a target knowledge base according to the creator information; obtains reference texts corresponding to the plurality of subject words by querying the target knowledge base with the subject words; and generates, from the question text and the reference texts, a reply text for replying to the question text. Alternatively, these operations may be performed by a server or server cluster capable of communicating with the terminal devices 101, 102, 103 and/or the server 105, ultimately generating a reply text in the creator's personal style.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
In the technical scheme of the disclosure, the related processes of collecting, storing, using, processing, transmitting, providing, disclosing, applying and the like of the personal information of the user all conform to the regulations of related laws and regulations, necessary security measures are adopted, and the public order harmony is not violated.
In the technical scheme of the disclosure, the authorization or consent of the user is obtained before the personal information of the user is obtained or acquired.
Fig. 2 schematically illustrates a flow chart of a text generation method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S230.
In operation S210, the question text is segmented according to the semantic relatedness between adjacent words in the question text, obtaining a plurality of subject words.
In operation S220, reference texts corresponding to the plurality of subject words are obtained by querying the target knowledge base according to the plurality of subject words.
In operation S230, a reply text for replying to the question text is generated according to the question text and the reference text.
According to embodiments of the present disclosure, the question text may be question content that a user posts in the creator's comment area. Segmenting the question text according to the semantic relatedness between adjacent words preserves the semantic integrity of each subject word obtained after segmentation.
For example: the question text may be "when is the teacher's examination score publication time in the first province, second province? "after word segmentation according to semantic relativity between adjacent words, the obtained subject word may include: "A-B district", "Master Sheng", "examination result", "publication time".
According to the embodiment of the disclosure, segmenting according to the semantic relatedness between adjacent words effectively avoids invalid search content. For example, if the place name "A province B district" were split into two subject words "A province" and "B district", the search results might contain many score-publication times for A province as a whole that are irrelevant to the user's question.
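The merging idea above can be sketched as follows. This is a minimal illustration, not the disclosure's method: `merge_by_relatedness`, the token list, and the toy relatedness scores are hypothetical stand-ins, since the disclosure does not specify how relatedness between adjacent words is computed.

```python
def merge_by_relatedness(tokens, relatedness, threshold=0.5):
    """Merge adjacent tokens into subject words using pairwise relatedness.

    relatedness[i] scores the link between tokens[i] and tokens[i + 1];
    strongly related neighbours are glued into one subject word.
    """
    subject_words = [tokens[0]]
    for i, token in enumerate(tokens[1:]):
        if relatedness[i] >= threshold:
            subject_words[-1] += " " + token   # glue to the previous word
        else:
            subject_words.append(token)        # start a new subject word
    return subject_words

tokens = ["A province", "B district", "teacher recruitment",
          "exam scores", "publication time"]
scores = [0.9, 0.2, 0.3, 0.1]  # only the place-name pair is strongly related
print(merge_by_relatedness(tokens, scores))
# ['A province B district', 'teacher recruitment', 'exam scores', 'publication time']
```

With a high relatedness score between the two halves of the place name, "A province B district" survives as one subject word instead of being split into two less useful query terms.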
According to an embodiment of the present disclosure, the target knowledge base is constructed from historical viewpoint content published by the target object. The target object may be the author of the reply text, for example, the blogger of the comment area where the user posts the question, or an object mentioned in the user's question text who is expected to answer the question. For example, in "What are the application requirements for the teacher-recruitment exam in B district, A province? @Teacher A", Teacher A mentioned in the question text may be the target object.
According to an embodiment of the present disclosure, the historical viewpoint content that the target object has published may be viewpoint content that the target object has published in various manners such as text, image, video, audio, and the like.
For example: the video content may be: "career class for a teacher", the audio content may be: the picture and text content of the 'A teacher' can be an article with combined pictures and texts which are published by the A teacher on the self-media public number.
It should be noted that, in the embodiments of the present disclosure, after the authorization of the creator is obtained, the historical viewpoint content of the creator is obtained to construct the target knowledge base, which accords with the requirements of the related law.
According to the embodiment of the disclosure, reference text that can be used to answer the question is obtained by querying the target knowledge base with the plurality of subject words. Since the target knowledge base aggregates the creator's historical viewpoint content, the reference text is also content text carrying the creator's personalized views.
According to embodiments of the present disclosure, the question text, the reference text, and a text generation instruction may be assembled into a prompt and input into a large language model (LLM) to generate a reply text for replying to the question text.
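The assembly step can be sketched as below. The function name, the instruction wording, and the prompt template are all illustrative assumptions; the disclosure does not fix a particular prompt format.

```python
def build_prompt(question, references,
                 instruction="Answer the question in the creator's voice, "
                             "using only the reference material below."):
    """Assemble instruction, reference texts, and question into one prompt."""
    ref_block = "\n".join(f"[{i + 1}] {ref}" for i, ref in enumerate(references))
    return f"{instruction}\n\nReferences:\n{ref_block}\n\nQuestion: {question}\nReply:"

prompt = build_prompt(
    "When will the exam scores be published?",
    ["Scores are usually released in mid-June.",
     "Check the provincial education portal for updates."],
)
print(prompt)
```

The resulting string is what would be sent to the large language model as its prompt input.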
In accordance with embodiments of the present disclosure, when generating a reply text, a large language model may randomly add boilerplate prefix or suffix phrases around the reply, for example "reply =". The text generated by the model can therefore be parsed, and the randomly added prefixes and suffixes deleted, to obtain the final reply text.
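A minimal parsing pass along these lines might look as follows; the specific prefix patterns are placeholder assumptions, since the boilerplate a model emits varies by model and prompt.

```python
import re

# Patterns a model might prepend, e.g. 'reply=' or 'Answer:'; purely illustrative.
PREFIX = re.compile(r'^\s*(reply\s*=|answer\s*[:=])\s*', re.IGNORECASE)

def clean_reply(raw):
    """Strip randomly added prefix/suffix boilerplate from model output."""
    text = PREFIX.sub("", raw.strip(), count=1)
    return text.strip(' "')

print(clean_reply('reply= "Scores come out in mid-June."'))
# Scores come out in mid-June.
```

A production system would maintain the pattern list empirically, based on the boilerplate actually observed in model outputs.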
According to the embodiment of the disclosure, a reference text corresponding to a question is obtained by querying a target knowledge base, and a reply text is generated based on the question text and the reference text. Because the target knowledge base is constructed according to the historical viewpoint content published by the creator, the personal viewpoints of the creator are collected, so that reply texts with personalized styles of the creator can be generated, the interaction requirement of users and the creator is met, and the user experience is improved.
In actual application scenarios, the quality of content in the comment area is uneven; for example, a comment may carry negative emotion, lack a clear questioning intent, or be semantically incomplete.
For comments containing negative emotion or lacking a clear questioning intent, the comment-area content can be screened by constructing a blacklist vocabulary, regular expressions for invalid sentence patterns, and the like.
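A screening pass of this kind might be sketched as below; the blacklist entries and regular expressions are placeholder examples, not the disclosure's actual rules, which would be curated per platform.

```python
import re

BLACKLIST = {"scam", "idiot"}  # placeholder entries; a real list is curated
INVALID_PATTERNS = [
    re.compile(r"[^\w\s]+"),              # punctuation-only comments
    re.compile(r"(ha)+", re.IGNORECASE),  # contentless laughter
]

def is_valid_comment(text):
    """Screen out negative or intent-free comments before reply generation."""
    t = text.strip()
    if not t or any(word in t.lower() for word in BLACKLIST):
        return False
    return not any(p.fullmatch(t) for p in INVALID_PATTERNS)

print(is_valid_comment("When are scores published?"))  # True
print(is_valid_comment("hahaha"))                      # False
```

Only comments passing this filter would proceed to question completion and reply generation.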
For comments whose question content is semantically incomplete, the context information of the question text can be obtained; the question text is then supplemented with information from its context to generate a semantically complete question text.
For example: the comment content under the content explaining "provincial students report the examination conditions" may be "is only the region of the entertaining the third? The question text may be "only in the region of the question," and obviously the semantics of the question are not complete and the user's actual question intent cannot be expressed based on the current question. Therefore, it is possible to acquire the context information "the examination-proving teacher's examination report condition" of the problem, and to "only the region of the third of the region is recruited" in combination with the context information "the examination-proving teacher's examination report condition? Is information supplementation performed and a semantically complete question text generated, and is only a region of provincial students to entertain the region of the provincial students? "
According to embodiments of the present disclosure, the question text and the context information may be used as inputs to a pre-trained LLM model to generate a semantically complete question text.
According to the embodiment of the disclosure, supplementing the question text based on its context improves the semantic integrity of the question text, captures the user's real questioning intent, and improves the relevance of the reply text to the question text.
Fig. 3 schematically illustrates a schematic diagram of a text generation method according to an embodiment of the present disclosure.
As shown in fig. 3, in embodiment 300, the context information and question 311 may be augmented to obtain a semantically complete question 312. The semantically complete question 312 is segmented to obtain subject words 313. Based on the subject words 313, the knowledge 326 most relevant to the question is obtained by querying the target knowledge base 325. The semantically complete question 312 and the most relevant knowledge 326 are input as a prompt into a large language model, generating the reply text 327.
The target knowledge base 325 is constructed from content 321 published by the authorized creator. The published content 321 is converted into historical text 322 through multimodal recognition; knowledge is extracted from the historical text to obtain paragraph text-paragraph abstract pairs 323_1 and, in question-and-answer form, chapter topic-chapter abstract pairs 323_2. Retrieval features are then extracted to obtain retrieval feature-knowledge pairs 324, and the extracted knowledge and retrieval features are stored to obtain the target knowledge base 325.
According to the embodiment of the disclosure, the historical viewpoint content published by the target object may come in various formats such as video, audio, and image-and-text, and can be processed by multimodal recognition technologies, for example OCR (Optical Character Recognition) and ASR (Automatic Speech Recognition), which convert historical viewpoint content in different formats into text.
Because of the input-length limit of a large language model, when the target knowledge base is introduced to augment the model's knowledge, the creator's historical viewpoint content must be effectively extracted and compressed, while losing as little viewpoint knowledge as possible during extraction and compression.
For example: according to paragraph features of the history text published by the target object, segmenting the history text to obtain a plurality of text paragraphs; extracting the subject content of a plurality of text paragraphs to obtain a plurality of paragraph summaries corresponding to the text paragraphs; and constructing a target knowledge base according to the plurality of paragraph summaries and the plurality of text paragraphs.
According to the embodiment of the disclosure, since each natural paragraph of an article usually elaborates one core topic, the historical text can be segmented along its original natural paragraphs to obtain a plurality of text paragraphs, from each of which a paragraph abstract is extracted. The paragraph abstract characterizes the core topic of the paragraph text.
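The paragraph-level extraction can be sketched as below. The `summarize` stand-in is a deliberate simplification: the disclosure would use a real abstractive summarizer (e.g. an LLM call), and the blank-line paragraph split is likewise an assumption.

```python
def split_paragraphs(history_text):
    """Split on blank lines, treated here as natural-paragraph boundaries."""
    return [p.strip() for p in history_text.split("\n\n") if p.strip()]

def summarize(paragraph):
    """Stand-in summarizer: a real system would call an LLM or similar here."""
    return paragraph.split(".")[0] + "."

def build_paragraph_knowledge(history_text):
    """Pair each text paragraph with its paragraph abstract."""
    return [{"summary": summarize(p), "text": p}
            for p in split_paragraphs(history_text)]

history_text = "Confucian view. Detail follows.\n\nBuddhist view. More detail."
knowledge = build_paragraph_knowledge(history_text)
print(knowledge[0]["summary"])  # Confucian view.
```

Each summary-text pair is then a candidate knowledge entry for the target knowledge base.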
For an article, the subject ideas of the article and the abstract content of the article can also be extracted in a question-and-answer form.
For example: extracting the topic content of the historical text to obtain a text abstract corresponding to the historical text; generating, from the text abstract, a question text corresponding to it; and constructing the target knowledge base from the question text and its corresponding text abstract.
Fig. 4 schematically illustrates a schematic diagram of knowledge extraction from historical perspective content of a target object according to an embodiment of the disclosure.
As shown in fig. 4, in embodiment 400, the historical viewpoint content published by the creator may include image-and-text content, "What realms does the story of 'the man of Chu who lost his bow' embody?" 441, and audio/video content in which Teacher A tells the story of 'the man of Chu who lost his bow'. The historical viewpoint content is converted into a historical text 443 through multimodal recognition.
The result of knowledge extraction by paragraph is shown at 444. Paragraph abstract: "Confucianism: a person can cultivate the self and govern the state." Paragraph text: "01 First, we discuss Confucianism: Confucius's evaluation of the King of Chu in this matter."
The result of knowledge extraction by chapter is shown at 445. Question: "Which realms of Confucianism, Buddhism, and Daoism does the story of 'the man of Chu who lost his bow' embody?" Answer: "The story embodies realms of all three schools: the Confucian ideal of cultivating the self, regulating the family, governing the state, and bringing peace to all under heaven; the Buddhist realm of emptiness; and the Daoist notion of the equality of all things."
According to the embodiment of the disclosure, knowledge extraction by paragraph minimizes knowledge loss during extraction and retains the creator's views in greater detail, while knowledge extraction by chapter compresses the knowledge as much as possible and reduces text length.
Because paragraph-level and chapter-level knowledge extraction each have advantages and disadvantages, the results of both can be stored in the creator's knowledge base, and different weights can be configured for the knowledge obtained by the two extraction modes to meet the retrieval-precision requirements of different application scenarios.
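The per-granularity weighting can be sketched as below; the entry layout, weight values, and `rank_knowledge` helper are illustrative assumptions, since the disclosure only states that weights differ by extraction mode.

```python
def rank_knowledge(entries, weights):
    """Rank retrieved knowledge entries, weighting by extraction granularity.

    entries: list of (match_score, granularity, text) tuples;
    weights: per-granularity multipliers, tuned per application scenario.
    """
    return sorted(entries, key=lambda e: e[0] * weights[e[1]], reverse=True)

entries = [
    (0.8, "chapter", "compressed chapter-level summary"),
    (0.7, "paragraph", "detailed paragraph-level view"),
]
# A precision-oriented scenario can favour detailed paragraph knowledge:
weights = {"paragraph": 1.0, "chapter": 0.8}
print(rank_knowledge(entries, weights)[0][2])  # detailed paragraph-level view
```

Raising the chapter weight instead would favour shorter, more compressed knowledge when prompt length is the binding constraint.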
According to an embodiment of the present disclosure, the target knowledge base may include a first index knowledge base and a second index knowledge base. The first index knowledge base may include index relationships between abstracts of candidate texts and the candidate texts; for example, the first index knowledge base may be an inverted index base, whose index may be a mapping from the abstract of a candidate text to the identification code of the candidate text. The second index knowledge base may include index relationships between semantic features of the abstracts of candidate texts and the candidate texts; for example, the second index knowledge base may be a semantic vector base, whose index may be a mapping from the semantic feature vector of the abstract of a candidate text to the identification code of the candidate text.
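The two index bases described above can be sketched as follows. This is an illustrative assumption in Python, not the patent's actual implementation: `build_inverted_index` maps each word of a candidate abstract to the identification codes of the candidates containing it, and `build_semantic_index` stores, per identification code, the semantic feature of the abstract produced by a caller-supplied `embed` function.

```python
def build_inverted_index(candidates):
    """First index base (inverted index): map each word appearing in a
    candidate's abstract to the set of candidate identification codes."""
    index = {}
    for cid, entry in candidates.items():
        for word in entry["abstract"].split():
            index.setdefault(word, set()).add(cid)
    return index


def build_semantic_index(candidates, embed):
    """Second index base (semantic vector base): associate each candidate's
    identification code with the semantic feature vector of its abstract.
    `embed` is a placeholder for whatever feature extractor is used."""
    return {cid: embed(entry["abstract"]) for cid, entry in candidates.items()}
```

A lookup against the inverted index then returns candidate identification codes directly, while the semantic index supports nearest-neighbour search over the stored vectors.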
According to an embodiment of the present disclosure, obtaining the reference text corresponding to the plurality of subject words by querying the target knowledge base according to the plurality of subject words may include the following operations: obtaining first candidate texts by querying the first index knowledge base according to the plurality of subject words; obtaining second candidate texts by querying the second index knowledge base according to the plurality of subject words; and obtaining the reference text from the first candidate texts and the second candidate texts based on the semantic relatedness of the first candidate texts, the second candidate texts, and the plurality of subject words.
According to an embodiment of the present disclosure, for retrieval based on the first index knowledge base, the retrieval results may be screened by setting a recall condition. For retrieval based on the second index knowledge base, the retrieval results may be forward-ordered and then screened by feature.
According to an embodiment of the present disclosure, the recall condition of the first index knowledge base may be a probability of matching a subject word.
For example: matching a plurality of subject words with abstracts of a plurality of candidate texts of a first index knowledge base to obtain a first matching probability; obtaining a first target abstract from abstracts of the plurality of candidate texts based on the first matching probability; and obtaining the first candidate text based on the association relation between the abstract of the candidate text and the candidate text according to the first target abstract.
For example: the plurality of subject words may be "place A and place B", "teacher A's students", "examination time", and "published". For an abstract of a candidate text including only one of these subject words, the first matching probability may be 25%; for an abstract including two of the subject words, the first matching probability may be 50%; and so on. A first matching probability threshold may be set, for example 75%, and the abstracts of candidate texts whose first matching probability is greater than 75% are taken as first target abstracts. The first candidate texts are then obtained according to the index relationship between abstracts and texts.
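The recall step in the worked example above can be sketched as follows; the function names and the 75% threshold are taken from the example, and the substring-based matching is an illustrative assumption.

```python
def first_match_probability(subject_words, abstract):
    """First matching probability: the fraction of subject words that
    appear in the candidate abstract (e.g. 2 of 4 words -> 50%)."""
    hits = sum(1 for word in subject_words if word in abstract)
    return hits / len(subject_words)


def recall_first_candidates(subject_words, candidates, threshold=0.75):
    """Return identification codes of candidates whose abstract's first
    matching probability exceeds the recall threshold."""
    return [cid for cid, entry in sorted(candidates.items())
            if first_match_probability(subject_words, entry["abstract"]) > threshold]
```

Only candidates clearing the threshold proceed to the abstract-to-text index lookup.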
According to the embodiment of the disclosure, candidate texts are screened by matching subject words against candidate text abstracts, so the coverage of the retrieval results is comprehensive and the miss rate is low.
The semantic relevance between the reference text and the question determines the semantic relevance between the finally generated reply text and the question, so retrieval may also be performed according to the semantic features of the subject words based on the second index knowledge base.
For example: a plurality of semantic features of the plurality of subject words may be extracted; the semantic features of the plurality of subject words are matched with the semantic features of the abstracts of the plurality of candidate texts of the second index knowledge base to obtain a second matching probability; a second target abstract is obtained from the abstracts of the plurality of candidate texts based on the second matching probability; and a second candidate text is obtained according to the second target abstract, based on the association relationship between the abstract of a candidate text and the candidate text.
According to embodiments of the present disclosure, the semantic features of the plurality of subject words may be selectively spliced. For example, if there are 5 subject words in total and the semantic features of 3 of them are randomly selected for splicing, the subject-word ratio is 60%. A similarity is obtained by similarity calculation between the spliced semantic features and the semantic features of the abstract of a candidate text, and the second matching probability is obtained as the product of the subject-word ratio and the similarity. By setting a second matching probability threshold, the abstracts of candidate texts whose second matching probability is greater than the threshold may be determined as second target abstracts, and the second candidate texts are obtained according to the index relationship between the semantic features of abstracts and the texts.
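The second matching probability described above can be sketched as follows. The patent does not fix the splicing operator, so element-wise averaging is an assumption here; cosine similarity stands in for the unspecified similarity calculation.

```python
import math
import random


def cosine_similarity(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def second_match_probability(word_features, abstract_feature, k=3, seed=0):
    """Randomly select k of the subject-word features, splice them
    (here: element-wise average, an assumption), and scale the similarity
    by the subject-word ratio (e.g. 3 of 5 -> 60%)."""
    rng = random.Random(seed)
    chosen = rng.sample(word_features, k)
    ratio = k / len(word_features)
    spliced = [sum(dims) / k for dims in zip(*chosen)]
    return ratio * cosine_similarity(spliced, abstract_feature)
```

With all five word features identical to the abstract feature, the similarity is 1 and the result reduces to the 60% subject-word ratio from the example.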
According to the embodiment of the disclosure, candidate texts are screened by matching the semantic features of subject words against those of candidate text abstracts, so the retrieved second candidate texts have higher semantic relevance to the question text.
Since the retrieval results of the first index knowledge base are comprehensive in coverage, and the retrieval results of the second index knowledge base have high semantic relevance to the question text, the two sets of retrieval results may be mixed and ranked to improve retrieval accuracy.
In addition, in practical application scenarios, different subject words may differ in importance to the question, and subject-word weights may be configured based on this importance.
For example: determining a plurality of weights for a plurality of subject words; obtaining the importance of the subject word according to the subject word in the abstract of the first candidate text, the subject word in the abstract of the second candidate text, the subject word in the question text and a plurality of weights; and ordering the first candidate text and the second candidate text according to the subject term importance degree to obtain a reference text.
According to an embodiment of the present disclosure, the subject-word importance measures the hit rate of a retrieval result from the dimension of subject-word weight.
For example: the subject-word importance may be calculated according to formulas (1) and (2):
Cqr = (sum of the weights of subject words hit by both the candidate text abstract and the question text) / (sum of the weights of subject words in the question text); (1)
Ctr = (sum of the weights of subject words hit by both the candidate text abstract and the question text) / (sum of the weights of subject words in the candidate text abstract). (2)
where Cqr represents the first hit subject-word importance, and Ctr represents the second hit subject-word importance.
According to the embodiment of the disclosure, the importance of the first hit subject term and the importance of the second hit subject term can be weighted and summed to obtain the subject term importance.
According to the embodiment of the disclosure, the first candidate texts and the second candidate texts may be forward-ordered according to the subject-word importance of each candidate text to obtain a ranking result. The reference texts may be the texts ranked in the top 100 of the ranking result.
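Formulas (1) and (2) and the subsequent weighted sum can be transcribed directly; the equal weighting `alpha=0.5` is an illustrative assumption, since the patent only says the two importances are weighted and summed.

```python
def cqr_ctr(question_weights, abstract_weights):
    """Formulas (1) and (2). Both arguments map subject word -> weight;
    'hit' words are those present on both sides."""
    hits = set(question_weights) & set(abstract_weights)
    cqr = sum(question_weights[w] for w in hits) / sum(question_weights.values())
    ctr = sum(abstract_weights[w] for w in hits) / sum(abstract_weights.values())
    return cqr, ctr


def subject_word_importance(cqr, ctr, alpha=0.5):
    """Weighted sum of the first and second hit importances; the 50/50
    split is an assumption for illustration."""
    return alpha * cqr + (1 - alpha) * ctr
```

Candidates are then forward-ordered by this importance and the top texts retained as reference texts.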
According to the embodiment of the disclosure, the text corresponding to the important subject term is screened out from the candidate texts in a targeted manner according to the importance of the subject term, so that the retrieval precision is improved.
In order to further screen out the reference text most relevant to the question text from the candidate texts, the candidate texts may also be screened based on indispensable-word recognition. An indispensable word refers to a word that cannot be absent from the reply text.
For example: matching the preset subject word with the abstract of the first candidate text and the abstract of the second candidate text respectively to obtain a third matching probability; and sorting the first candidate text and the second candidate text according to the subject word importance degree and the third matching probability to obtain a reference text.
According to an embodiment of the present disclosure, the predetermined subject word may be an indispensable word in the candidate text; for example, different predetermined subject words may be set for person names and place names. The third matching probability may characterize whether the abstract of a candidate text includes the indispensable word: it is 1 when the abstract includes the indispensable word, and 0 otherwise.
For example: the indispensable word may be "province A"; the abstract of candidate text T1 may be "the national college entrance examination score publication time is XX month XX day", and the abstract of candidate text T2 may be "the province-A college entrance examination score publication time is YY month YY day". Since the indispensable word is "province A", the abstract of candidate text T1 does not include the word while the abstract of candidate text T2 does, so candidate text T2 may be determined as the reference text.
According to the embodiment of the disclosure, different weights may be configured for the subject-word importance and the third matching probability; the two are weighted and summed based on these weights, and the first candidate texts and the second candidate texts are ranked based on the weighted sum to obtain a ranking result. The reference texts may be the texts ranked in the top 1000 of the ranking result.
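The indispensable-word check and the two-factor ranking can be sketched as follows; the 50/50 weights are an assumption, as the patent leaves the weight configuration open.

```python
def third_match_probability(indispensable_word, abstract):
    """1 when the candidate abstract contains the indispensable word,
    0 otherwise (substring containment is an assumption)."""
    return 1.0 if indispensable_word in abstract else 0.0


def rank_by_importance_and_third(scored, w_imp=0.5, w_third=0.5, top_k=1000):
    """scored: list of (cid, importance, third_probability) tuples.
    Rank by the weighted sum and keep the top-k identification codes."""
    ranked = sorted(scored,
                    key=lambda t: w_imp * t[1] + w_third * t[2],
                    reverse=True)
    return [cid for cid, _, _ in ranked[:top_k]]
```

In the T1/T2 example above, T2 hits the indispensable word and outranks T1 despite a slightly lower importance.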
According to embodiments of the present disclosure, by adding indispensable-word recognition, reference texts more relevant to the question text may be screened out from the candidate texts, thereby generating valuable reply texts.
Since the first index knowledge base is retrieved by matching subject words against candidate text abstracts, the first candidate texts may include texts with low semantic relevance to the question. In order to exclude such texts, the following operations may be adopted: extracting first semantic features of the abstracts of the first candidate texts and second semantic features of the plurality of subject words; matching the first semantic features with the second semantic features to obtain a fourth matching probability; and ranking the first candidate texts and the second candidate texts according to the subject-word importance, the third matching probability, and the fourth matching probability to obtain the reference text.
According to an embodiment of the present disclosure, the second semantic feature may be a semantic feature of all subject words derived from the question text.
According to an embodiment of the present disclosure, the fourth matching probability may be the similarity between the first semantic features and the second semantic features. The similarity algorithm may be any algorithm applicable to calculating text similarity, such as cosine similarity or Euclidean distance, which is not particularly limited in the embodiments of the present disclosure.
According to the embodiment of the disclosure, different weights may be configured for the subject-word importance, the third matching probability, and the fourth matching probability; the three are weighted and summed based on these weights, and the first candidate texts and the second candidate texts are ranked based on the weighted sum to obtain a ranking result. The reference texts may be the texts ranked in the top 3000 of the ranking result.
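The final three-factor score can be sketched as follows, using cosine similarity (one of the options the text names) for the fourth matching probability; the weight triple is an illustrative assumption.

```python
import math


def fourth_match_probability(abstract_feature, subject_feature):
    """Cosine similarity between a first-candidate abstract's semantic
    feature and the subject words' semantic feature."""
    dot = sum(a * b for a, b in zip(abstract_feature, subject_feature))
    na = math.sqrt(sum(a * a for a in abstract_feature))
    nb = math.sqrt(sum(b * b for b in subject_feature))
    return dot / (na * nb) if na and nb else 0.0


def overall_score(importance, third_p, fourth_p, w=(0.4, 0.3, 0.3)):
    """Weighted sum of subject-word importance, indispensable-word
    probability, and semantic similarity; weights are assumptions."""
    return w[0] * importance + w[1] * third_p + w[2] * fourth_p
```

Candidates are ranked by `overall_score` and the top-ranked texts are taken as the reference text for reply generation.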
According to the embodiment of the disclosure, the reference text most relevant to the question text may be obtained based on a comprehensive evaluation of subject-word importance, indispensable-word recognition, and the semantic relevance of candidate texts, thereby improving the generation quality of the reply text.
Fig. 5 schematically illustrates knowledge retrieval according to subject words, according to an embodiment of the disclosure.
As shown in fig. 5, in embodiment 500, a Term inverted database 522 and an ANN semantic vector database 523 are searched according to the subject words 521; the retrieval results of the ANN semantic vector database 523 enter a forward database 524 for further information screening, and merged candidate texts 525 are finally obtained. The relevance 526 of the candidate texts to the question is obtained by analysis according to the subject words 521 and the merged candidate texts 525. According to the relevance 526, the knowledge 527 most relevant to the question is selected from the merged candidate texts 525.
Embodiment 500 further includes a knowledge base construction process: the content 5201 published by an authorized creator is recognized by multimodal algorithms such as ASR and OCR to obtain a historical text 5202. The historical text 5202 is segmented and input into the LLM model to obtain a paragraph text-paragraph abstract 5203_1 and a chapter topic-chapter abstract 5203_2 in question-and-answer form. After processing based on coarse word segmentation, part of speech, and Term importance (word importance), the paragraph text-paragraph abstract 5203_1 and the chapter topic-chapter abstract 5203_2 are stored in the Term inverted database 522. After additionally processing the semantic features of the abstracts, they are stored in the ANN semantic vector database 523.
Fig. 6 schematically illustrates a text generation method according to another embodiment of the present disclosure.
As shown in fig. 6, in embodiment 600, based on the context information "What are the application conditions of the place-B civil-service examination?" and the question "I am also following this; is it limited to place-A candidates only?" 601, the LLM performs screening, refinement, and information supplementation to obtain a semantically complete question: "What are the application conditions of the place-B civil-service examination? Is it limited to place-A candidates only?" 602.
Based on the creator information "teacher A, career-planning category, uid", the knowledge base of teacher A is retrieved to obtain a reference text {historical question and answer: Question: What are the application conditions of the place-B civil-service examination? Answer: The application conditions of the place-B civil-service examination include score, physical examination, and single-subject score; ... generally, examinees above xx points may apply. According to the recruitment plan, there is no restriction on the candidate's place of origin, ...}. The semantically complete question and the reference text 603 are assembled into a prompt, which is input into the LLM model as prompt 604, obtaining the reply text "There is no restriction on the place of origin in the application conditions of the place-B civil-service examination" 605.
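The prompt-assembly step in embodiment 600 can be sketched as follows; the template wording is an assumption, since the patent does not specify the prompt format.

```python
def assemble_prompt(question, reference_texts):
    """Assemble the semantically complete question and the retrieved
    reference texts into a single prompt string for the LLM."""
    refs = "\n".join(f"- {r}" for r in reference_texts)
    return ("Answer the question using the reference material below.\n"
            f"References:\n{refs}\n"
            f"Question: {question}\nAnswer:")
```

The resulting string plays the role of prompt 604 and is passed to the LLM, whose completion is the reply text 605.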
Fig. 7 schematically shows a block diagram of a text generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the apparatus 700 may include a word segmentation module 710, a query module 720, and a first generation module 730.
The word segmentation module 710 is configured to segment the question text according to the semantic relatedness between adjacent words in the question text, so as to obtain a plurality of subject words;
the query module 720 is configured to obtain reference texts corresponding to the multiple subject terms by querying a target knowledge base according to the multiple subject terms, where the target knowledge base is constructed according to historical viewpoint contents published by the target object; and
the first generation module 730 is configured to generate a reply text for replying to the question text according to the question text and the reference text.
According to an embodiment of the present disclosure, the target knowledge base includes a first index knowledge base and a second index knowledge base; the query module may include: the system comprises a first query sub-module, a second query sub-module and a first obtaining sub-module.
And the first query sub-module is used for obtaining a first candidate text by querying a first index knowledge base according to the plurality of subject words, wherein the first index knowledge base comprises index relations between abstracts of the candidate text and the candidate text.
And the second query sub-module is used for obtaining a second candidate text by querying a second index knowledge base according to the plurality of subject words, wherein the second index knowledge base comprises the index relation between semantic features of abstracts of the candidate text and the candidate text.
The first obtaining sub-module is used for obtaining the reference text from the first candidate text and the second candidate text based on the semantic relatedness of the first candidate text, the second candidate text, and the plurality of subject words.
According to an embodiment of the present disclosure, the first query sub-module may include: the device comprises a first matching unit, a first obtaining unit and a second obtaining unit.
And the first matching unit is used for matching the plurality of subject words with abstracts of the plurality of candidate texts of the first index knowledge base to obtain a first matching probability.
And the first obtaining unit is used for obtaining a first target abstract from abstracts of the candidate texts based on the first matching probability.
The second obtaining unit is used for obtaining the first candidate text based on the association relation between the abstract of the candidate text and the candidate text according to the first target abstract.
According to an embodiment of the present disclosure, the second query sub-module may include: the device comprises a first extraction unit, a second matching unit, a third obtaining unit and a fourth obtaining unit.
And the first extraction unit is used for extracting a plurality of semantic features of a plurality of subject words.
And the second matching unit is used for matching the semantic features of the subject words with the semantic features of the abstracts of the candidate texts of the second index knowledge base to obtain a second matching probability.
And a third obtaining unit, configured to obtain a second target digest from digests of the plurality of candidate texts based on the second matching probability.
And a fourth obtaining unit, configured to obtain, according to the second target abstract, a second candidate text based on an association relationship between the abstract of the candidate text and the candidate text.
According to an embodiment of the present disclosure, the first obtaining sub-module may include: a determination unit, a fifth obtaining unit, and a sixth obtaining unit.
And the determining unit is used for determining a plurality of weights of the plurality of subject words.
And a fifth obtaining unit, configured to obtain the subject-word importance according to the subject words in the abstract of the first candidate text, the subject words in the abstract of the second candidate text, the subject words in the question text, and the plurality of weights.
And a sixth obtaining unit, configured to sort the first candidate text and the second candidate text according to the subject-word importance, so as to obtain the reference text.
According to an embodiment of the present disclosure, the first obtaining sub-module further includes: a third matching unit and a seventh obtaining unit. And the third matching unit is used for respectively matching the predetermined subject word with the abstract of the first candidate text and the abstract of the second candidate text to obtain a third matching probability. And a seventh obtaining unit, configured to sort the first candidate text and the second candidate text according to the subject-word importance and the third matching probability, so as to obtain the reference text.
According to an embodiment of the present disclosure, the first obtaining sub-module further includes: a second extraction unit, a fourth matching unit, and an eighth obtaining unit. And the second extraction unit is used for extracting the first semantic features of the abstract of the first candidate text and the second semantic features of the plurality of subject words. And the fourth matching unit is used for matching the first semantic features with the second semantic features to obtain a fourth matching probability. And the eighth obtaining unit is used for sorting the first candidate text and the second candidate text according to the subject word importance degree, the third matching probability and the fourth matching probability to obtain the reference text.
According to an embodiment of the present disclosure, the above apparatus further includes: the device comprises a text segmentation module, a first extraction module and a first construction module. And the text segmentation module is used for segmenting the historical text according to paragraph features of the historical text published by the target object to obtain a plurality of text paragraphs. And the first extraction module is used for extracting the subject content of the text paragraphs to obtain a plurality of paragraph summaries corresponding to the text paragraphs. And the first construction module is used for constructing a target knowledge base according to the plurality of paragraph summaries and the plurality of text paragraphs.
According to an embodiment of the present disclosure, the above apparatus further includes: the device comprises a second extraction module, a second generation module and a second construction module. And the second extraction module is used for extracting the subject content of the historical text to obtain a text abstract corresponding to the historical text. And the second generation module is used for generating the problem text corresponding to the text abstract according to the text abstract. And the second construction module is used for constructing a target knowledge base according to the problem text and the text abstract corresponding to the text abstract.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
According to an embodiment of the present disclosure, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to an embodiment of the present disclosure, a non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the method described above.
According to an embodiment of the present disclosure, a computer program product comprising a computer program which, when executed by a processor, implements a method as above.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, for example, a text generation method. For example, in some embodiments, the text generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When a computer program is loaded into RAM 803 and executed by computing unit 801, one or more steps of the text generation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the text generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, which may be executed and/or interpreted on a programmable system including at least one programmable processor; the programmable processor may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (23)
1. A text generation method, comprising:
performing word segmentation on a question text according to semantic relevance between adjacent words in the question text, to obtain a plurality of subject words;
obtaining a reference text corresponding to the plurality of subject words by querying a target knowledge base according to the plurality of subject words, wherein the target knowledge base is constructed according to historical viewpoint content published by a target object; and
generating a reply text for replying to the question text according to the question text and the reference text.
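Purely as an illustration (not part of the claimed subject matter), the three steps of claim 1 can be sketched in Python. Every function below is a hypothetical toy stand-in: segmentation is a naive stopword-filtered split rather than the claimed semantic-relevance segmentation, and the "generation model" simply concatenates the retrieved references.

```python
def segment_question(question: str) -> list[str]:
    """Toy stand-in for semantic word segmentation: split and drop stopwords."""
    stopwords = {"what", "is", "the", "a", "an", "of", "about"}
    return [w.strip("?.,").lower() for w in question.split()
            if w.strip("?.,").lower() not in stopwords]

def query_knowledge_base(subject_words, knowledge_base):
    """Return stored viewpoint texts whose summary contains a subject word."""
    hits = []
    for summary, text in knowledge_base.items():
        if any(w in summary.split() for w in subject_words):
            hits.append(text)
    return hits

def generate_reply(question, reference_texts):
    """Toy stand-in for the generation model: concatenate the references."""
    return " ".join(reference_texts) if reference_texts else "No reference found."

# Hypothetical knowledge base: summary -> full historical viewpoint text.
knowledge_base = {
    "views on inflation": "The target object argued inflation is transitory.",
    "views on hiring": "The target object favours slow hiring.",
}
words = segment_question("What is the view about inflation?")
refs = query_knowledge_base(words, knowledge_base)
reply = generate_reply("What is the view about inflation?", refs)
```

The point of the sketch is only the data flow: question text → subject words → knowledge-base lookup → reply text grounded in the target object's published viewpoints.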
2. The method of claim 1, wherein the target knowledge base comprises a first index knowledge base and a second index knowledge base, and the obtaining a reference text corresponding to the plurality of subject words by querying a target knowledge base according to the plurality of subject words comprises:
obtaining a first candidate text by querying the first index knowledge base according to the plurality of subject words, wherein the first index knowledge base comprises an index relationship between an abstract of a candidate text and the candidate text;
obtaining a second candidate text by querying the second index knowledge base according to the plurality of subject words, wherein the second index knowledge base comprises an index relationship between a semantic feature of an abstract of a candidate text and the candidate text; and
obtaining the reference text from the first candidate text and the second candidate text based on semantic relevance among the first candidate text, the second candidate text, and the plurality of subject words.
3. The method of claim 2, wherein the obtaining a first candidate text by querying the first index knowledge base according to the plurality of subject words comprises:
matching the plurality of subject words with abstracts of a plurality of candidate texts in the first index knowledge base to obtain a first matching probability;
obtaining a first target abstract from the abstracts of the plurality of candidate texts based on the first matching probability; and
obtaining the first candidate text according to the first target abstract, based on an association relationship between the abstract of a candidate text and the candidate text.
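As an illustrative sketch only (names and scoring rule are hypothetical, not the claimed ones), the lexical first-index lookup of claim 3 can be modelled with a first matching probability equal to the fraction of subject words found in each abstract:

```python
def first_matching_probability(subject_words, abstract):
    """Fraction of subject words that appear in the abstract (toy measure)."""
    hits = sum(1 for w in subject_words if w in abstract)
    return hits / len(subject_words)

def query_first_index(subject_words, index):
    """index maps abstract -> candidate text (the claimed index relationship).
    Pick the best-matching abstract, then follow it to its candidate text."""
    best_abstract = max(
        index, key=lambda a: first_matching_probability(subject_words, a))
    return index[best_abstract]

# Hypothetical first index knowledge base.
index = {
    "summary about interest rates": "Full paragraph on interest rates.",
    "summary about hiring policy": "Full paragraph on hiring policy.",
}
text = query_first_index(["interest", "rates"], index)
```

The abstract acts as a compact lexical key; only after an abstract wins the match is the full candidate text retrieved through the abstract-to-text association.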
4. The method of claim 2, wherein the obtaining a second candidate text by querying the second index knowledge base according to the plurality of subject words comprises:
extracting a plurality of semantic features of the plurality of subject words;
matching the semantic features of the plurality of subject words with semantic features of abstracts of a plurality of candidate texts in the second index knowledge base to obtain a second matching probability;
obtaining a second target abstract from the abstracts of the plurality of candidate texts based on the second matching probability; and
obtaining the second candidate text according to the second target abstract, based on an association relationship between the abstract of a candidate text and the candidate text.
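For illustration only, the semantic second-index lookup of claim 4 can be sketched with bag-of-words count vectors as the "semantic features" and cosine similarity as the second matching probability; a real system would use learned embeddings, so everything here is a hypothetical toy:

```python
import math
from collections import Counter

def feature(words):
    """Toy semantic feature: a bag-of-words count vector."""
    return Counter(words)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def query_second_index(subject_words, index):
    """index maps abstract -> candidate text; abstracts compared by feature."""
    q = feature(subject_words)
    best = max(index, key=lambda abstract: cosine(q, feature(abstract.split())))
    return index[best]

# Hypothetical second index knowledge base.
index = {
    "remote work boosts productivity": "Paragraph on remote work.",
    "open offices hurt focus": "Paragraph on open offices.",
}
text = query_second_index(["remote", "work"], index)
```

Unlike the lexical first index, this path can in principle match abstracts that share meaning but not surface words, which is why the claims keep both indexes.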
5. The method of claim 2, wherein the obtaining the reference text from the first candidate text and the second candidate text based on semantic relevance among the first candidate text, the second candidate text, and the plurality of subject words comprises:
determining a plurality of weights of the plurality of subject words;
obtaining a subject word importance according to the subject words in the abstract of the first candidate text, the subject words in the abstract of the second candidate text, the subject words in the question text, and the plurality of weights; and
sorting the first candidate text and the second candidate text according to the subject word importance to obtain the reference text.
6. The method of claim 5, further comprising:
matching a preset subject word with the abstract of the first candidate text and the abstract of the second candidate text respectively to obtain a third matching probability; and
sorting the first candidate text and the second candidate text according to the subject word importance and the third matching probability to obtain the reference text.
7. The method of claim 6, further comprising:
extracting a first semantic feature of the abstract of the first candidate text and second semantic features of the plurality of subject words;
matching the first semantic feature with the second semantic features to obtain a fourth matching probability; and
sorting the first candidate text and the second candidate text according to the subject word importance, the third matching probability, and the fourth matching probability to obtain the reference text.
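As an illustration of the ranking described in claims 5 to 7 (the weighting and combination rule below are invented for the sketch, not taken from the claims), each candidate can be scored by a weighted subject-word importance plus the extra matching probabilities, and the candidates sorted by that score:

```python
def subject_word_importance(subject_words, weights, abstract, question):
    """Sum the weights of subject words present in both abstract and question."""
    return sum(weights[w] for w in subject_words
               if w in abstract and w in question)

def rank_candidates(candidates, subject_words, weights, question,
                    third_prob, fourth_prob):
    """candidates: list of (abstract, text); *_prob: abstract -> float.
    Combine importance and both matching probabilities, sort descending."""
    def score(item):
        abstract, _ = item
        return (subject_word_importance(subject_words, weights,
                                        abstract, question)
                + third_prob.get(abstract, 0.0)
                + fourth_prob.get(abstract, 0.0))
    return [text for _, text in sorted(candidates, key=score, reverse=True)]

# Hypothetical candidates and weights.
candidates = [("talks about tax reform", "Tax text"),
              ("talks about trade policy", "Trade text")]
weights = {"tax": 2.0, "reform": 1.0}
ranked = rank_candidates(candidates, ["tax", "reform"], weights,
                         "what about tax reform",
                         third_prob={},
                         fourth_prob={"talks about trade policy": 0.5})
```

The top-ranked texts would then serve as the reference text handed to the generation step.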
8. The method of claim 1, further comprising:
acquiring context information of the question text; and
supplementing the question text with information according to the context information of the question text to generate a semantically complete question text.
9. The method of claim 1, further comprising:
segmenting a historical text published by the target object according to paragraph features of the historical text to obtain a plurality of text paragraphs;
extracting subject content of the plurality of text paragraphs to obtain a plurality of paragraph summaries corresponding to the plurality of text paragraphs; and
constructing the target knowledge base according to the plurality of paragraph summaries and the plurality of text paragraphs.
10. The method of claim 9, further comprising:
extracting subject content of the historical text to obtain a text abstract corresponding to the historical text;
generating a question text corresponding to the text abstract according to the text abstract; and
constructing the target knowledge base according to the question text corresponding to the text abstract and the text abstract.
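The knowledge-base construction of claims 9 and 10 can be sketched as follows, purely for illustration: paragraphs are split on blank lines and the "summary" is simply the first sentence, whereas the claims leave both the paragraph features and the summary extraction to a real model.

```python
def split_paragraphs(historical_text: str) -> list[str]:
    """Toy paragraph segmentation: split on blank lines."""
    return [p.strip() for p in historical_text.split("\n\n") if p.strip()]

def summarize(paragraph: str) -> str:
    """Toy subject-content extraction: take the first sentence."""
    return paragraph.split(".")[0] + "."

def build_knowledge_base(historical_text: str) -> dict[str, str]:
    """Index each paragraph summary to its paragraph (summary -> paragraph)."""
    paragraphs = split_paragraphs(historical_text)
    return {summarize(p): p for p in paragraphs}

# Hypothetical historical text published by the target object.
history = ("Inflation was discussed at length. Prices rose in Q2.\n\n"
           "Hiring slowed this year. Teams stayed small.")
kb = build_knowledge_base(history)
```

The resulting summary-to-paragraph mapping is exactly the shape the retrieval claims assume: a short key that is cheap to match, associated with the full text to return.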
11. A text generation apparatus comprising:
a word segmentation module configured to perform word segmentation on a question text according to semantic relevance between adjacent words in the question text, to obtain a plurality of subject words;
a query module configured to obtain a reference text corresponding to the plurality of subject words by querying a target knowledge base according to the plurality of subject words, wherein the target knowledge base is constructed according to historical viewpoint content published by a target object; and
a first generation module configured to generate a reply text for replying to the question text according to the question text and the reference text.
12. The apparatus of claim 11, wherein the target knowledge base comprises a first index knowledge base and a second index knowledge base, and the query module comprises:
a first query sub-module configured to obtain a first candidate text by querying the first index knowledge base according to the plurality of subject words, wherein the first index knowledge base comprises an index relationship between an abstract of a candidate text and the candidate text;
a second query sub-module configured to obtain a second candidate text by querying the second index knowledge base according to the plurality of subject words, wherein the second index knowledge base comprises an index relationship between a semantic feature of an abstract of a candidate text and the candidate text; and
a first obtaining sub-module configured to obtain the reference text from the first candidate text and the second candidate text based on semantic relevance among the first candidate text, the second candidate text, and the plurality of subject words.
13. The apparatus of claim 12, wherein the first query submodule comprises:
a first matching unit configured to match the plurality of subject words with abstracts of a plurality of candidate texts in the first index knowledge base to obtain a first matching probability;
a first obtaining unit configured to obtain a first target abstract from the abstracts of the plurality of candidate texts based on the first matching probability; and
a second obtaining unit configured to obtain the first candidate text according to the first target abstract, based on an association relationship between the abstract of a candidate text and the candidate text.
14. The apparatus of claim 12, wherein the second query submodule comprises:
a first extraction unit configured to extract a plurality of semantic features of the plurality of subject words;
a second matching unit configured to match the semantic features of the plurality of subject words with semantic features of abstracts of a plurality of candidate texts in the second index knowledge base to obtain a second matching probability;
a third obtaining unit configured to obtain a second target abstract from the abstracts of the plurality of candidate texts based on the second matching probability; and
a fourth obtaining unit configured to obtain the second candidate text according to the second target abstract, based on an association relationship between the abstract of a candidate text and the candidate text.
15. The apparatus of claim 12, wherein the first obtaining submodule comprises:
a determining unit configured to determine a plurality of weights of the plurality of subject words;
a fifth obtaining unit configured to obtain a subject word importance according to the subject words in the abstract of the first candidate text, the subject words in the abstract of the second candidate text, the subject words in the question text, and the plurality of weights; and
a sixth obtaining unit configured to sort the first candidate text and the second candidate text according to the subject word importance to obtain the reference text.
16. The apparatus of claim 15, further comprising:
a third matching unit configured to match a preset subject word with the abstract of the first candidate text and the abstract of the second candidate text respectively to obtain a third matching probability; and
a seventh obtaining unit configured to sort the first candidate text and the second candidate text according to the subject word importance and the third matching probability to obtain the reference text.
17. The apparatus of claim 16, further comprising:
a second extraction unit configured to extract a first semantic feature of the abstract of the first candidate text and second semantic features of the plurality of subject words;
a fourth matching unit configured to match the first semantic feature with the second semantic features to obtain a fourth matching probability; and
an eighth obtaining unit configured to sort the first candidate text and the second candidate text according to the subject word importance, the third matching probability, and the fourth matching probability to obtain the reference text.
18. The apparatus of claim 11, further comprising:
an acquisition module configured to acquire context information of the question text; and
an information supplementing module configured to supplement the question text with information according to the context information of the question text to generate a semantically complete question text.
19. The apparatus of claim 11, further comprising:
a text segmentation module configured to segment a historical text published by the target object according to paragraph features of the historical text to obtain a plurality of text paragraphs;
a first extraction module configured to extract subject content of the plurality of text paragraphs to obtain a plurality of paragraph summaries corresponding to the plurality of text paragraphs; and
a first construction module configured to construct the target knowledge base according to the plurality of paragraph summaries and the plurality of text paragraphs.
20. The apparatus of claim 11, further comprising:
a second extraction module configured to extract subject content of the historical text to obtain a text abstract corresponding to the historical text;
a second generation module configured to generate a question text corresponding to the text abstract according to the text abstract; and
a second construction module configured to construct the target knowledge base according to the question text corresponding to the text abstract and the text abstract.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310876300.5A CN116882372A (en) | 2023-07-17 | 2023-07-17 | Text generation method, device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116882372A true CN116882372A (en) | 2023-10-13 |
Family
ID=88264055
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310876300.5A Pending CN116882372A (en) | 2023-07-17 | 2023-07-17 | Text generation method, device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116882372A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117609444A (en) * | 2023-11-08 | 2024-02-27 | 天讯瑞达通信技术有限公司 | Searching question-answering method based on large model |
CN117609444B (en) * | 2023-11-08 | 2024-06-25 | 天讯瑞达通信技术有限公司 | Searching question-answering method based on large model |
CN117273868A (en) * | 2023-11-20 | 2023-12-22 | 浙江口碑网络技术有限公司 | Shop recommendation method and device, electronic equipment and storage medium |
CN117556061A (en) * | 2023-11-20 | 2024-02-13 | 曾昭涵 | Text output method and device, electronic equipment and storage medium |
CN117556061B (en) * | 2023-11-20 | 2024-05-24 | 曾昭涵 | Text output method and device, electronic equipment and storage medium |
CN117743539A (en) * | 2023-12-20 | 2024-03-22 | 北京百度网讯科技有限公司 | Text generation method and device based on large language model |
CN117763114A (en) * | 2023-12-25 | 2024-03-26 | 北京智谱华章科技有限公司 | Method and equipment for generating medical question-answer text based on retrieval enhancement generation |
CN117992569A (en) * | 2024-01-26 | 2024-05-07 | 百度时代网络技术(北京)有限公司 | Method, device, equipment and medium for generating document based on generation type large model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11227118B2 (en) | Methods, devices, and systems for constructing intelligent knowledge base | |
CN116882372A (en) | Text generation method, device, electronic equipment and storage medium | |
CN107193792B (en) | Method and device for generating article based on artificial intelligence | |
CN106960030B (en) | Information pushing method and device based on artificial intelligence | |
CN111428010B (en) | Man-machine intelligent question-answering method and device | |
CN110162768B (en) | Method and device for acquiring entity relationship, computer readable medium and electronic equipment | |
CN114610845B (en) | Intelligent question-answering method, device and equipment based on multiple systems | |
CN114116997A (en) | Knowledge question answering method, knowledge question answering device, electronic equipment and storage medium | |
CN113407677B (en) | Method, apparatus, device and storage medium for evaluating consultation dialogue quality | |
CN112699645A (en) | Corpus labeling method, apparatus and device | |
CN114003682A (en) | Text classification method, device, equipment and storage medium | |
CN117112595A (en) | Information query method and device, electronic equipment and storage medium | |
CN111444321B (en) | Question answering method, device, electronic equipment and storage medium | |
CN114298007A (en) | Text similarity determination method, device, equipment and medium | |
CN109145261B (en) | Method and device for generating label | |
CN117609418A (en) | Document processing method, device, electronic equipment and storage medium | |
CN116186220A (en) | Information retrieval method, question and answer processing method, information retrieval device and system | |
CN111368036B (en) | Method and device for searching information | |
CN110502630B (en) | Information processing method and device | |
CN112905752A (en) | Intelligent interaction method, device, equipment and storage medium | |
CN116992111B (en) | Data processing method, device, electronic equipment and computer storage medium | |
CN112288548B (en) | Method, device, medium and electronic equipment for extracting key information of target object | |
CN113505889B (en) | Processing method and device of mapping knowledge base, computer equipment and storage medium | |
CN118760759A (en) | Document-oriented question-answering method and device, electronic equipment, storage medium and product | |
CN118690009A (en) | Recall method of search result and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||