WO2018105373A1 - Information processing device, information processing method, and information processing system - Google Patents
- Publication number
- WO2018105373A1 (PCT/JP2017/041758)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user
- display
- information
- text
- text information
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Definitions
- the present technology relates to an information processing apparatus, an information processing method, and an information processing system, and more particularly, to an information processing apparatus, an information processing method, and an information processing system that can support natural conversation using voice recognition.
- The present technology has been made in view of such circumstances and aims to support natural conversation using voice recognition.
- An information processing apparatus according to one aspect of the present technology includes a voice acquisition unit that acquires voice information of a first user input to a voice input device, and a display control unit that controls display of text information corresponding to the acquired voice information on a display device for a second user. The display control unit performs control regarding the display amount of the text information based on at least one of the display amount of the text information on the display device and the input amount of the voice information input from the voice input device.
- An information processing method according to one aspect of the present technology includes a voice acquisition step in which the information processing apparatus acquires voice information of a first user input to a voice input device, and a display control step of controlling display of text information corresponding to the acquired voice information on a display device for a second user. In the display control step, control regarding the display amount of the text information is performed based on at least one of the display amount of the text information on the display device and the input amount of the voice information input from the voice input device.
- An information processing system according to one aspect of the present technology includes a voice input device that acquires voice information of a first user, a display control device that controls display of text information corresponding to the acquired voice information, and a display device that displays the text information for a second user in accordance with control from the display control device. The display control device performs control regarding the display amount of the text information based on at least one of the display amount of the text information on the display device and the input amount of the voice information input from the voice input device.
- the input voice information of the first user is acquired, and display of text information corresponding to the acquired voice information on the display device for the second user is controlled.
- Control regarding the display amount of the text information is performed based on at least one of the display amount of the text information on the display device and the input amount of the voice information input from the voice input device.
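The claimed control can be pictured with a minimal sketch: new text is held back when either the amount already on screen or the rate of incoming voice input exceeds a threshold. The function name and threshold values below are illustrative assumptions, not taken from the patent.

```python
def should_suppress(display_chars, input_chars_per_sec,
                    display_limit=200, rate_limit=8.0):
    """Return True if the display amount of new text should be reduced.

    display_chars: characters currently shown on the display device.
    input_chars_per_sec: recognized-character input rate from the voice input.
    The limits are invented example values.
    """
    return display_chars >= display_limit or input_chars_per_sec >= rate_limit
```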
- natural conversation using voice recognition can be performed.
- FIG. 1 shows a first configuration example of a conversation support apparatus according to an embodiment of the present technology, in which the conversation support device 10 is configured as a single housing.
- The conversation support device 10 supports a conversation between a person who has no difficulty hearing (hereinafter, user A) and a person who has difficulty hearing (hereinafter, user B).
- The first user in one aspect of the present technology corresponds to user A in this configuration example, and the second user corresponds to user B.
- Note that the first user in one aspect of the present technology may be any user who inputs voice. That is, the first user (the user who inputs voice) is not limited to a single subject (user) and may be a plurality of subjects (users).
- Similarly, the second user in one aspect of the present technology may be any user who visually recognizes the displayed utterance text, and is likewise not limited to a single subject and may be a plurality of subjects.
- the utterance of user A is converted into text (hereinafter referred to as utterance text) by voice recognition processing, and the utterance text is displayed on the display unit 43 for user B.
- the user B can understand the utterance text (character information) corresponding to the utterance (voice information) of the user A.
- the utterance text displayed on the display unit 43 is displayed until the user B finishes reading or a predetermined time elapses.
- For the determination of whether user B has finished reading, an image of user B captured by the imaging unit 41 or an utterance of user B collected by the sound collecting unit 42 is used.
- A display unit 22 for user A (FIG. 2) is provided on the back side of the display unit 43 for user B, and the display unit 22 shows the same display as the display unit 43, that is, the utterance text corresponding to user A's utterance. Thereby, user A can confirm whether his or her own utterance has been correctly recognized.
- FIG. 2 is a block diagram illustrating an internal configuration example of the conversation support apparatus according to the embodiment of the present technology.
- the conversation support device 10 includes a sound collection unit 21, a display unit 22, an operation input unit 23, an information processing unit 30, an imaging unit 41, a sound collection unit 42, a display unit 43, and an operation input unit 44.
- the sound collection unit 21, the display unit 22, and the operation input unit 23 are provided mainly for the user A.
- the sound collecting unit 21 collects the voice (utterance) spoken by the user A and supplies the corresponding speech signal to the information processing unit 30.
- the display unit 22 displays a screen corresponding to the image signal supplied from the information processing unit 30 (for example, an image signal for displaying an utterance text corresponding to the utterance of the user A on the screen).
- the operation input unit 23 receives various operations from the user A and notifies the information processing unit 30 of operation signals corresponding thereto.
- the information processing unit 30 converts the speech signal supplied from the sound collection unit 21 into speech text by speech recognition processing. Further, the information processing unit 30 supplies an image signal for displaying the utterance text on the screen to the display unit 43. Details of the information processing unit 30 will be described later.
- the imaging unit 41, the sound collection unit 42, the display unit 43, and the operation input unit 44 are provided mainly for the user B.
- the imaging unit 41 images the user B and supplies the moving image signal obtained as a result to the information processing unit 30.
- the sound collecting unit 42 collects the voice (speech) spoken by the user B and supplies the corresponding speech signal to the information processing unit 30.
- the display unit 43 displays a screen corresponding to an image signal supplied from the information processing unit 30 for displaying the utterance text corresponding to the utterance of the user A on the screen.
- the operation input unit 44 receives various operations from the user B and notifies the information processing unit 30 of operation signals corresponding thereto.
- FIG. 3 shows a configuration example of functional blocks included in the information processing unit 30.
- The information processing unit 30 includes a voice recognition unit 31, an image recognition unit 32, a misrecognition learning unit 33, an analysis unit 35, an editing unit 36, an additional writing learning unit 37, a display waiting list holding unit 38, a display control unit 39, and a feedback control unit 40.
- The speech recognition unit 31 converts the utterance signal corresponding to user A's utterance supplied from the sound collection unit 21 into utterance text by speech recognition processing, and supplies the utterance text to the analysis unit 35.
- The speech recognition unit 31 also converts the utterance signal corresponding to user B's utterance supplied from the sound collection unit 42 into utterance text by speech recognition processing, detects from that text a specific keyword indicating that user B has finished reading (for example, pre-registered words such as "Yes", "Okay", "OK", or "Next"), and supplies the detection result to the display control unit 39.
- Based on the moving image signal supplied from the imaging unit 41, the image recognition unit 32 detects a specific action indicating that user B has finished reading (for example, nodding, or watching the screen and then looking away from it), and supplies the detection result to the display control unit 39. The image recognition unit 32 also measures the distance between user B and the display unit 43 based on the moving image signal and notifies the display control unit 39 of the measurement result. This distance is used to set the character size of the utterance text displayed on the display unit 43; for example, the longer the distance between user B and the display unit 43, the larger the character size.
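The distance-to-character-size relation above is stated only as monotonic; the following sketch makes it concrete with invented breakpoints (the function name and values are assumptions for illustration, not from the patent).

```python
def character_size_pt(distance_m):
    """Map the measured viewer distance (meters) to a font size in points.

    Farther viewer -> larger characters; the breakpoints are example values.
    """
    if distance_m < 0.5:
        return 18
    elif distance_m < 1.5:
        return 24
    else:
        return 36
```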
- When a wearable device is used, the line-of-sight direction may be determined based on the direction of the wearable device, that is, the direction of user B's head or body.
- the direction of the wearable device can be determined based on position information acquired from a camera, an acceleration sensor, a gyro sensor, or the like provided in the wearable device.
- the Purkinje image of the eyeball of the user B and the pupil center may be determined using an infrared camera and an infrared LED, and the line-of-sight direction of the user B may be determined based on these.
- In response to editing input from user A or user B on the utterance text corresponding to user A's utterance, which is the result of the speech recognition process (for example, an erase instruction operation, a re-speak instruction operation, or an NG word registration instruction operation), the misrecognition learning unit 33 registers the misrecognized words included in the utterance text in the misrecognition list 34. For a word registered in the misrecognition list 34, the misrecognition learning unit 33 requests a recognition result (the second candidate or the like) other than the misrecognized word (the first candidate of the recognition result).
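A minimal sketch of how such a misrecognition list might behave: registered words are skipped in favor of the next recognition candidate. The class and its candidate-list interface are illustrative assumptions, not the patent's implementation.

```python
class MisrecognitionList:
    """Holds words learned as misrecognitions and skips them on later picks."""

    def __init__(self):
        self.words = set()

    def register(self, word):
        # called when the user erases, re-speaks, or marks a word as NG
        self.words.add(word)

    def choose(self, candidates):
        """candidates: recognition results ordered best-first."""
        for cand in candidates:
            if cand not in self.words:
                return cand
        return candidates[0]  # every candidate is listed: keep the best one
```

For example, after `register("flower")`, choosing from `["flower", "flour"]` falls back to the second candidate.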
- the analysis unit 35 analyzes the speech text corresponding to the speech of the user A generated by the speech recognition unit 31, for example, by decomposing the speech text into parts of speech or extracting keywords.
- Based on the analysis result, the editing unit 36 performs editing processing on the utterance text, such as adding line feeds and page breaks as appropriate, and text amount suppression processing that deletes particles and the like whose removal does not impair the meaning of the utterance text, and supplies the result to the display waiting list holding unit 38. In the editing process, at least one of the line feed, page break, or text amount suppression processing may be performed, and any of them may be omitted.
- Note that the editing unit 36 can group a plurality of related utterance texts into a thread and supply them to the display waiting list holding unit 38.
- an icon corresponding to a thread waiting for display may be displayed while displaying the current thread.
- Display objects indicating threads waiting to be displayed are not limited to icons and may be set as appropriate. With such a configuration, it is easy to grasp how far user B has read the other party's utterance text. Moreover, user B can act to restrain user A's input amount based on the progress of the utterance text.
- Based on an editing operation that user A inputs using the operation input unit 23 on the utterance text corresponding to user A's utterance displayed on the display unit 22, the editing unit 36 controls processing such as deleting a sentence of the utterance text, inserting utterance text corresponding to a re-spoken utterance, and registering NG words.
- The editing unit 36 also controls processing for appending a symbol such as "?" (question mark) to the utterance text, based on an appending operation that user A inputs using the operation input unit 23 on the utterance text corresponding to user A's utterance displayed on the display unit 22. Note that symbols, pictograms, emoticons, and the like other than "?" may also be appended.
- Similarly, the editing unit 36 can perform editing processing based on an editing operation or an appending operation that user B inputs using the operation input unit 44 on the utterance text corresponding to user A's utterance displayed on the display unit 43. In other words, both user A and user B can perform editing and appending operations on the displayed utterance text corresponding to user A's utterance.
- The additional writing learning unit 37 learns the appending operations input by user A or user B and, based on the learning result, controls the editing unit 36 so that the same symbol or the like is appended to the same utterance text even when there is no appending operation from user A or user B.
- The display waiting list holding unit 38 registers the edited utterance text, which has undergone at least one of line feed, page break, or text amount suppression processing (the text amount suppression processing may be omitted depending on the number of characters), in the display waiting list in chronological order, that is, in the order in which user A spoke.
- When an utterance text registered in the display waiting list is read by the display control unit 39, it is deleted from the display waiting list.
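The display waiting list can be sketched as a simple FIFO: entries are held in utterance order and deleted once the display control reads them. Class and method names are illustrative.

```python
from collections import deque

class DisplayWaitList:
    """FIFO of edited utterance texts, held in the order user A spoke."""

    def __init__(self):
        self._queue = deque()

    def hold(self, utterance_text):
        self._queue.append(utterance_text)   # chronological order

    def read_next(self):
        """Return the oldest entry and delete it from the list."""
        return self._queue.popleft() if self._queue else None
```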
- The display control unit 39 reads utterance texts from the display waiting list in chronological order, generates an image signal for displaying the read utterance text on the screen, and supplies it to the display unit 22 and the display unit 43. The display control unit 39 also controls the display amount of the utterance text on the display unit 22 and the display unit 43 based on the amount of utterance text currently displayed there, the detection result of the specific keyword indicating that user B has finished reading supplied from the voice recognition unit 31, and the detection result of the specific action indicating that user B has finished reading supplied from the image recognition unit 32. Furthermore, the display control unit 39 sets the character size for displaying the utterance text according to the distance between user B and the display unit 43.
- The feedback control unit 40 controls feedback to user A, the speaker, in accordance with user A's utterance speed, the length of user A's utterances, the number of recognized characters per unit time, the amount of utterance text displayed on the display unit 43, the amount of utterance text registered in the display waiting list, whether user B has finished reading, user B's reading speed, and the like. Using character display, voice output, or the like, this feedback notifies user A to raise (or lower) the utterance speed, asks user A to speak again, or prompts the next utterance.
- The feedback control unit 40 also controls feedback that prompts user B to read the utterance text, using character display or the like, in accordance with the amount of utterance text displayed on the display unit 43, the amount of utterance text registered in the display waiting list, whether user B has finished reading, user B's reading speed, and the like.
- the above-described functional blocks included in the information processing unit 30 do not have to be housed in the same casing, and may be arranged in a distributed manner. Some or all of these functional blocks may be arranged on a server on the Internet, a so-called cloud network.
- FIG. 4 illustrates a second configuration example of the conversation support apparatus according to the embodiment of the present technology.
- the conversation support device 10 is configured as a system including a plurality of different electronic devices.
- The connection between the plurality of electronic devices constituting the conversation support apparatus 10 may be wired, or may use predetermined wireless communication (for example, Bluetooth (registered trademark) or Wi-Fi (trademark)).
- the conversation support device 10 includes a smartphone 50 used by the user A and a tablet PC (hereinafter referred to as a tablet) 60 used by the user B.
- FIG. 5 shows a state in which the constituent elements of the conversation support apparatus 10 shown in FIG. 2 are divided into the smartphone 50 and the tablet PC 60.
- the smartphone 50 among the components of the conversation support device 10, the sound collection unit 21, the display unit 22, the operation input unit 23, and the information processing unit 30 are realized by the smartphone 50.
- The microphone, display, and touch panel included in the smartphone 50 correspond to the sound collection unit 21, the display unit 22, and the operation input unit 23, respectively.
- An application program executed by the smartphone 50 corresponds to the information processing unit 30.
- the imaging unit 41, the sound collection unit 42, the display unit 43, and the operation input unit 44 are realized by the tablet 60.
- the camera, microphone, display, touch panel, and the like included in the tablet 60 correspond to the imaging unit 41, the sound collection unit 42, the display unit 43, and the operation input unit 44, respectively.
- the speech recognition unit 31 among the functional blocks of the information processing unit 30 is arranged in a server 72 that can be connected via the Internet 71.
- FIG. 6 illustrates a third configuration example of the conversation support apparatus according to the embodiment of the present technology.
- the conversation support device 10 is configured as a system including a plurality of electronic devices.
- The third configuration example includes the smartphone 50 used by user A, a projector 80 that projects video for displaying the utterance text onto a position that user B lying on a bed can see, for example, the wall or ceiling of the room, and a camera 110 arranged on the ceiling or the like.
- FIG. 7 shows a state in which the constituent elements of the conversation support apparatus 10 shown in FIG. 2 are divided into a smartphone 50, a projector 80, and a camera 110.
- the sound collection unit 21, the display unit 22, the operation input unit 23, and the information processing unit 30 are realized by the smartphone 50.
- the imaging unit 41 and the sound collection unit 42 are realized by the camera 110.
- the image sensor and the microphone included in the camera 110 correspond to the imaging unit 41 and the sound collecting unit 42, respectively.
- the display unit 43 and the operation input unit 44 are realized by the projector 80.
- the projection unit and the remote controller included in the projector 80 correspond to the display unit 43 and the operation input unit 44, respectively.
- the voice recognition unit 31 among the functional blocks included in the information processing unit 30 is arranged in a server 72 that can be connected via the Internet 71.
- FIG. 8 illustrates a fourth configuration example of the conversation support apparatus according to the embodiment of the present technology.
- the conversation support apparatus 10 is configured as a system including a plurality of different electronic devices.
- The fourth configuration example includes a neck microphone 100 used by user A, a television receiver (hereinafter, TV) 90 arranged at a position where user A and user B can see it, and a camera 110 mounted on the TV 90.
- FIG. 9 shows a state in which the components of the conversation support device 10 shown in FIG. 2 are divided into a neck microphone 100, a TV 90, and a camera 110.
- the sound collection unit 21 is realized by the neck microphone 100.
- the neck microphone 100 may be provided with a speaker that outputs sound in addition to the sound collecting unit 21.
- the imaging unit 41 and the sound collection unit 42 are realized by the camera 110.
- the display unit 43 and the operation input unit 44 are realized by the TV 90.
- the display and remote controller included in the TV 90 correspond to the display unit 43 and the operation input unit 44, respectively. It is assumed that the display and the remote controller included in the TV 90 also serve as the display unit 22 and the operation input unit 23 for the user A.
- the voice recognition unit 31 among the functional blocks of the information processing unit 30 is arranged in a server 72 that can be connected via the Internet 71.
- the conversation support device 10 can be configured as one electronic device, or can be configured as a system in which a plurality of electronic devices are combined.
- the first to fourth configuration examples described above can be combined as appropriate.
- In addition to the examples described above, a wearable device such as a watch-type terminal or a head-mounted display, a monitor for a PC (personal computer), or the like can be employed.
- FIG. 10 is a flowchart explaining the display waiting list generation process performed by the conversation support apparatus 10. This process is executed repeatedly from when the conversation support device 10 is activated until the power is turned off.
- In step S1, when user A speaks, the voice is acquired by the sound collection unit 21. The sound collection unit 21 converts user A's voice into an utterance signal and supplies it to the information processing unit 30.
- In step S2, in the information processing unit 30, the speech recognition unit 31 converts the utterance signal corresponding to user A's utterance into utterance text by performing speech recognition processing.
- In step S3, the analysis unit 35 analyzes the utterance text corresponding to user A's utterance generated by the voice recognition unit 31.
- In step S4, based on the analysis result, the editing unit 36 performs editing processing including at least one of line feed, page break, or text amount suppression processing on the utterance text corresponding to user A's utterance, and supplies the edited utterance text to the display waiting list holding unit 38.
- In step S5, the display waiting list holding unit 38 holds the edited utterance texts supplied from the editing unit 36 in chronological order. Thereafter, the process returns to step S1 and the subsequent steps are repeated.
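Steps S1 to S5 above can be sketched as a loop over utterance signals, with the recognizer, analyzer, and editor stubbed out as injected functions (all names here are assumptions; the real components are the units described earlier).

```python
def generate_wait_list(utterance_signals, recognize, analyze, edit, wait_list):
    """Run the S1-S5 pipeline over a batch of utterance signals."""
    for signal in utterance_signals:        # S1: voice acquired as a signal
        text = recognize(signal)            # S2: speech recognition -> text
        analysis = analyze(text)            # S3: parts of speech / keywords
        edited = edit(text, analysis)       # S4: line feed / page break / suppression
        wait_list.append(edited)            # S5: held in chronological order
    return wait_list
```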
- FIG. 11 is a flowchart for explaining utterance text display processing by the conversation support apparatus 10.
- The utterance text display process is executed repeatedly, in parallel with the display waiting list generation process described above, from when the conversation support device 10 is activated until the power is turned off.
- In step S11, the display control unit 39 determines whether utterance text is currently displayed on the screens of the display units 22 and 43. If it is determined that text is displayed, the process proceeds to step S12. In step S12, the display control unit 39 waits until a predetermined shortest display time has elapsed since the display of the currently displayed utterance text started. When the shortest display time has elapsed, the process proceeds to step S13.
- In step S13, the display control unit 39 determines whether user B's reading of the displayed utterance text has been detected, based on the detection result of the specific keyword indicating that user B has finished reading supplied from the voice recognition unit 31 and the detection result of the specific action indicating that user B has finished reading supplied from the image recognition unit 32.
- FIG. 12 shows an example of determination of the read detection of the user B in step S13.
- When a specific action indicating that user B has finished reading, such as a nod, is detected a predetermined number of times (for example, twice) from the image recognition result of the moving image capturing user B, it is estimated that user B has finished reading, and it is determined that user B's reading has been detected.
- Likewise, when a specific keyword indicating that user B has finished reading can be detected while the conversation is progressing between user A and user B, it is estimated that user B has understood, and it is determined that user B's reading has been detected.
- the read determination of the user B is not limited to the above-described example.
- the user may arbitrarily add a specific keyword indicating read or a specific operation indicating read.
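The two read-detection cues described above might be combined as follows; the keyword set and nod threshold are example values, not taken from the patent.

```python
# Example registered keywords indicating "already read" (illustrative set).
READ_KEYWORDS = {"yes", "okay", "ok", "next"}

def read_detected(spoken_word=None, nod_count=0, nod_threshold=2):
    """Return True if either cue indicates user B has finished reading.

    spoken_word: a word recognized from user B's utterance, if any.
    nod_count: how many times the nod action has been detected.
    """
    if spoken_word is not None and spoken_word.lower() in READ_KEYWORDS:
        return True
    return nod_count >= nod_threshold
```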
- In step S14, the display control unit 39 determines whether a predetermined longest display time has elapsed since the display of the currently displayed utterance text started. Until the longest display time elapses, the process returns to step S13, and steps S13 and S14 are repeated. When user B's reading has been detected or the longest display time has elapsed, the process proceeds to step S15.
- In step S15, the display control unit 39 reads utterance texts from the display waiting list in chronological order, generates an image signal for displaying the read utterance texts on the screen, and supplies it to the display unit 22 and the display unit 43.
- When the screens of the display unit 22 and the display unit 43 are already full of utterance text, the screen is scrolled: the utterance text that was displayed first disappears from the screen, and the utterance text newly read from the display waiting list is displayed on the screen.
- If it is determined in step S11 that the utterance text is not currently displayed on the screens of the display units 22 and 43, steps S12 to S14 are skipped, and the process proceeds to step S15.
- As described above, the display waiting list generation process and the utterance text display process are executed in parallel, so that the utterance of the user A is presented to the user B as utterance text, and the display of the utterance text advances as the user B reads it.
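The steps above (S13 to S15) can be sketched as follows; the helper names, the five-line screen, and the 10-second longest display time are illustrative assumptions, not values from the specification:

```python
from collections import deque

def should_advance(read_detected, elapsed, longest_display_time=10.0):
    """Steps S13/S14: advance when the read of the user B is detected
    or when the predetermined longest display time has elapsed."""
    return read_detected or elapsed >= longest_display_time

def display_next(waiting_list, screen, max_lines=5):
    """Step S15: read the oldest utterance text from the display waiting
    list and show it, scrolling when the screen is already full."""
    if not waiting_list:
        return screen
    text = waiting_list.popleft()   # time-series order
    if len(screen) >= max_lines:    # screen full: scroll
        screen.pop(0)               # the first-displayed text disappears
    screen.append(text)
    return screen
```

The waiting-list producer (speech recognition) and this consumer would run in parallel, as the description notes.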
- FIG. 13 shows a situation where, for example, a user A who is an elementary school student and a user B who is a mother have a conversation using the conversation support device 10.
- In this case, it is assumed that the user A uttered at a stretch, without pausing, "When I went to school yesterday, I was told to bring 10000 yen because they were collecting money for the school trip."
- FIG. 14 shows display examples of the display unit 43 in the situation shown in FIG. 13. A of FIG. 14 shows a state in which the editing process is not reflected,
- B of FIG. 14 shows a state in which the line breaks and page breaks of the editing process are reflected, and
- C of FIG. 14 shows a state in which the line breaks, page breaks, and text amount suppression processing are all reflected.
- On the display unit 43, the utterance text in which the editing process is not reflected is initially displayed, as shown in A of FIG. 14. In this state, line breaks and page breaks occur regardless of the meaning and context, so the text is difficult to read, and since the numerical value (10000 yen in the case of the figure) is divided in the middle, the numerical value may be misread.
- Next, line breaks and page breaks in the editing process are reflected, as shown in B of FIG. 14.
- In this case, line breaks and page breaks are made according to the meaning and context of the utterance text, which makes the text easier to read and suppresses misreading of numerical values and the like.
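A simplified sketch of such meaning-aware line breaking: break only at clause boundaries so that a value such as "10000 yen" is never split in the middle. Splitting on punctuation is a simplifying assumption; the actual editing unit works from the analysis result of the utterance text.

```python
import re

def wrap_utterance(text, width):
    """Break the utterance text into lines at clause boundaries
    (after , . ! ?) instead of at arbitrary character positions."""
    clauses = [c.strip() for c in re.split(r"(?<=[,.!?])\s*", text) if c.strip()]
    lines, current = [], ""
    for clause in clauses:
        # Start a new line rather than splitting a clause in the middle.
        if current and len(current) + len(clause) + 1 > width:
            lines.append(current)
            current = clause
        else:
            current = f"{current} {clause}".strip()
    if current:
        lines.append(current)
    return lines
```

Page breaks could be handled the same way one level up, grouping lines into screens.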
- the displayed utterance text may be deleted from the screen.
- Alternatively, the display may be returned to that of A of FIG. 14.
- the display may return to the display shown in FIG. 14B.
- Alternatively, the display of B of FIG. 14 may be shown first, and when the user B performs the first operation, the display may change to that of C of FIG. 14.
- Then, the displayed utterance text may be erased from the screen. Thereafter, each time the user B performs the first operation again, the display may return to that of C of FIG. 14, B of FIG. 14, or A of FIG. 14.
- In the above example, the editing process is reflected in the displayed utterance text in response to an operation by the user B; however, it is also possible to reflect the editing process in the displayed utterance text in accordance with an operation by the user A.
- at least one of the first operation, the second operation, or the third operation may be regarded as a predetermined operation in one aspect of the present technology.
- FIG. 15 shows a situation where user A and user B have a conversation using the conversation support device 10. However, illustration of user B is omitted. In the case of the figure, it is assumed that the user A utters a relatively short sentence such as “Good morning”, “Tomorrow at 10 o'clock at the Shinagawa station”.
- FIG. 16 shows a display example, on the display unit 43, of the utterance texts corresponding to the utterances of the user A shown in FIG. 15.
- Since the user A utters relatively short sentences, the utterance texts corresponding to the sentences are also displayed divided into short parts, as shown in FIG. 16.
- In the case of FIG. 16, the utterance texts other than "Good morning" are displayed in a state in which the text amount suppression process, which leaves nouns and verbs and deletes particles, is reflected. That is, in the text amount suppression process of this example, parts of speech that are less important for understanding the meaning and context of the utterance text are omitted as appropriate.
- the wording to be omitted is not limited to the part of speech, and may be appropriately set by the user.
- the particles may be displayed less prominently than nouns or verbs related to the meaning or context of the utterance text.
- the utterance text may be displayed such that nouns, verbs, etc. stand out from particles, etc.
- FIG. 17 shows a display example in which the character size of particles and the like is made smaller than that of the nouns and verbs related to the meaning and context of the utterance text, so that the nouns and verbs stand out.
- In addition, the characters of particles and the like may be displayed in a light color and the characters of nouns, verbs, and the like in a dark color; the brightness of the characters of particles and the like may be lowered and the brightness of the characters of nouns, verbs, and the like raised; or the lines of the characters of particles and the like may be thinned and the lines of the characters of nouns, verbs, and the like thickened.
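The part-of-speech based suppression and emphasis described above can be sketched as follows, assuming (surface, part-of-speech) pairs are already available from a separate morphological analysis step; the tag names and style values are illustrative:

```python
def suppress_text(tokens, drop_pos=("particle", "auxiliary")):
    """Text amount suppression: keep words important to the meaning and
    context (nouns, verbs, ...) and omit less important parts of speech
    such as particles."""
    return " ".join(word for word, pos in tokens if pos not in drop_pos)

def emphasis_style(pos):
    """Alternative to deletion: display particles and the like less
    prominently (smaller, lighter) so that nouns and verbs stand out."""
    if pos in ("noun", "verb"):
        return {"size": "large", "color": "dark"}
    return {"size": "small", "color": "light"}
```

Whether to delete or merely de-emphasize would be chosen per display state, as in B and C of FIG. 14.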
- FIG. 18 shows a display example when the delete button 111 is provided corresponding to each utterance text displayed on the display unit 22 for the user A.
- Each utterance text shown in FIG. 18 corresponds to an utterance of the user A shown in FIG. 15.
- the utterance text can be deleted by operating the delete button 111.
- In the case of FIG. 18, the word that should be recognized as "Shinagawa" is misrecognized as "Sanagawa", so when the user A who has found this misrecognition operates the delete button 111, the utterance text including "Sanagawa" is erased. Then, the misrecognition learning unit 33 learns that the utterance text including "Sanagawa" was erased (registers it in the misrecognition list 34).
- the user A can delete the misrecognized utterance text or the utterance text corresponding to the wrong utterance.
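The erase-and-learn flow of the delete button 111 can be sketched as follows (the function and list names are illustrative, not from the specification):

```python
def erase_utterance(displayed, index, misrecognition_list):
    """Erase the utterance text selected with the delete button and
    record the erased text so that the misrecognition learning side can
    register it (cf. misrecognition list 34)."""
    erased = displayed.pop(index)
    misrecognition_list.append(erased)  # learned as a misrecognition
    return erased
```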
- The delete button 111 can also be provided on the display unit 43 for the user B. In that case, the user B can erase the utterance text that has been read by operating the delete button 111.
- In this case, the fact is notified to the user A side, so that the user A can confirm the read of the user B with respect to the erased utterance text.
- Conversely, when the utterance text is erased on the user A side, the fact may be notified to the user B side.
- This notification method may use screen display or audio output.
- FIG. 19 shows a display example when the re-utterance button 112 is provided corresponding to each utterance text displayed on the display unit 22 for the user A. Note that each utterance text shown in FIG. 19 corresponds to an utterance of the user A shown in FIG. 15.
- When the user A finds misrecognition in the utterance text that is the voice recognition result of his or her utterance, the user A can re-utter (speak again) the utterance by operating the re-utterance button 112.
- In the case of FIG. 19, the word that should be recognized as "Shinagawa" is misrecognized as "Sanagawa", so the user A who has found this misrecognition operates the re-utterance button 112.
- Then, the currently displayed "Gather at Sanagawa tomorrow at 10:00" is replaced with the utterance text that is the speech recognition result of the re-utterance (if correctly recognized, "Gather at Shinagawa tomorrow at 10:00"). Further, the misrecognition learning unit 33 learns that the utterance text including "Sanagawa" has been replaced (registers it in the misrecognition list 34).
- In this way, by operating the re-utterance button 112, the user A can replace the display of the misrecognized utterance text, or of the utterance text corresponding to a wrong utterance, with the utterance text corresponding to the re-utterance at the same position.
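The in-place replacement performed on re-utterance can be sketched as (names are illustrative):

```python
def replace_with_reutterance(displayed, index, new_text, misrecognition_list):
    """Replace the misrecognized utterance text at the same position with
    the speech recognition result of the re-utterance, and record the
    replaced text for misrecognition learning."""
    old_text = displayed[index]
    displayed[index] = new_text
    misrecognition_list.append(old_text)
    return displayed
```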
- The re-utterance button 112 can also be provided on the display unit 43 for the user B. In that case, in response to the user B operating the re-utterance button 112, the user A is notified so as to prompt the re-utterance.
- This notification method may use screen display or audio output.
- FIG. 20 shows a display example when an NG word registration button 113 is provided corresponding to each utterance text displayed on the display unit 22 for the user A.
- Note that each utterance text shown in FIG. 20 corresponds to an utterance of the user A shown in FIG. 15.
- For example, when the user A finds misrecognition in the utterance text that is the speech recognition result of his or her utterance and does not want the misrecognition result to appear again, the user A can register it as an NG word by operating the NG word registration button 113.
- the user A can register a word that is erroneously recognized and is not desired to be displayed again as an NG word.
- The NG word registration button 113 can also be provided on the display unit 43 for the user B. In that case, the user B can also register a word that he or she does not want to be redisplayed as an NG word by operating the NG word registration button 113.
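A minimal sketch of keeping registered NG words from being displayed again; masking with "***" is an assumption made for the example (the specification only says the word is registered as an NG word):

```python
def filter_ng_words(utterance_text, ng_words, mask="***"):
    """Hide registered NG words so that a misrecognized word the user
    does not want to see again is not displayed as-is."""
    for word in ng_words:
        utterance_text = utterance_text.replace(word, mask)
    return utterance_text
```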
- FIG. 21 shows a display example when the append button 114 is provided corresponding to each utterance text displayed on the display unit 22 for the user A.
- Each utterance text shown in FIG. 21 corresponds to an utterance of the user A shown in FIG. 15.
- The display example of FIG. 21 shows a result of operating the append button 114: "?" is added to the utterance text "Drinking medicine for lunch today" corresponding to the utterance of the user A. In this case, the fact that "?" was added to "Drinking medicine for lunch today" is registered in the additional writing learning unit 37.
- In this way, the user A can add "?" to the utterance text by operating the append button 114.
- The append button 114 can also be provided on the display unit 43 for the user B. In that case, when the user B does not understand the meaning of the displayed utterance text or wants to know more detailed contents, the user B selects a word or the like included in the displayed utterance text and then operates the append button 114, whereby the user B can inquire of the user A about the meaning of the word or the like.
- In the display examples described above, the delete button 111, the re-utterance button 112, the NG word registration button 113, and the append button 114 are displayed individually, but they may be displayed simultaneously.
- Instead of providing these buttons, a predetermined touch operation (for example, when the operation input unit 23 is a touch panel, a double-tap operation, a long-tap operation, a flick operation, or the like) may be assigned to the erase instruction, the re-utterance instruction, the NG word registration, and the append instruction.
- Furthermore, a three-dimensional gesture operation performed by the user A or the user B may be assigned to the erase instruction, the re-utterance instruction, the NG word registration, and the append instruction.
- the touch operation may be regarded as a two-dimensional gesture operation.
- The three-dimensional gesture operation may be performed using a controller including an acceleration sensor or a gyro sensor, or may be performed using an image recognition result related to the user's motion.
- these touch operations and three-dimensional gesture operations may be simply referred to as “gesture operations”.
- For example, a nodding action of the user B, an action of shaking the head, and the like can be assigned as the gesture operation.
- When a gaze detection function is employed in the wearable device, a physical action accompanying the movement of the gaze of the user B with respect to the displayed utterance text may be learned as a gesture operation. According to such a configuration, the accuracy of the read determination based on the gesture operation can be improved.
- Further, a predetermined magic word uttered by the user A or the user B may be assigned to the erase instruction, the re-utterance instruction, the NG word registration, and the append instruction.
- In response to such an operation, the display of the utterance text corresponding to the utterance may be stopped.
- Note that stopping the display of the utterance text can include stopping the display of text in the middle of analysis, that is, stopping the display processing of text that has not yet been displayed.
- When the display of the utterance text is stopped, one sentence immediately before the erase instruction may be collectively erased by analyzing the text information. As a result, it is possible to cancel text information (such as hesitations or fillers) that the user A has unintentionally input.
- Further, after a predetermined gesture or a predetermined magic word, the display of the voice input subsequently input to the information processing unit 30 may be prohibited. Thereby, since the user A can arbitrarily select a state in which no utterance is transmitted, the display of an unintended utterance can be suppressed.
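The magic-word behavior above can be sketched as a mute toggle; treating the magic word as a toggle (and the phrase "stop caption" itself) are assumptions made for the example — the specification says only that display of the following input may be prohibited:

```python
def process_speech(text, state, magic_word="stop caption"):
    """Return the text to display, or None when display is prohibited.
    Uttering the magic word toggles a muted state in which subsequent
    voice input is not displayed."""
    if magic_word in text:
        state["muted"] = not state.get("muted", False)
        return None  # the magic word itself is never displayed
    if state.get("muted", False):
        return None  # display of this input is prohibited
    return text
```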
- FIG. 22 shows an example of a usage situation when the conversation support apparatus 10 can be used by three or more people.
- In the case of FIG. 22, the conversation support device 10 is used to support the conversation between the users A1, A2, and A3, who have no hearing concerns, and the user B, who has hearing concerns.
- Each of the users A1 to A3 has a smartphone 50 for the user A, and the utterance texts corresponding to the utterances collected by the smartphones 50 existing within a predetermined distance range are grouped and collectively displayed on the display unit 43.
- The grouping of the smartphones 50 can be realized, for example, by each smartphone 50 outputting a predetermined sound wave to the others and collecting and analyzing the sound waves output by the smartphones other than itself.
- the smartphone 50 may be detected from an image obtained by the camera 110 installed on the ceiling, and the position of each smartphone 50 may be specified.
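Once positions are estimated (from exchanged sound waves or from the ceiling camera image), grouping the smartphones within a predetermined distance range can be sketched as follows; the greedy grouping rule is an illustrative assumption:

```python
import math

def group_by_distance(positions, max_distance):
    """Group devices whose mutual distances are within max_distance.
    `positions` maps a device id to an (x, y) coordinate; how the
    coordinates are obtained is outside this sketch."""
    groups = []
    for device in positions:
        for group in groups:
            if all(math.dist(positions[device], positions[other]) <= max_distance
                   for other in group):
                group.append(device)
                break
        else:
            groups.append([device])
    return groups
```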
- On the display unit 43, the utterance texts corresponding to the utterances of the users A1 to A3 are displayed in time series, and a speaker mark 121 representing the speaker is displayed in association with each utterance text so that the user B can determine which of the users A1 to A3 uttered the displayed utterance text.
- FIG. 23 shows another method for indicating which of the users A1 to A3 uttered the displayed utterance text: the direction in which the speaker is located, as seen from the user B looking at the display unit 43, is displayed on the screen. In the case of FIG. 23, an utterance direction instruction mark 131 is displayed on the right side of the screen of the display unit 43, indicating that the speaker is on the right side as seen from the user B.
- Note that the relative directions of the users A1, A2, and A3 as seen from the user B looking at the display unit 43 can be detected, for example, from an image obtained by the camera 110 installed on the ceiling.
- FIG. 24 shows a situation in which the user A and the user B facing each other across the table are using the conversation support device 10.
- the projector 80 may collectively project the screen of the display unit 22 for the user A and the screen of the display unit 43 for the user B onto the table.
- At that time, the screen of the display unit 22 for the user A is displayed in a direction in which the user A can easily read it, and the screen of the display unit 43 for the user B is displayed in a direction in which the user B can easily read it.
- FIG. 25 shows an example of feedback to the user A who is the speaker among the users who are using the conversation support device 10.
- For example, when the display amount of the utterance text or the input amount of the utterance becomes large, the feedback control unit 40 gives the user A, who is the speaker, feedback such as "Slow down", "The screen is full", "Please speak slowly", "Please wait", "Please pause once", or "There is unread text" to inform the user A to slow down the speaking rate. This feedback is performed by text display and voice output using the smartphone 50 or the like.
- Alternatively, as feedback, an indicator corresponding to the utterance speed of the user A and the length of the utterance breaks may be displayed on the screen, or an alarm sound or the like may be output.
- Furthermore, when the user A speaks at a speed or with breaks optimum for voice recognition or screen display, points may be given to the user A, and the user A may obtain some service benefits or rankings according to the given points.
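The feedback trigger can be sketched with illustrative thresholds; the actual amounts and messages would be design parameters, not values from the specification:

```python
def feedback_message(display_amount, input_amount, display_limit=200, input_limit=50):
    """Return a feedback message for the speaker when the display amount
    of utterance text or the input amount of speech grows too large,
    otherwise None."""
    if display_amount >= display_limit:
        return "The screen is full. Please wait."
    if input_amount >= input_limit:
        return "Please speak slowly."
    return None
```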
- In the present embodiment, the conversation support device 10 is used for the purpose of supporting the conversation between the user A, who has no hearing concerns, and the user B, who has hearing concerns.
- However, the present technology can also be applied to supporting conversations between people who use different languages. In that case, a translation process may be performed after the voice recognition process.
- Further, the conversation support device 10 may capture the mouth of the user A speaking as a moving image, and display the moving image of the user A's mouth together with the utterance text.
- In that case, the display of the utterance text and the motion of the moving image of the user A's mouth may be synchronized.
- the conversation support device 10 can be used for learning lip reading, for example.
- Further, the conversation support device 10 may record the utterance of the user A, store it in association with the utterance text that is the voice recognition result, and reproduce and display the saved result later.
- the series of processes described above can be executed by hardware or can be executed by software.
- a program constituting the software is installed in the computer.
- Here, the computer includes, for example, a computer incorporated in dedicated hardware, and a general-purpose computer capable of executing various functions by installing various programs.
- the smartphone 50 in the second configuration example described above corresponds to the computer.
- FIG. 26 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer 200, a CPU (Central Processing Unit) 201, a ROM (Read Only Memory) 202, and a RAM (Random Access Memory) 203 are mutually connected via a bus 204.
- An input / output interface 205 is further connected to the bus 204.
- An input unit 206, an output unit 207, a storage unit 208, a communication unit 209, and a drive 210 are connected to the input / output interface 205.
- the input unit 206 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 207 includes a display, a speaker, and the like.
- the storage unit 208 includes a hard disk, a nonvolatile memory, and the like.
- the communication unit 209 includes a network interface and the like.
- the drive 210 drives a removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- The CPU 201 loads the program stored in the storage unit 208 into the RAM 203 via the input / output interface 205 and the bus 204 and executes it, whereby the above-described series of processing is performed.
- the program executed by the computer 200 can be provided by being recorded in, for example, a removable medium 211 such as a package medium.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 208 via the input / output interface 205 by attaching the removable medium 211 to the drive 210.
- the program can be received by the communication unit 209 via a wired or wireless transmission medium and installed in the storage unit 208.
- the program can be installed in the ROM 202 or the storage unit 208 in advance.
- The program executed by the computer 200 may be a program in which processing is performed in time series in the order described in this specification, or may be a program in which processing is performed in parallel or at necessary timing such as when a call is made.
- the present technology can also have the following configurations.
- (1) An information processing apparatus including: a voice acquisition unit that acquires voice information of a first user input to a voice input device; and a display control unit that controls display of text information corresponding to the acquired voice information on a display device for a second user, in which the display control unit performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
- (2) The information processing apparatus according to (1), in which the display control unit suppresses the display amount of the text information when the display amount of the text information becomes a predetermined amount or more.
- (3) The information processing apparatus according to (1) or (2), in which the display control unit suppresses the display amount of the text information by suppressing a display amount of a predetermined part of speech included in the text information.
- (4) The information processing apparatus according to any one of (1) to (3), in which the display control unit suppresses the display amount of the text information based on a predetermined operation by the first user or the second user.
- (5) The information processing apparatus according to (4), in which the predetermined operation includes a first operation by the first user or the second user, and the display control unit erases the display of the text information based on the first operation after suppressing the display amount of the text information.
- (6) The information processing apparatus according to (5), in which the predetermined operation includes a second operation by the first user or the second user, and the display control unit displays the erased text information on the display device again based on the second operation after erasing the display of the text information.
- (7) The information processing apparatus according to any one of (1) to (6), in which the display control unit controls at least one of a line feed or a page break of the display of the text information according to an analysis result of the text information.
- (8) The information processing apparatus according to any one of (1) to (7), further including a notification unit that, when one of the first user or the second user performs an operation related to the text information, notifies the other of the first user or the second user of information indicating that the operation related to the text information has been performed.
- (9) The information processing apparatus according to (8), in which, when one of the first user or the second user performs an operation to suppress the display amount of the text information, the notification unit notifies the other of the first user or the second user that the display amount of the text information has been suppressed.
- (10) The information processing apparatus according to (8) or (9), in which, when one of the first user or the second user performs an operation to erase the display of the text information, the notification unit notifies the other of the first user or the second user that the display of the text information has been erased.
- (11) The information processing apparatus according to any one of (8) to (10), in which, when the second user performs an operation requesting re-utterance of the text information displayed on the display device, the notification unit performs a notification prompting the first user to re-utter.
- (12) The information processing apparatus according to any one of (8) to (11), in which, when the second user performs an operation for requesting an inquiry about the text information displayed on the display device, the notification unit notifies the first user that there has been an inquiry about the text information.
- (13) The information processing apparatus according to any one of (1) to (12), in which the display control unit suppresses the display amount of the text information on the display device based on a result of read detection of the second user based on at least one of an utterance or an action of the second user.
- (14) The information processing apparatus according to any one of (1) to (13), in which the display control unit stops displaying the text information on the display device based on at least one of an utterance or an action of the first user.
- (15) The information processing apparatus according to any one of (1) to (14), further including a feedback control unit that controls notification of feedback information to at least one of the first user or the second user based on at least one of the display amount of the text information on the display device or the input amount of the voice information.
- (16) The information processing apparatus according to (15), in which the feedback information is information that prompts the first user to change at least one of an utterance speed or an utterance break.
- (17) The information processing apparatus according to (15) or (16), in which the feedback information is information that prompts the second user to read the text information displayed on the display device.
- (18) The information processing apparatus according to any one of (1) to (17), further including a voice recognition unit that converts the voice information of the first user into the text information, in which the voice recognition unit is provided inside the information processing apparatus or on a server connected via the Internet.
- (19) An information processing method of an information processing apparatus, including, by the information processing apparatus: a voice acquisition step of acquiring voice information of a first user input to a voice input device; and a display control step of controlling display of text information corresponding to the acquired voice information on a display device for a second user, in which the display control step performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
- (20) An information processing system including: a voice input device that acquires voice information of a first user; a display control device that controls display of text information corresponding to the acquired voice information; and a display device that displays the text information for a second user in accordance with control from the display control device, in which the display control device performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
<First configuration example of conversation support device according to an embodiment of the present technology>
FIG. 1 shows a first configuration example of a conversation support apparatus according to an embodiment of the present technology, in which the conversation support apparatus 10 is formed as a single housing.
<Configuration example of conversation support apparatus according to an embodiment of the present technology>
FIG. 2 is a block diagram illustrating an internal configuration example of the conversation support apparatus according to the embodiment of the present technology.
<Configuration example of functional blocks of information processing unit 30>
FIG. 3 shows a configuration example of the functional blocks included in the information processing unit 30.
<Second configuration example of conversation support device according to an embodiment of the present technology>
FIG. 4 illustrates a second configuration example of the conversation support apparatus according to the embodiment of the present technology. In the second configuration example, the conversation support apparatus 10 is configured as a system including a plurality of different electronic devices. In this case, the connection between the plurality of electronic devices constituting the conversation support apparatus 10 may be a wired connection, or may use predetermined wireless communication (for example, Bluetooth (registered trademark), Wi-Fi (trademark), or the like).
<Third configuration example of conversation support device according to an embodiment of the present technology>
FIG. 6 illustrates a third configuration example of the conversation support apparatus according to the embodiment of the present technology. In the third configuration example, the conversation support apparatus 10 is configured as a system including a plurality of different electronic devices.
<Fourth configuration example of the conversation support device according to the embodiment of the present technology>
FIG. 8 illustrates a fourth configuration example of the conversation support apparatus according to the embodiment of the present technology. In the fourth configuration example, the conversation support apparatus 10 is configured as a system including a plurality of different electronic devices.
<Operation of conversation support apparatus 10>
Next, the operation of the conversation support apparatus 10 will be described.
<Specific example of editing processing including at least one of line feed, page break, or text amount suppression processing>
Next, a specific example of the editing process including at least one of line feed, page break, or text amount suppression processing by the editing unit 36 will be described.
<Other specific examples of editing processing including text amount suppression processing>
Next, another specific example of the editing process including the text amount suppression process will be described.
<Specific example of editing process corresponding to button operation by user>
Next, the editing process corresponding to a button operation by the user on the utterance text displayed on the screen will be described.
<Application examples of conversation support apparatus 10>
Next, application examples of the conversation support apparatus 10 will be described.
<Feedback for user A who is a speaker>
FIG. 25 shows an example of feedback to the user A, who is the speaker, among the users using the conversation support apparatus 10.
<Other application examples>
In the present embodiment, the conversation support apparatus 10 is used for supporting the conversation between the user A, who has no hearing concerns, and the user B, who has hearing concerns. However, the present technology can also be applied, for example, to supporting conversations between people who use different languages. In that case, a translation process may be performed after the voice recognition process.
<Another configuration example of the conversation support apparatus 10>
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes, for example, a computer incorporated in dedicated hardware, and a general-purpose computer capable of executing various functions by installing various programs. The smartphone 50 in the second configuration example described above corresponds to this computer.
(1)
音声入力装置に入力された第1のユーザの音声情報を取得する音声取得部と、
第2のユーザのための表示装置における、取得された前記音声情報に対応するテキスト情報の表示を制御する表示制御部と
を備え、
前記表示制御部は、前記表示装置における前記テキスト情報の表示量、または前記音声入力装置から入力された前記音声情報の入力量の少なくとも一方に基づいて、前記テキスト情報の表示量に関する制御を行う
情報処理装置。
(2)
前記表示制御部は、前記テキスト情報の表示量が所定の量以上となった場合、前記テキスト情報の表示量を抑制する
前記(1)に記載の情報処理装置。
(3)
前記表示制御部は、前記テキスト情報に含まれる所定の品詞の表示量を抑制することによって、前記テキスト情報の表示量を抑制する
前記(1)または(2)に記載の情報処理装置。
(4)
前記表示制御部は、前記第1のユーザまたは前記第2のユーザによる所定の操作に基づいて、前記テキスト情報の表示量を抑制する
前記(1)から(3)のいずれかに記載の情報処理装置。
(5)
前記所定の操作は、前記第1のユーザまたは前記第2のユーザによる第1の操作を含み、
前記表示制御部は、前記テキスト情報の表示量を抑制した後、前記第1の操作に基づいて、前記テキスト情報の表示を消去させる
前記(4)に記載の情報処理装置。
(6)
前記所定の操作は、前記第1のユーザまたは前記第2のユーザによる第2の操作を含み、
前記表示制御部は、前記テキスト情報の表示を消去させた後、前記第2の操作に基づいて、前記表示装置において消去させた前記テキスト情報を再び表示させる
前記(5)に記載の情報処理装置。
(7)
前記表示制御部は、前記テキスト情報の解析結果に従い、前記テキスト情報の表示の改行または改頁の少なくとも一方を制御する
前記(1)から(6)のいずれかに記載の情報処理装置。
(8)
前記第1のユーザまたは前記第2のユーザの一方が前記テキスト情報に関する操作を行った場合、前記テキスト情報に関する操作が行われたことを示す情報を、前記第1のユーザまたは前記第2のユーザの他方に対して通知する通知部をさらに備える
前記(1)から(7)のいずれかに記載の情報処理装置。
(9)
前記通知部は、前記第1のユーザまたは前記第2のユーザの一方が、前記テキスト情報の表示量を抑制させる操作を行った場合、前記第1のユーザまたは前記第2のユーザの他方に、前記テキスト情報の表示量が抑制されたことを通知する
前記(8)に記載の情報処理装置。
(10)
前記通知部は、前記第1のユーザまたは前記第2のユーザの一方が、前記テキスト情報の表示を消去する操作を行った場合、前記第1のユーザまたは前記第2のユーザの他方に、前記テキスト情報の表示が消去されたことを通知する
前記(8)または(9)に記載の情報処理装置。
(11)
前記通知部は、前記第2のユーザが、前記表示装置に表示された前記テキスト情報の再発話を要求する操作を行った場合、前記第1のユーザに再発話を促す通知を行う
前記(8)から(10)のいずれかに記載の情報処理装置。
(12)
前記通知部は、前記第2のユーザが、前記表示装置に表示された前記テキスト情報に関する問い合わせを要求するための操作を行った場合、前記第1のユーザに前記テキスト情報に関する問い合わせがあったことを通知する
前記(8)から(11)のいずれかに記載の情報処理装置。
(13)
前記表示制御部は、前記第2のユーザの発声または動作の少なくとも一方に基づく前記第2のユーザの既読検知の結果に基づいて、前記表示装置における前記テキスト情報の表示量を抑制する
前記(1)から(12)のいずれかに記載の情報処理装置。
(14)
前記表示制御部は、前記第1のユーザの発声または動作の少なくとも一方に基づき、前記表示装置における前記テキスト情報の表示を中止する
前記(1)から(13)のいずれかに記載の情報処理装置。
(15)
前記表示装置における前記テキスト情報の表示量または前記音声情報の入力量の少なくとも一方に基づき、前記第1のユーザまたは前記第2のユーザの少なくとも一方に対するフィードバック情報の通知を制御するフィードバック制御部をさらに備える
前記(1)から(14)のいずれかに記載の情報処理装置。
(16)
フィードバック情報は、前記第1のユーザに対して、発話速度、または発話区切りの少なくとも一方を変更するように促す情報である
前記(15)に記載の情報処理装置。
(17)
フィードバック情報は、前記第2のユーザに対して、前記表示装置に表示された前記テキスト情報の読み取りを促す情報である
前記(15)または(16)に記載の情報処理装置。
(18)
前記第1のユーザの前記音声情報を前記テキスト情報に変換する音声認識部をさらに備え、
前記音声認識部は、前記情報処理装置の内部、または、インターネットを介して接続するサーバ上に設けられている
前記(1)から(17)のいずれかに記載の情報処理装置。
(19)
情報処理装置の情報処理方法において、
前記情報処理装置による、
音声入力装置に入力された第1のユーザの音声情報を取得する音声取得ステップと、
第2のユーザのための表示装置における、取得された前記音声情報に対応するテキスト情報の表示を制御する表示制御ステップと
を含み、
前記表示制御ステップは、前記表示装置における前記テキスト情報の表示量、または前記音声入力装置から入力された前記音声情報の入力量の少なくとも一方に基づいて、前記テキスト情報の表示量に関する制御を行う
情報処理方法。
(20)
第1のユーザの音声情報を取得する音声入力装置と、
取得された前記音声情報に対応するテキスト情報の表示を制御する表示制御装置と、
前記表示制御装置からの制御に従い、前記テキスト情報を第2のユーザのために表示する表示装置と
を備え、
前記表示制御装置は、前記表示装置における前記テキスト情報の表示量、または前記音声入力装置から入力された前記音声情報の入力量の少なくとも一方に基づいて、前記テキスト情報の表示量に関する制御を行う
情報処理システム。 The present technology can also have the following configurations.
(1)
An information processing apparatus including:
a voice acquisition unit that acquires voice information of a first user input to a voice input device; and
a display control unit that controls display of text information corresponding to the acquired voice information on a display device for a second user,
wherein the display control unit performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
(2)
The information processing apparatus according to (1), wherein the display control unit suppresses the display amount of the text information when the display amount of the text information becomes a predetermined amount or more.
(3)
The information processing apparatus according to (1) or (2), wherein the display control unit suppresses the display amount of the text information by suppressing a display amount of a predetermined part of speech included in the text information.
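The suppression described in (2) and (3) can be illustrated with a short sketch: once the on-screen text reaches a budget, tokens of low-information parts of speech are dropped first. The token format, the POS labels, and the budget value are illustrative assumptions, not the implementation the application discloses.

```python
# Sketch of configurations (2)/(3): suppress the display amount once a
# threshold is reached, by dropping suppressible parts of speech first.
# Token format and POS tags are assumed for illustration only.

DISPLAY_BUDGET = 20                        # max characters kept on screen (assumed)
SUPPRESSIBLE_POS = {"particle", "filler"}  # POS classes suppressed first (assumed)

def suppress_display(tokens, budget=DISPLAY_BUDGET):
    """tokens: list of (surface, pos) pairs; returns the text to display."""
    text = "".join(surface for surface, _ in tokens)
    if len(text) < budget:
        return text                        # below the threshold: show everything
    # At or above the threshold: rebuild the text without suppressible tokens.
    kept = [s for s, pos in tokens if pos not in SUPPRESSIBLE_POS]
    return "".join(kept)

tokens = [("I", "pronoun"), (" will", "verb"), (" um", "filler"),
          (" go", "verb"), (" to", "particle"), (" the", "particle"),
          (" hospital", "noun")]
print(suppress_display(tokens))            # suppressed form of the sentence
```

A real implementation would obtain the POS tags from a morphological analyzer rather than from hand-labeled tokens.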
(4)
The information processing apparatus according to any one of (1) to (3), wherein the display control unit suppresses the display amount of the text information based on a predetermined operation by the first user or the second user.
(5)
The information processing apparatus according to (4), wherein the predetermined operation includes a first operation by the first user or the second user, and the display control unit erases the display of the text information based on the first operation after suppressing the display amount of the text information.
(6)
The information processing apparatus according to (5), wherein the predetermined operation includes a second operation by the first user or the second user, and the display control unit, after erasing the display of the text information, displays the erased text information on the display device again based on the second operation.
(7)
The information processing apparatus according to any one of (1) to (6), wherein the display control unit controls at least one of a line feed or a page break in the display of the text information according to an analysis result of the text information.
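Configuration (7) breaks lines from an analysis of the text rather than at a fixed width mid-phrase. A minimal sketch, in which splitting on clause punctuation stands in for the morphological analysis a real implementation would use:

```python
# Sketch of configuration (7): line breaks chosen at clause boundaries.
# The punctuation-based "analysis" and the width limit are assumptions.

import re

def layout(text, max_chars=16):
    # Split after clause punctuation; each clause stays unbroken.
    clauses = [c for c in re.split(r"(?<=[,.!?])\s*", text) if c]
    lines, current = [], ""
    for clause in clauses:
        if current and len(current) + len(clause) + 1 > max_chars:
            lines.append(current)          # break before the clause, never inside it
            current = clause
        else:
            current = (current + " " + clause).strip()
    if current:
        lines.append(current)
    return lines

print(layout("Take this medicine, three times a day, after meals."))
```

Breaking at clause boundaries keeps each displayed line readable on its own, which matters when lines are erased or paged independently.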
(8)
The information processing apparatus according to any one of (1) to (7), further including a notification unit that, when one of the first user and the second user performs an operation related to the text information, notifies the other of the first user and the second user of information indicating that the operation related to the text information has been performed.
(9)
The information processing apparatus according to (8), wherein, when one of the first user and the second user performs an operation of suppressing the display amount of the text information, the notification unit notifies the other of the first user and the second user that the display amount of the text information has been suppressed.
(10)
The information processing apparatus according to (8) or (9), wherein, when one of the first user and the second user performs an operation of erasing the display of the text information, the notification unit notifies the other of the first user and the second user that the display of the text information has been erased.
(11)
The information processing apparatus according to any one of (8) to (10), wherein, when the second user performs an operation requesting re-utterance of the text information displayed on the display device, the notification unit issues a notification prompting the first user to speak again.
(12)
The information processing apparatus according to any one of (8) to (11), wherein, when the second user performs an operation for requesting an inquiry about the text information displayed on the display device, the notification unit notifies the first user that there has been an inquiry about the text information.
(13)
The information processing apparatus according to any one of (1) to (12), wherein the display control unit suppresses the display amount of the text information on the display device based on a result of read detection for the second user based on at least one of an utterance or an action of the second user.
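The read detection in configuration (13) can be sketched as a small display controller that treats the listener's own utterance or a nod as a "read" signal and then suppresses the text already shown. The event names and the one-sentence buffer model are assumptions for illustration.

```python
# Sketch of configuration (13): a listener utterance or nod counts as read
# detection, and read text is suppressed from the display.

class DisplayController:
    def __init__(self):
        self.shown = []                  # sentences currently on screen

    def add_text(self, sentence):
        self.shown.append(sentence)

    def on_listener_event(self, event):
        # Assumed read-detection events from the second user.
        if event in ("utterance", "nod") and self.shown:
            self.shown.pop(0)            # suppress the oldest shown sentence

ctrl = DisplayController()
ctrl.add_text("Hello.")
ctrl.add_text("How are you?")
ctrl.on_listener_event("nod")
print(ctrl.shown)                        # only the unread sentence remains
```

In practice the events would come from speech detection and head-motion sensing rather than from explicit strings.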
(14)
The information processing apparatus according to any one of (1) to (13), wherein the display control unit stops displaying the text information on the display device based on at least one of an utterance or an action of the first user.
(15)
The information processing apparatus according to any one of (1) to (14), further including a feedback control unit that controls notification of feedback information to at least one of the first user or the second user based on at least one of the display amount of the text information on the display device or the input amount of the voice information.
(16)
The information processing apparatus according to (15), wherein the feedback information is information that prompts the first user to change at least one of an utterance speed and an utterance break.
(17)
The information processing apparatus according to (15) or (16), wherein the feedback information is information that prompts the second user to read the text information displayed on the display device.
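Configurations (15) to (17) compare how much speech has come in against how much text is still on screen and direct feedback to whichever user is falling behind. A minimal sketch; the threshold values and message wording are illustrative assumptions.

```python
# Sketch of configurations (15)-(17): feedback keyed on the input amount
# versus the display amount. Thresholds are assumed for illustration.

def feedback(input_chars, displayed_chars, capacity=100):
    """Return feedback messages keyed by recipient, or an empty dict."""
    msgs = {}
    backlog = input_chars - displayed_chars
    if backlog > capacity // 2:
        # Display can't keep up with the speech: ask the speaker to slow
        # down or pause between phrases, as in (16).
        msgs["first_user"] = "Please speak more slowly, with pauses."
    if displayed_chars > capacity:
        # The screen is full of unread text: prompt the listener, as in (17).
        msgs["second_user"] = "Please read the displayed text."
    return msgs

print(feedback(input_chars=180, displayed_chars=120))
```

Either condition can fire on its own, so only the lagging party is prompted.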
(18)
The information processing apparatus according to any one of (1) to (17), further including a voice recognition unit that converts the voice information of the first user into the text information, wherein the voice recognition unit is provided inside the information processing apparatus or on a server connected via the Internet.
(19)
An information processing method for an information processing apparatus, the method including, by the information processing apparatus:
a voice acquisition step of acquiring voice information of a first user input to a voice input device; and
a display control step of controlling display of text information corresponding to the acquired voice information on a display device for a second user,
wherein the display control step performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
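The two steps of the method in (19) can be sketched as a tiny pipeline: an acquisition step followed by a display-control step that caps the display amount. Recognition is replaced by a fake transcript here, since configuration (18) allows it to live on a remote server; all names and the character cap are assumptions.

```python
# Minimal sketch of the method in (19): voice acquisition step, then a
# display-control step that bounds the display amount.

def acquire_voice(audio):
    # Stand-in for the voice acquisition step plus recognition.
    return audio["transcript"]

def display_control(text, screen, max_chars=30):
    # Display-amount control: keep only the most recent max_chars characters.
    combined = screen + text
    return combined[-max_chars:]

screen = ""
for chunk in ({"transcript": "The pharmacy is "},
              {"transcript": "around the corner on the left."}):
    screen = display_control(acquire_voice(chunk), screen)
print(screen)
```

The cap causes older text to scroll out as new speech arrives, which is one simple way to realize "control related to the display amount".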
(20)
An information processing system including:
a voice input device that acquires voice information of a first user;
a display control device that controls display of text information corresponding to the acquired voice information; and
a display device that displays the text information for a second user under control from the display control device,
wherein the display control device performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
Claims (20)
- An information processing apparatus including:
a voice acquisition unit that acquires voice information of a first user input to a voice input device; and
a display control unit that controls display of text information corresponding to the acquired voice information on a display device for a second user,
wherein the display control unit performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
- The information processing apparatus according to claim 1, wherein the display control unit suppresses the display amount of the text information when the display amount of the text information becomes a predetermined amount or more.
- The information processing apparatus according to claim 2, wherein the display control unit suppresses the display amount of the text information by suppressing a display amount of a predetermined part of speech included in the text information.
- The information processing apparatus according to claim 2, wherein the display control unit suppresses the display amount of the text information based on a predetermined operation by the first user or the second user.
- The information processing apparatus according to claim 4, wherein the predetermined operation includes a first operation by the first user or the second user, and the display control unit erases the display of the text information based on the first operation after suppressing the display amount of the text information.
- The information processing apparatus according to claim 5, wherein the predetermined operation includes a second operation by the first user or the second user, and the display control unit, after erasing the display of the text information, displays the erased text information on the display device again based on the second operation.
- The information processing apparatus according to claim 2, wherein the display control unit controls at least one of a line feed or a page break in the display of the text information according to an analysis result of the text information.
- The information processing apparatus according to claim 1, further including a notification unit that, when one of the first user and the second user performs an operation related to the text information, notifies the other of the first user and the second user of information indicating that the operation related to the text information has been performed.
- The information processing apparatus according to claim 8, wherein, when one of the first user and the second user performs an operation of suppressing the display amount of the text information, the notification unit notifies the other of the first user and the second user that the display amount of the text information has been suppressed.
- The information processing apparatus according to claim 8, wherein, when one of the first user and the second user performs an operation of erasing the display of the text information, the notification unit notifies the other of the first user and the second user that the display of the text information has been erased.
- The information processing apparatus according to claim 8, wherein, when the second user performs an operation requesting re-utterance of the text information displayed on the display device, the notification unit issues a notification prompting the first user to speak again.
- The information processing apparatus according to claim 8, wherein, when the second user performs an operation for requesting an inquiry about the text information displayed on the display device, the notification unit notifies the first user that there has been an inquiry about the text information.
- The information processing apparatus according to claim 1, wherein the display control unit suppresses the display amount of the text information on the display device based on a result of read detection for the second user based on at least one of an utterance or an action of the second user.
- The information processing apparatus according to claim 1, wherein the display control unit stops displaying the text information on the display device based on at least one of an utterance or an action of the first user.
- The information processing apparatus according to claim 1, further including a feedback control unit that controls notification of feedback information to at least one of the first user or the second user based on at least one of the display amount of the text information on the display device or the input amount of the voice information.
- The information processing apparatus according to claim 15, wherein the feedback information is information prompting the first user to change at least one of an utterance speed or an utterance break.
- The information processing apparatus according to claim 15, wherein the feedback information is information prompting the second user to read the text information displayed on the display device.
- The information processing apparatus according to claim 1, further including a voice recognition unit that converts the voice information of the first user into the text information, wherein the voice recognition unit is provided inside the information processing apparatus or on a server connected via the Internet.
- An information processing method for an information processing apparatus, the method including, by the information processing apparatus:
a voice acquisition step of acquiring voice information of a first user input to a voice input device; and
a display control step of controlling display of text information corresponding to the acquired voice information on a display device for a second user,
wherein the display control step performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
- An information processing system including:
a voice input device that acquires voice information of a first user;
a display control device that controls display of text information corresponding to the acquired voice information; and
a display device that displays the text information for a second user under control from the display control device,
wherein the display control device performs control related to a display amount of the text information based on at least one of the display amount of the text information on the display device or an input amount of the voice information input from the voice input device.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018554906A JP6950708B2 (en) | 2016-12-05 | 2017-11-21 | Information processing equipment, information processing methods, and information processing systems |
US16/349,731 US11189289B2 (en) | 2016-12-05 | 2017-11-21 | Information processing device, information processing method, and information processing system |
KR1020197014972A KR20190091265A (en) | 2016-12-05 | 2017-11-21 | Information processing apparatus, information processing method, and information processing system |
DE112017006145.8T DE112017006145T5 (en) | 2016-12-05 | 2017-11-21 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662430000P | 2016-12-05 | 2016-12-05 | |
US62/430,000 | 2016-12-05 | ||
JP2017-074369 | 2017-04-04 | ||
JP2017074369 | 2017-04-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018105373A1 true WO2018105373A1 (en) | 2018-06-14 |
Family
ID=62491200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/041758 WO2018105373A1 (en) | 2016-12-05 | 2017-11-21 | Information processing device, information processing method, and information processing system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018105373A1 (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09127459A (en) * | 1995-11-02 | 1997-05-16 | Canon Inc | Display device provided with gaze detection system |
US6172685B1 (en) * | 1997-11-24 | 2001-01-09 | Intel Corporation | Method and apparatus for increasing the amount and utility of displayed information |
JP2008097104A (en) * | 2006-10-06 | 2008-04-24 | Sharp Corp | Device for exchanging message information and its operation method |
JP2013235556A (en) * | 2012-05-07 | 2013-11-21 | Lg Electronics Inc | Method for displaying text associated with audio file, and electronic device implementing the same |
JP2014164692A (en) * | 2013-02-27 | 2014-09-08 | Yahoo Japan Corp | Document display device, document display method and document display program |
JP2015069600A (en) * | 2013-09-30 | 2015-04-13 | 株式会社東芝 | Voice translation system, method, and program |
WO2016103415A1 (en) * | 2014-12-25 | 2016-06-30 | 日立マクセル株式会社 | Head-mounted display system and operating method for head-mounted display device |
2017-11-21: WO PCT/JP2017/041758 patent/WO2018105373A1/en, active, Application Filing
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020039014A (en) * | 2018-08-31 | 2020-03-12 | 株式会社コロプラ | Program, information processing device, and method |
JP2020126195A (en) * | 2019-02-06 | 2020-08-20 | トヨタ自動車株式会社 | Voice interactive device, control device for voice interactive device and control program |
JP7120060B2 (en) | 2019-02-06 | 2022-08-17 | トヨタ自動車株式会社 | VOICE DIALOGUE DEVICE, CONTROL DEVICE AND CONTROL PROGRAM FOR VOICE DIALOGUE DEVICE |
JP2022056592A (en) * | 2020-09-30 | 2022-04-11 | 本田技研工業株式会社 | Conversation support device, conversation support system, conversation support method, and program |
JP7369110B2 (en) | 2020-09-30 | 2023-10-25 | 本田技研工業株式会社 | Conversation support device, conversation support system, conversation support method and program |
WO2022270456A1 (en) * | 2021-06-21 | 2022-12-29 | ピクシーダストテクノロジーズ株式会社 | Display control device, display control method, and program |
JP7517366B2 (en) | 2021-08-16 | 2024-07-17 | 株式会社リコー | Voice recording management system, voice recording management device, voice recording management method and program |
WO2024150633A1 (en) * | 2023-01-13 | 2024-07-18 | ソニーグループ株式会社 | Information processing device, information processing method and information processing program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018105373A1 (en) | Information processing device, information processing method, and information processing system | |
JP6710740B2 (en) | Providing suggested voice-based action queries | |
CN106463114B (en) | Information processing apparatus, control method, and program storage unit | |
US10592095B2 (en) | Instantaneous speaking of content on touch devices | |
US20210193146A1 (en) | Multi-modal interaction between users, automated assistants, and other computing services | |
WO2016103988A1 (en) | Information processing device, information processing method, and program | |
US11462213B2 (en) | Information processing apparatus, information processing method, and program | |
US11200893B2 (en) | Multi-modal interaction between users, automated assistants, and other computing services | |
US20120260176A1 (en) | Gesture-activated input using audio recognition | |
KR102193029B1 (en) | Display apparatus and method for performing videotelephony using the same | |
WO2019107145A1 (en) | Information processing device and information processing method | |
WO2016152200A1 (en) | Information processing system and information processing method | |
JP6950708B2 (en) | Information processing equipment, information processing methods, and information processing systems | |
WO2017175442A1 (en) | Information processing device and information processing method | |
US12125486B2 (en) | Multi-modal interaction between users, automated assistants, and other computing services | |
KR20140111574A (en) | Apparatus and method for performing an action according to an audio command | |
CN117971154A (en) | Multimodal response | |
WO2015156011A1 (en) | Information processing device, information processing method, and program | |
US11150923B2 (en) | Electronic apparatus and method for providing manual thereof | |
US11430429B2 (en) | Information processing apparatus and information processing method | |
JPWO2020116001A1 (en) | Information processing device and information processing method | |
KR101508444B1 (en) | Display device and method for executing hyperlink using the same | |
JP5613102B2 (en) | CONFERENCE DEVICE, CONFERENCE METHOD, AND CONFERENCE PROGRAM | |
JP2019179081A (en) | Conference support device, conference support control method, and program | |
WO2020158218A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17879093, Country of ref document: EP, Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2018554906, Country of ref document: JP, Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 20197014972, Country of ref document: KR, Kind code of ref document: A |
122 | Ep: pct application non-entry in european phase | Ref document number: 17879093, Country of ref document: EP, Kind code of ref document: A1 |