WO2024150703A1 - Information processing system, information processing method, and method for generating learning model - Google Patents
- Publication number
- WO2024150703A1 (PCT/JP2023/047268)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
Definitions
- This disclosure relates to an information processing system, an information processing method, and a method for generating a learning model.
- Peak flow meters are inexpensive, easy-to-use devices approved by medical insurance that can measure peak flow values. Peak flow values are the maximum instantaneous speed of airflow when breathing out with all one's might, and are a numerical value that makes it possible to objectively grasp the state of asthma. They are used by doctors as reference information for confirming treatment plans and diagnosis, and by patients as an indicator for daily management. Patients record the measured peak flow values, as well as the state of their daily life, such as the occurrence of attacks and medication status, in an asthma diary.
- The information processing system includes a management index estimation unit that estimates a management index value related to a user's medical condition based on the user's voice information.
- In the information processing method, a computer estimates a management index value related to a user's medical condition based on the user's voice information.
- FIG. 1 is a diagram illustrating a configuration example of an information processing system according to an embodiment of the present disclosure.
- FIG. 2 is a diagram illustrating an example of a configuration of a portion of an information processing device according to an embodiment of the present disclosure.
- FIG. 3 is a flowchart illustrating an example of a management index estimation process according to an embodiment of the present disclosure.
- FIG. 4 is a diagram for explaining an example of a management index estimation process according to an embodiment of the present disclosure.
- FIG. 5 is a flowchart illustrating an example of a generation process of a management index estimation model according to an embodiment of the present disclosure.
- FIG. 6 is a diagram for explaining an example of a generation process of a management index estimation model according to an embodiment of the present disclosure.
- FIG. 7 is a flowchart illustrating an example of a response process according to an embodiment of the present disclosure.
- FIGS. 8 to 11 are diagrams illustrating configuration examples of information processing systems according to modifications of the embodiment.
- Each of the following embodiments can be implemented independently. However, at least a portion of each embodiment may be implemented in appropriate combination with at least a portion of the other embodiments. These embodiments may include novel features that differ from one another; thus, each embodiment may address a different purpose or problem and may provide different effects.
- Embodiment
- <1-1. Example of information processing system configuration> A configuration example of an information processing system 1 according to the present embodiment will be described with reference to Fig. 1.
- Fig. 1 is a diagram showing a configuration example of an information processing system 1 according to the present embodiment.
- the information processing system 1 functions as a voice dialogue system that supports disease treatment.
- the information processing system 1 includes a sound input unit 10, a biometric information detection unit 20, a sound output unit 30, a user terminal 40, and an information processing device 50.
- Various information is transmitted and received between the sound input unit 10, the biometric information detection unit 20, the sound output unit 30, the user terminal 40, and the information processing device 50. This transmission and reception is performed via wireless and/or wired communication networks, wiring, etc.
- the sound input unit 10 detects sounds such as voice and inputs them to the information processing device 50.
- the sound input unit 10 detects user speech and inputs it to the information processing device 50.
- a microphone is used as the sound input unit 10.
- User utterances are voices that a user speaks to the information processing system 1 to obtain a response. For example, a user may say, "What's the weather going to be like in Tokyo tomorrow?" or "What are your plans for today?" Voice information related to user utterances is an example of user voice information.
- the biometric information detection unit 20 detects the user's biometric information and inputs it to the information processing device 50.
- a wearable device is used as the biometric information detection unit 20.
- wearable devices such as wristband type, neckband type, and earphone type.
- User biometric information is biometric information obtained from a user. This user biometric information is collected implicitly when the user gives permission to the information processing system 1 to collect the information in advance.
- the user biometric information includes heart rate, sleep state, amount of exercise, pulse, blood pressure, blood flow, etc., collected by the biometric information detection unit 20 worn by the user.
- the sound output unit 30 outputs sounds such as voice.
- the sound output unit 30 outputs sounds based on response information or the like.
- a speaker such as a smart speaker is used as the sound output unit 30.
- the user terminal 40 is a terminal for a user.
- the user terminal 40 presents various information to the user through displays, sounds, etc.
- An example of the user terminal 40 is a smartphone.
- the information processing device 50 has a management index estimation unit 51, a management index database 52, a voice recognition unit 53, a semantic analysis unit 54, a response generation database 55, a response generation unit 56, and a response control unit 57.
- The management index estimation unit 51 estimates a management index value related to the user's medical condition by analyzing the user's speech and the user's biometric information, and outputs management index information related to the management index value. Note that if the management index estimation unit 51 is unable to obtain user biometric information, it may estimate the management index value by analyzing only the user's speech.
- the management index value is an objective numerical value for understanding and managing the user's medical condition. Furthermore, the management index information is information that includes the management index value estimated by the management index estimation unit 51.
- the management index database 52 is a database that records the management index information (user's management index value) output from the management index estimation unit 51.
- management index values for diseases such as asthma include forced vital capacity (FVC), forced expiratory volume in one second (FEV1), rate of expiratory volume in one second (FEV1%), predicted forced expiratory volume in one second (%FEV1), and peak flow (PEF), all of which are measured using a spirometer. Any one or more of these may be used as the management index value.
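The derived indices above are related by standard spirometry formulas: FEV1% is the ratio of FEV1 to FVC, and %FEV1 is the measured FEV1 as a percentage of the predicted value. A quick illustration (the sample volumes are hypothetical, not values from the disclosure):

```python
def fev1_percent(fev1_l: float, fvc_l: float) -> float:
    """FEV1%: forced expiratory volume in one second as a percentage of FVC."""
    return 100.0 * fev1_l / fvc_l

def percent_fev1(measured_fev1_l: float, predicted_fev1_l: float) -> float:
    """%FEV1: measured FEV1 as a percentage of the predicted FEV1."""
    return 100.0 * measured_fev1_l / predicted_fev1_l

# Hypothetical example: FVC 4.0 L, FEV1 2.8 L, predicted FEV1 3.5 L
print(round(fev1_percent(2.8, 4.0), 1))   # 70.0
print(round(percent_fev1(2.8, 3.5), 1))   # 80.0
```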
- the voice recognition unit 53 converts the voice spoken by the user into a spoken string.
- the spoken string is a character string spoken by the user.
- the semantic analysis unit 54 analyzes the spoken string generated by the voice recognition unit 53 to generate the first response generation information required by the response generation unit 56 to generate response information.
- the first response generation information is information that is formed by analyzing the intention of the user's utterance by the semantic analysis unit 54 so that the response generation unit 56 can generate response information.
- the first response generation information for a user utterance of "What's the weather in Tokyo today?" is "Subject: weather, date and time: today, location: Tokyo", etc.
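As a rough illustration of how an utterance might be turned into such key-value information, here is a toy keyword-matching sketch. The disclosure does not specify the semantic analysis method; the keyword tables below are invented for illustration only:

```python
def analyze_utterance(text: str) -> dict:
    """Naive keyword matcher producing first-response-generation information
    of the form {subject, datetime, location} from a user utterance."""
    info = {}
    lowered = text.lower()
    # Hypothetical subject keywords; a real system would use an NLU model.
    for word, subject in {"weather": "weather", "plans": "schedule"}.items():
        if word in lowered:
            info["subject"] = subject
            break
    for when in ("today", "tomorrow"):
        if when in lowered:
            info["datetime"] = when
            break
    for place in ("Tokyo", "Osaka"):        # hypothetical location list
        if place in text:
            info["location"] = place
            break
    return info

print(analyze_utterance("What's the weather in Tokyo today?"))
# {'subject': 'weather', 'datetime': 'today', 'location': 'Tokyo'}
```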
- the response generation database 55 is a database that records the second response generation information required by the response generation unit 56 to generate response information.
- the response generation database 55 stores in advance the information required to generate response information.
- the second response generation information is information that is formed so that the response generation unit 56 can generate response information together with the first response generation information.
- the second response generation information includes environmental information about the user, such as weather forecasts and air pollution information, as well as the user's schedule information.
- Environmental information about the user includes, for example, environmental information about the user's living area, including the user's home, workplace, shopping destinations, etc., but may also include environmental information about the user's travel destinations, etc.
- the user's schedule information includes information about the user's plans, such as the content of the errand, date and time, and location.
- the response generation unit 56 generates response information for the user from the first response generation information input from the semantic analysis unit 54, the management index information input from the management index database 52, and the second response generation information input from the response generation database 55.
- the response information includes various types of information for responding to the user.
- the various types of information include, for example, information according to the content of the user's utterance, information about the user's current condition, and reference information for preventing the user's condition from worsening in the future. For example, if the user utterance is "What's the weather in Tokyo today?", the response information may be "It's sunny today. Your asthma has been getting worse since yesterday, so be careful.”
- the response control unit 57 performs a system response based on the response information input from the response generation unit 56, providing information tailored to the device used by the user.
- the response control unit 57 responds by voice, and if the device used by the user is a user terminal 40 such as a smart watch or smartphone, the response control unit 57 provides a notification by text.
- Each of the functional units mentioned above, such as the management index estimation unit 51, voice recognition unit 53, semantic analysis unit 54, response generation unit 56, and response control unit 57, may be implemented in hardware, in software, or in a combination of the two; the configuration is not particularly limited.
- each of the above-mentioned functional units may be realized by a computer, such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit), executing a program pre-stored in ROM using a RAM or the like as a working area.
- each of the functional units may be realized with an integrated circuit, such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field-Programmable Gate Array).
- ASIC Application Specific Integrated Circuit
- FPGA Field-Programmable Gate Array
- Fig. 2 is a diagram showing an example of the configuration of a portion of the information processing device 50 according to this embodiment.
- the management index estimation unit 51 estimates the management index value based on the management index estimation model 58a.
- This management index estimation model 58a is generated by the model generation unit 58.
- the model generation unit 58 generates a management index estimation model 58a by machine learning, for example, based on the speech voice database 58b and the management index measurement value database 58c.
- the management index estimation model 58a is, for example, a model that performs regression analysis on data collected in advance.
- Such a model generation unit 58 may be provided in the information processing device 50, or may be provided in a device other than the information processing device 50.
- Fig. 3 is a flowchart showing an example of the management index estimation process according to the present embodiment.
- Fig. 4 is a diagram for explaining an example of the management index estimation process according to the present embodiment.
- In step S11, the management index estimation unit 51 acquires the user's speech input from the sound input unit 10.
- In step S12, the management index estimation unit 51 calculates acoustic features from the acquired user's speech.
- In step S13, the management index estimation unit 51 estimates a management index value from the aforementioned acoustic features using the management index estimation model 58a.
- In step S14, the management index estimation unit 51 outputs the estimated management index value, i.e., the management index estimated value, to the management index database 52, and ends the process.
- the management index estimation unit 51 estimates the management index value from the user's speech (the user utterance), in order to reduce the measurement load on the user and to implicitly grasp the user's condition while the user is using the information processing system 1.
- the management index estimation unit 51 processes the user's speech through an acoustic feature calculation process to calculate the acoustic feature.
- An acoustic feature is a numerical value (vector) that represents the characteristics of a sound.
- acoustic features include MFCC (Mel Frequency Cepstral Coefficients), zero cross, spectral centroid, spectral flatness, and spectral rolloff.
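Two of the features named above, the zero-crossing rate and the spectral centroid, can be computed from a raw sample frame as in the following minimal pure-Python sketch. A real system would use an optimized FFT library; the frame length and the 1 kHz test tone are arbitrary choices for illustration:

```python
import cmath
import math

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency (Hz) of the frame's spectrum,
    via a naive DFT over the lower half of the bins (real-valued input)."""
    n = len(frame)
    mags, freqs = [], []
    for k in range(n // 2):
        s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
        freqs.append(k * sample_rate / n)
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# A pure 1 kHz sine sampled at 8 kHz: the centroid should sit near 1000 Hz.
sr = 8000
frame = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(256)]
print(round(spectral_centroid(frame, sr)))  # close to 1000
```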
- The management index estimation unit 51 estimates the management index value by performing a management index value estimation process using the above-mentioned acoustic features and the management index estimation model 58a obtained by prior learning. This yields the management index value, i.e., the management index estimated value.
- The management index estimation process is a process of calculating a management index estimated value by regression using, for example, the management index estimation model 58a.
- The management index estimation model 58a is, for example, a model that performs regression analysis on data collected in advance, using acoustic features as explanatory variables and management index values as objective variables. This management index estimation model 58a is generated in advance by machine learning or the like.
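A minimal sketch of this regression setup, reduced to a single explanatory feature for clarity. The feature values and peak-flow measurements below are invented toy data, not values from the disclosure:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with one explanatory variable."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Toy training pairs: one acoustic feature vs. measured peak flow (L/min).
features  = [0.10, 0.15, 0.20, 0.25, 0.30]
peak_flow = [520.0, 480.0, 440.0, 400.0, 360.0]
a, b = fit_linear(features, peak_flow)

def estimate_management_index(feature):
    """Estimate the management index value from one acoustic feature."""
    return a * feature + b

print(round(estimate_management_index(0.18)))  # 456
```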
- Fig. 5 is a flowchart showing an example of the management index estimation model generation process according to the present embodiment.
- Fig. 6 is a diagram for explaining an example of the management index estimation model generation process according to the present embodiment.
- In step S21, the model generation unit 58 acquires the patient's speech data from the speech voice database 58b, and acquires the management index measurement values corresponding to the speech data from the management index measurement value database 58c.
- In step S22, the model generation unit 58 calculates acoustic features from the acquired speech data.
- In step S23, the model generation unit 58 performs model learning to generate a management index estimation model 58a using the calculated acoustic features and management index measurement values.
- In step S24, the model generation unit 58 stores the generated management index estimation model 58a.
- the model generation unit 58 generates the management index estimation model 58a in advance.
- the management index estimation model 58a may be caused to execute re-learning using a user utterance and an estimated management index value corresponding to the user utterance, thereby updating the management index estimation model 58a.
- The model generation unit 58 collects in advance, for learning purposes, pairs of speech data (speech information) from asthma patients with various symptoms and the management index measurement values corresponding to the speech data.
- The management index measurement values corresponding to the speech data are, for example, management index measurement values measured by spirometry when the corresponding speech data is acquired. Speech data for patients with different symptoms, and management index measurement values for each piece of speech data, are prepared in advance.
- The model generation unit 58 calculates acoustic features from the speech data, estimates a management index value from the acoustic features, and performs learning so as to minimize the error between the estimated management index value and the corresponding management index measurement value, thereby generating the management index estimation model 58a.
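The error-minimizing learning described above can be sketched as plain gradient descent on the mean squared error. The disclosure describes regression analysis generally; gradient descent, the one-feature model, and the toy data here are illustrative choices only:

```python
def train_model(features, measurements, lr=0.1, epochs=5000):
    """Fit y = a*x + b by gradient descent, minimizing the mean squared error
    between estimated and measured management index values."""
    a, b = 0.0, 0.0
    n = len(features)
    for _ in range(epochs):
        grad_a = sum(2 * (a * x + b - y) * x
                     for x, y in zip(features, measurements)) / n
        grad_b = sum(2 * (a * x + b - y)
                     for x, y in zip(features, measurements)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Toy pairs: one acoustic feature per utterance vs. the spirometer
# measurement taken at the same time (all values hypothetical).
feats = [1.0, 1.5, 2.0, 2.5, 3.0]
meas  = [520.0, 480.0, 440.0, 400.0, 360.0]
a, b = train_model(feats, meas)
```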
- In this embodiment, the user's speech is analyzed, but if the system can collect them, coughing and breathing sounds can also be analyzed; coughing and breathing sounds are therefore treated as included in the speech.
- If the system can acquire user biometric information, vital signs such as heart rate and respiratory rate, as well as facial images, can also be analyzed to estimate the management index value.
- Examples of acoustic features include 12-dimensional MFCCs (Mel-Frequency Cepstral Coefficients) and statistics such as the median, maximum value, minimum value, standard deviation, skewness, and kurtosis, as well as the mel spectrogram, chroma vector, zero-crossing rate, spectral centroid, spectral flatness, and spectral rolloff.
- The priority order of the management index values is as follows: (1) rate of expiratory volume in one second, (2) forced expiratory volume in one second relative to a predicted value, (3) peak flow, (4) forced vital capacity, (5) forced vital capacity relative to a predicted value, and (6) forced expiratory volume in one second. The smaller the number in parentheses, the higher the priority. This priority order reflects, for example, the importance of each index for understanding the condition.
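Applying this priority order can be as simple as returning the highest-priority index that is actually available for a user. The abbreviation keys and sample values below are assumptions for illustration:

```python
# Priority order from the text above: earlier in the list = higher priority.
PRIORITY = ["FEV1%", "%FEV1", "PEF", "FVC", "%FVC", "FEV1"]

def select_management_index(available: dict):
    """Return (name, value) of the highest-priority index present in
    `available`, or None if none of the known indices is present."""
    for name in PRIORITY:
        if name in available:
            return name, available[name]
    return None

print(select_management_index({"PEF": 430.0, "FVC": 3.9}))  # ('PEF', 430.0)
```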
- Respiratory diseases include asthma (bronchial asthma), but other respiratory diseases include chronic obstructive pulmonary disease (COPD) and pulmonary fibrosis.
- the management index value for chronic obstructive pulmonary disease is, for example, the rate of expiratory volume in one second (FEV1%).
- the management index value for pulmonary fibrosis is, for example, forced vital capacity (FVC).
- Other respiratory diseases also include eosinophilic granulomatosis with polyangiitis (EGPA), sarcoidosis of the lung, collagen disease of the lung, metastatic lung tumor, bronchiectasis, and foreign bodies in the airway.
- the management index value is set appropriately according to the type of disease.
- the management index value (management index estimated value) is estimated by the management index estimation unit 51 and stored in the management index database 52 for each user.
- the management index value is stored for each user utterance and managed for each user.
- Such management index information may be configured to be viewable by the user's doctor, family, etc.
- the user's management index information may be appropriately transmitted to a terminal of the user's doctor, family, etc. in response to access from that terminal.
- Model learning: In this embodiment, linear regression is used as the regression model, but gradient-boosted decision trees, support vector regression, deep learning, and the like may also be used. In this embodiment, the system learns using data collected in advance and estimates the management index value based on the learned model; however, functions may also be introduced for personalizing the model by re-learning with the voice input obtained when the user uses the service, and for improving the accuracy of the model as a whole.
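Under a linear model, such personalization by re-learning could amount to a single stochastic-gradient update per new (feature, measurement) pair collected from the user. A hedged sketch; the learning rate and coefficients are hypothetical, not values from the disclosure:

```python
def personalize(a, b, feature, measured, lr=0.05):
    """One stochastic-gradient step on a single (feature, measurement) pair,
    nudging the shared linear model y = a*x + b toward an individual user."""
    error = a * feature + b - measured      # model's residual on the new pair
    return a - lr * 2 * error * feature, b - lr * 2 * error

# Start from a population model (hypothetical coefficients) and nudge it
# toward one user's own spirometer reading.
a, b = personalize(-80.0, 600.0, 2.0, 430.0)
print(a, b)  # -82.0 599.0
```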
- Fig. 7 is a flowchart showing an example of the response process according to this embodiment.
- In step S31, the response generation unit 56 acquires response generation information (first response generation information) from the semantic analysis unit 54.
- In step S32, the response generation unit 56 refers to the management index database 52 and determines whether the management index value is available.
- If the management index value is available (Yes in step S32), in step S33 the response generation unit 56 refers to the response generation database 55 and determines whether there is available information regarding exacerbation in the response generation database 55.
- If the response generation unit 56 determines in step S32 that the management index value is not available (No in step S32), it generates normal response information in step S36 and ends the process.
- If the response generation unit 56 determines in step S33 that there is no information regarding exacerbation available in the response generation database 55 (No in step S33), in step S35 it generates response information including information regarding the condition, and ends the process.
- the response generating unit 56 generates normal response information when it is unable to acquire a management index value from the management index database 52. For example, when the user utterance is "What's the weather in Tokyo today?", the response generating unit 56 generates the response information "It's sunny today.”
- When the response generating unit 56 can acquire a management index value from the management index database 52, it generates response information including information on the medical condition in addition to normal response information. For example, when the user utterance is "What's the weather in Tokyo today?", the response generating unit 56 generates the response information "It's sunny today. Your current asthma condition is good."
- the response generation unit 56 may generate response information according to changes in the most recent continuous information. For example, when the user utterance is "What's the weather in Tokyo today?", the response generation unit 56 generates response information such as "It's sunny today. Your asthma condition has been worsening since yesterday, so be careful.”
- When the response generating unit 56 can obtain environmental information such as the user's schedule, pollen, and air pollution together with the management index value, it generates response information including information for suppressing exacerbation together with normal response information. For example, when the user utterance is "What's the weather in Tokyo today?", the response generating unit 56 generates response information such as "It's sunny today. Your current asthma condition is good, but there is a lot of pollen scattered, so be careful when going out." Also, when the user utterance is "What's the weather in Tokyo today?", the response generating unit 56 generates response information such as "It's sunny today. You're planning to have dinner from 19:00, but your asthma condition has been getting worse since yesterday, so be careful not to drink too much alcohol."
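The branching of steps S32 through S36 can be sketched as a small function. The threshold of 80 and the message strings below are assumptions for illustration, not values from the disclosure:

```python
def generate_response(base_reply, index_value=None, exacerbation_info=None):
    """Sketch of the response branching: normal reply (S36), reply plus
    condition information (S35), or reply plus condition information and
    exacerbation-suppressing advice."""
    if index_value is None:                       # No in S32 -> S36
        return base_reply
    # Assumed example cutoff: treat 80 or above as a "good" condition.
    condition = ("Your current asthma condition is good."
                 if index_value >= 80
                 else "Your asthma condition has been worsening, so be careful.")
    if exacerbation_info is None:                 # No in S33 -> S35
        return base_reply + " " + condition
    return base_reply + " " + condition + " " + exacerbation_info

print(generate_response("It's sunny today."))
# It's sunny today.
print(generate_response("It's sunny today.", 85,
                        "There is a lot of pollen scattered, so be careful when going out."))
```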
- responses to the user may be provided by notifying a message on the screen.
- a user terminal 40 or the like is used.
- In the above description, the sound input unit 10 is used for inputting voice to the information processing system 1, and the user terminal 40 is used for providing information by displaying a screen; however, the user terminal 40, such as a smartphone, may also be used for inputting voice to the information processing system 1. In that case, the user terminal 40 functions as the sound input unit 10.
- As described above, in this embodiment, the condition of a user who uses the voice dialogue system, i.e., the information processing system 1, is estimated from the user's speech. The system can generate information to prevent the condition from worsening based on the condition estimated from the user's speech, the user's predicted behavior information, and environmental information such as weather, and present it to the user in an easy-to-understand manner. Therefore, the user can measure the condition of the disease unconsciously and without physical burden through voice dialogue with the system, and can receive from the system the information required to prevent the condition from worsening.
- Fig. 8 to Fig. 11 are diagrams showing configuration examples of the information processing systems 1A to 1D according to the modifications of the present embodiment.
- the information processing system 1A includes a wearable device 110, a smartphone 120, and a server 200.
- the wearable device 110 and the smartphone 120 are configured to be able to communicate with the server 200. Data detected by the wearable device 110 is sent directly to the server 200.
- Information processing system 1B, like information processing system 1A described above, includes the wearable device 110, the smartphone 120, and the server 200.
- Wearable device 110 is configured to be able to communicate with smartphone 120
- smartphone 120 is configured to be able to communicate with server 200.
- Data detected by wearable device 110 is first stored in smartphone 120 and then transmitted to server 200.
- the wearable device 110 corresponds to the biometric information detection unit 20 described above
- the smartphone 120 corresponds to the sound input unit 10 and sound output unit 30 described above.
- the smartphone 120 includes the sound input unit 10 and sound output unit 30.
- the server 200 corresponds to the information processing device 50.
- the information processing system 1C includes a wearable device 110, a smartphone 120, a server 200, and a service provider server 300.
- the wearable device 110 and the smartphone 120 are configured to be able to communicate with the server 200.
- the server 200 is configured to be able to communicate with the service provider server 300. Data detected by the wearable device 110 is sent directly to the server 200.
- the device owned by the service provider is shown as a server, namely the service provider server 300.
- the device owned by the service provider does not necessarily have to be a server, and may be an information terminal such as a smartphone, a tablet terminal, a laptop computer, or a desktop computer.
- the information processing system 1 includes a management index estimation unit 51 that estimates a management index value related to the user's medical condition based on the user's voice information (e.g., user utterance). This makes it possible to obtain the management index value from the user's voice information, making it possible to quantitatively grasp the user's medical condition while reducing the burden on the user.
- the management index value may also be a management index value related to the condition of the user's disease when the user's disease is a respiratory disease. This makes it possible to quantitatively grasp the condition of the user's disease when the user's disease is a respiratory disease.
- the management index estimation unit 51 may also estimate the management index value based on the user's voice information and the user's biometric information. This allows the management index value to be determined with high accuracy.
- the management index estimation unit 51 may also calculate acoustic features from the user's voice information and estimate the management index value based on the calculated acoustic features. This allows the management index value to be determined with high accuracy.
- the management index estimation unit 51 may also estimate the management index value using the management index estimation model 58a, which is a learning model. This allows the management index value to be determined with high accuracy.
- the information processing system 1 may further include a model generation unit 58 that generates a management index estimation model 58a. This makes it possible to reliably obtain the management index estimation model 58a.
- the model generation unit 58 may also generate the management index estimation model 58a using voice information for each patient with different symptoms related to the user's disease and the management index measurement value for each voice information. This makes it possible to obtain a highly accurate management index estimation model 58a.
- the model generation unit 58 may also calculate acoustic features from the user's voice information, estimate a management index value from the calculated acoustic features, and perform learning so as to minimize the error between the estimated management index value and the management index measurement value corresponding to that estimate, thereby generating the management index estimation model 58a. This makes it possible to obtain a highly accurate management index estimation model 58a.
- the information processing system 1 may further include a management index database 52 that stores the management index values. This allows the management index values to be managed.
- the information processing system 1 may further include a response generation unit 56 that generates response information regarding the user's medical condition based on the management index value. This allows the user to know the medical condition.
- the response information may also include one or both of information regarding the user's current condition and information regarding the worsening of the user's condition. This allows the user to know the current state of the condition or the worsening of the condition.
- the information regarding the worsening of the user's condition may include information for preventing the worsening of the user's condition. This can help prevent the user's condition from worsening.
- the response generation unit 56 may also generate response information based on the management index value and the first response generation information. This makes it possible to obtain appropriate response information.
- the first response generation information may also include information for generating response information according to the intention of the user's speech regarding the user's voice information. This makes it possible to obtain appropriate response information.
- the information processing system 1 may further include a semantic analysis unit 54 that analyzes the intention of the user's utterance and generates the first response generation information. This makes it possible to obtain appropriate first response generation information.
- the response generation unit 56 may also generate response information based on the management index value, the first response generation information, and the second response generation information that is different from the first response generation information. This makes it possible to obtain appropriate response information.
- the second response generation information may also include one or both of environmental information about the user and schedule information about the user. This makes it possible to obtain appropriate response information.
- the information processing system 1 may further include a response generation database 55 that stores the second response generation information. This allows the second response generation information to be managed.
- each configuration and each process according to the above-mentioned embodiment may be implemented in various different forms other than the above-mentioned embodiment.
- the configurations and processes may take various forms and are not limited to the examples described above.
- all or part of the processes described as being performed automatically can be performed manually, or all or part of the processes described as being performed manually can be performed automatically by a known method.
- the configurations, processing procedures, specific names, or information including various data and parameters shown in the above documents and drawings can be changed arbitrarily unless otherwise specified.
- the various information shown in each figure is not limited to the information shown in the figure.
- each configuration and each process of the above-mentioned embodiment does not necessarily have to be physically configured as illustrated.
- the specific form of distribution and integration of each device is not limited to that illustrated, and all or part of it can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, etc.
- FIG. 12 is a diagram showing a hardware configuration example according to this embodiment.
- computer 500 has a CPU 510, RAM 520, ROM (Read Only Memory) 530, HDD (Hard Disk Drive) 540, a communication interface 550, and an input/output interface 560. Each part of computer 500 is connected by a bus 570.
- the CPU 510 operates based on the programs stored in the ROM 530 or the HDD 540 and controls each part. For example, the CPU 510 loads the programs stored in the ROM 530 or the HDD 540 into the RAM 520 and executes processes corresponding to the various programs.
- the ROM 530 stores boot programs such as the BIOS (Basic Input Output System) that is executed by the CPU 510 when the computer 500 starts up, as well as programs that depend on the hardware of the computer 500.
- HDD 540 is a computer-readable recording medium that non-transitorily stores the programs executed by CPU 510 and the data used by those programs.
- HDD 540 is a recording medium that records the information processing program related to the present disclosure, which is an example of program data 541.
- the communication interface 550 is an interface for connecting the computer 500 to an external network 580 (the Internet as an example).
- the CPU 510 receives data from other devices and transmits data generated by the CPU 510 to other devices via the communication interface 550.
- the input/output interface 560 is an interface for connecting the input/output device 590 and the computer 500.
- the CPU 510 receives data from an input device such as a keyboard or a mouse via the input/output interface 560.
- the CPU 510 also transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 560.
- the input/output interface 560 may also function as a media interface that reads programs and the like recorded on a specific recording medium.
- media that can be used include optical recording media such as DVDs (Digital Versatile Discs) and PDs (Phase change rewritable Disks), magneto-optical recording media such as MOs (Magneto-Optical disks), tape media, magnetic recording media, and semiconductor memories.
- the CPU 510 of the computer 500 executes the information processing program loaded on the RAM 520 to realize all or part of the functions of the management index estimation unit 51, the voice recognition unit 53, the semantic analysis unit 54, the response generation unit 56, the response control unit 57, etc.
- the information processing program and data according to this embodiment are stored in the HDD 540. Note that the CPU 510 reads and executes the program data 541 from the HDD 540, but as another example, these programs may be obtained from other devices via the external network 580.
- The present technology can also be configured as follows. (1) An information processing system including a management index estimation unit that estimates a management index value related to a medical condition of a user based on voice information of the user. (2)
- the management index value is a management index value related to a medical condition when the disease of the user is a respiratory system disease.
- the management index estimation unit estimates the management index value based on the voice information and the biometric information of the user.
- the management index estimation unit calculates an acoustic feature from the voice information, and estimates the management index value based on the calculated acoustic feature.
- the information processing system according to any one of (1) to (3).
- the management index estimation unit estimates the management index value using a management index estimation model which is a learning model;
- The information processing system according to (5) above, further comprising a model generation unit that generates the management index estimation model.
- the model generation unit generates the management index estimation model using voice information for each patient having a different symptom related to the user's disease and a management index measurement value for each of the voice information.
- the model generation unit calculates acoustic features from the voice information, estimates a management index value from the calculated acoustic features, performs learning so as to minimize an error between the estimated management index value and the management index measurement value corresponding to that estimate, and generates the management index estimation model.
- a management index database for storing the management index values is further provided.
- a response generating unit that generates response information regarding the medical condition of the user based on the management index value.
- the response information includes one or both of information regarding the user's current medical condition and information regarding the worsening of the user's medical condition.
- the information regarding the worsening of the user's condition includes information for suppressing the worsening of the user's condition.
- the response generation unit generates the response information based on the management index value and first response generation information.
- the first response generation information includes information for generating the response information according to an intention of a user's utterance regarding the voice information.
- the information processing system according to (13) above.
- the system further includes a semantic analysis unit that analyzes the intention of the user utterance to generate the first response generation information.
- the information processing system according to (14) above.
- the response generation unit generates the response information based on the management index value, the first response generation information, and second response generation information different from the first response generation information.
- the information processing system according to any one of (13) to (15).
- the second response generation information includes one or both of environmental information about the user and schedule information of the user.
- the information processing system according to (16) above.
Abstract
An information processing system according to one embodiment of the present disclosure comprises a management index estimation unit that, on the basis of voice information of a user, estimates a management index value relating to the condition of a disease of the user.
Description
This disclosure relates to an information processing system, an information processing method, and a method for generating a learning model.
For diseases such as asthma, which require continual visits to the hospital to control the condition, it is important to correctly understand the condition in the patient's daily life and take appropriate measures to prevent the condition from worsening. Currently, peak flow meters and asthma diaries are used by asthma patients to understand their condition on a daily basis. Typically, asthma patients measure their condition with a peak flow meter several times a day, recording each measurement in an asthma diary to understand their condition in their daily lives.
Peak flow meters are inexpensive, easy-to-use devices approved by medical insurance that can measure peak flow values. Peak flow values are the maximum instantaneous speed of airflow when breathing out with all one's might, and are a numerical value that makes it possible to objectively grasp the state of asthma. They are used by doctors as reference information for confirming treatment plans and diagnosis, and by patients as an indicator for daily management. Patients record the measured peak flow values, as well as the state of their daily life, such as the occurrence of attacks and medication status, in an asthma diary.
However, because the above-mentioned methods of managing disease in daily life place a burden on patients, patients are often unable to continue managing their condition, making it difficult for them to correctly know their condition on a daily basis. For example, when measuring peak flow values with a peak flow meter, patients must breathe out with all their might, which places a great physical burden on the patient. In addition, many patients are unable to continue measuring every day due to the effort required or forgetting to measure, and it is often not possible to continuously record the necessary data.
In this disclosure, we propose an information processing system, information processing method, and learning model generation method that can quantitatively grasp a disease condition while minimizing the burden on the user.
The information processing system according to one embodiment of the present disclosure includes a management index estimation unit that estimates a management index value related to a user's medical condition based on the user's voice information.
In one embodiment of the information processing method disclosed herein, a computer estimates a management index value related to a user's medical condition based on the user's voice information.
In one embodiment of the present disclosure, a method for generating a learning model involves a computer generating a management index estimation model for estimating a management index value related to a user's medical condition based on the user's voice information.
Below, an embodiment of the present disclosure will be described in detail with reference to the drawings. The embodiment includes examples and modified examples. Note that the embodiment does not limit the systems, devices, methods, etc. related to the present disclosure. Furthermore, in the following embodiments, essentially the same parts are designated by the same reference numerals, and duplicated explanations will be omitted.
The following one or more embodiments can be implemented independently. However, at least a portion of the following embodiments may be implemented in appropriate combination with at least a portion of the other embodiments. These embodiments may include novel features that are different from one another. Thus, each embodiment may contribute to solving a different purpose or problem, and may provide different effects.
The present disclosure will be described in the following order.
1. Embodiment
1-1. Example configuration of the information processing system
1-2. Example configuration of part of the information processing device
1-3. Example of the management index estimation process
1-4. Example of the management index estimation model generation process
1-5. Example of the response process
1-6. Modifications of the information processing system
1-7. Effects
2. Other embodiments
3. Example hardware configuration
4. Supplementary notes
<1. Embodiment>
<1-1. Example configuration of the information processing system>
A configuration example of the information processing system 1 according to the present embodiment will be described with reference to Fig. 1. Fig. 1 is a diagram showing a configuration example of the information processing system 1 according to the present embodiment. In the example of Fig. 1, the information processing system 1 functions as a voice dialogue system that supports disease treatment.
As shown in FIG. 1, the information processing system 1 includes a sound input unit 10, a biometric information detection unit 20, a sound output unit 30, a user terminal 40, and an information processing device 50. Various information is transmitted and received between the sound input unit 10, the biometric information detection unit 20, the sound output unit 30, the user terminal 40, and the information processing device 50. This transmission and reception is performed via wireless and/or wired communication networks, wiring, etc.
The sound input unit 10 detects sounds such as voice and inputs them to the information processing device 50. For example, the sound input unit 10 detects user speech and inputs it to the information processing device 50. For example, a microphone is used as the sound input unit 10.
User utterances are voices that a user speaks to the information processing system 1 to obtain a response. For example, a user may say, "What's the weather going to be like in Tokyo tomorrow?" or "What are your plans for today?" Voice information related to user utterances is an example of user voice information.
The biometric information detection unit 20 detects the user's biometric information and inputs it to the information processing device 50. For example, a wearable device is used as the biometric information detection unit 20. There are various types of wearable devices, such as wristband type, neckband type, and earphone type.
User biometric information is biometric information obtained from a user. This user biometric information is collected implicitly when the user gives permission to the information processing system 1 to collect the information in advance. For example, the user biometric information includes heart rate, sleep state, amount of exercise, pulse, blood pressure, blood flow, etc., collected by the biometric information detection unit 20 worn by the user.
The sound output unit 30 outputs sounds such as voice. For example, the sound output unit 30 outputs sounds based on response information or the like. For example, a speaker such as a smart speaker is used as the sound output unit 30.
The user terminal 40 is a terminal for a user. The user terminal 40 presents various information to the user through displays, sounds, etc. An example of the user terminal 40 is a smartphone.
The information processing device 50 has a management index estimation unit 51, a management index database 52, a voice recognition unit 53, a semantic analysis unit 54, a response generation database 55, a response generation unit 56, and a response control unit 57.
The management index estimation unit 51 estimates a management index value related to the user's medical condition by analyzing the user's speech and the user's biometric information, and outputs management index information related to that value. Note that if the management index estimation unit 51 is unable to obtain user biometric information, it may estimate the management index value by analyzing only the user's speech.
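As an illustrative sketch of this estimation step — not the actual implementation, which this disclosure leaves to a learned model — a few hand-built acoustic features could be computed from a speech waveform and mapped to an index value by a hypothetical linear model. The feature set, weights, and bias below are all assumptions, not taken from this disclosure:

```python
import math
import random

def acoustic_features(waveform):
    """Compute two simple acoustic features from a mono waveform (list of floats)."""
    n = len(waveform)
    rms = math.sqrt(sum(x * x for x in waveform) / n)             # loudness proxy
    zcr = sum(1 for a, b in zip(waveform, waveform[1:])
              if (a >= 0) != (b >= 0)) / n                        # zero-crossing rate
    return [rms, zcr]

# Hypothetical linear mapping from features to a peak-flow-like index (L/min).
WEIGHTS = [120.0, -30.0]
BIAS = 300.0

def estimate_index(waveform):
    feats = acoustic_features(waveform)
    return sum(w * f for w, f in zip(WEIGHTS, feats)) + BIAS

# One second of synthetic 150 Hz "voice" at 16 kHz, with a little noise.
random.seed(0)
voice = [math.sin(2 * math.pi * 150 * t / 16000) + 0.1 * random.gauss(0, 1)
         for t in range(16000)]
print(estimate_index(voice))
```

A real system would use richer features (e.g. spectral statistics, as suggested by the spectral-feature references in this document) and a trained regression model rather than fixed weights.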
The management index value is an objective numerical value for understanding and managing the user's medical condition. Furthermore, the management index information is information that includes the management index value estimated by the management index estimation unit 51.
The management index database 52 is a database that records the management index information (user's management index value) output from the management index estimation unit 51.
Here, examples of management index values for diseases such as asthma include forced vital capacity (FVC), forced expiratory volume in one second (FEV1), rate of expiratory volume in one second (FEV1%), predicted forced expiratory volume in one second (%FEV1), and peak flow (PEF), all of which are measured using a spirometer. Any one or more of these may be used as the management index value.
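For reference, the ratio-type indices listed above are simple arithmetic over spirometry measurements. The sample values below are made-up illustrations, not data from this disclosure:

```python
def fev1_percent(fev1_l, fvc_l):
    """FEV1%: forced expiratory volume in one second as a percentage of FVC."""
    return 100.0 * fev1_l / fvc_l

def percent_predicted_fev1(fev1_l, predicted_fev1_l):
    """%FEV1: measured FEV1 relative to the predicted value for the patient."""
    return 100.0 * fev1_l / predicted_fev1_l

# Made-up example: FVC 3.5 L, measured FEV1 2.8 L, predicted FEV1 3.2 L.
print(round(fev1_percent(2.8, 3.5), 1))            # 80.0
print(round(percent_predicted_fev1(2.8, 3.2), 1))  # 87.5
```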
The voice recognition unit 53 converts the voice spoken by the user into a spoken string. The spoken string is a character string spoken by the user.
The semantic analysis unit 54 analyzes the spoken string generated by the speech recognition unit 53 to generate the first response generation information required by the response generation unit 56 to generate response information.
The first response generation information is information that is formed by analyzing the intention of the user's utterance by the semantic analysis unit 54 so that the response generation unit 56 can generate response information. For example, the first response generation information for a user utterance of "What's the weather in Tokyo today?" is "Subject: weather, date and time: today, location: Tokyo", etc.
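The slot structure in this example could be represented as a plain mapping. The stub below illustrates only the shape of the output; a real semantic analysis unit would use a trained language-understanding model rather than keyword matching, and the slot names are hypothetical:

```python
def analyze(utterance):
    """Toy semantic analyzer: extract intent slots from an utterance."""
    slots = {}
    if "weather" in utterance:
        slots["subject"] = "weather"
    if "today" in utterance:
        slots["datetime"] = "today"
    if "Tokyo" in utterance:
        slots["location"] = "Tokyo"
    return slots

print(analyze("What's the weather in Tokyo today?"))
# {'subject': 'weather', 'datetime': 'today', 'location': 'Tokyo'}
```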
The response generation database 55 is a database that records the second response generation information required by the response generation unit 56 to generate response information. The response generation database 55 stores in advance the information required to generate response information.
The second response generation information is information that is formed so that the response generation unit 56 can generate response information together with the first response generation information. For example, the second response generation information includes environmental information about the user, such as weather forecasts and air pollution information, as well as the user's schedule information. Environmental information about the user includes, for example, environmental information about the user's living area, including the user's home, workplace, shopping destinations, etc., but may also include environmental information about the user's travel destinations, etc. The user's schedule information includes information about the user's plans, such as the content of the errand, date and time, and location.
The response generation unit 56 generates response information for the user from the first response generation information input from the semantic analysis unit 54, the management index information input from the management index database 52, and the second response generation information input from the response generation database 55.
The response information includes various types of information for responding to the user. The various types of information include, for example, information according to the content of the user's utterance, information about the user's current condition, and reference information for preventing the user's condition from worsening in the future. For example, if the user utterance is "What's the weather in Tokyo today?", the response information may be "It's sunny today. Your asthma has been getting worse since yesterday, so be careful."
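Combining the three inputs into a reply might look like the following sketch, which mirrors the example response in the text. The slot names, data shapes, and wording are assumptions:

```python
def generate_response(slots, index_history, environment):
    """Build a reply from intent slots, recent management index values,
    and environmental information (illustrative only)."""
    parts = []
    if slots.get("subject") == "weather":
        parts.append("The weather in {} {} is {}.".format(
            slots["location"], slots["datetime"], environment["weather"]))
    # Add a caution when the management index is trending downward.
    if len(index_history) >= 2 and index_history[-1] < index_history[-2]:
        parts.append("Your asthma has been trending worse since yesterday, "
                     "so please be careful.")
    return " ".join(parts)

slots = {"subject": "weather", "datetime": "today", "location": "Tokyo"}
print(generate_response(slots, [420.0, 395.0], {"weather": "sunny"}))
```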
The response control unit 57 performs a system response based on the response information input from the response generation unit 56, providing information tailored to the device used by the user.
For example, if the device used by the user is a sound input unit 10 such as a smart speaker, the response control unit 57 responds by voice, and if the device used by the user is a user terminal 40 such as a smart watch or smartphone, the response control unit 57 provides a notification by text.
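A minimal dispatch for this device-dependent delivery might look as follows; the device identifiers and output prefixes are hypothetical:

```python
def deliver(response_text, device):
    """Choose an output modality based on the user's device (sketch)."""
    if device == "smart_speaker":
        return "[TTS] " + response_text           # spoken via the sound output unit
    if device in ("smartphone", "smartwatch"):
        return "[NOTIFICATION] " + response_text  # text notification on the terminal
    return "[TEXT] " + response_text

print(deliver("It's sunny today.", "smart_speaker"))
print(deliver("It's sunny today.", "smartwatch"))
```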
Here, each of the functional units described above, such as the management index estimation unit 51, speech recognition unit 53, semantic analysis unit 54, response generation unit 56, and response control unit 57, may be configured with hardware, software, or both. Their configuration is not particularly limited. For example, each functional unit may be realized by a computer such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit) executing a program pre-stored in ROM, using RAM or the like as a working area. Each functional unit may also be realized with an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field-Programmable Gate Array).
<1-2. Example of a Part of the Configuration of an Information Processing Device>
An example of the configuration of a part of the information processing device 50 according to this embodiment will be described with reference to Fig. 2. Fig. 2 is a diagram showing an example of the configuration of a part of the information processing device 50 according to this embodiment.
As shown in FIG. 2, the management index estimation unit 51 estimates the management index value based on the management index estimation model 58a. This management index estimation model 58a is generated by the model generation unit 58.
The model generation unit 58 generates a management index estimation model 58a by machine learning, for example, based on the speech database 58b and the management index measurement value database 58c. The management index estimation model 58a is, for example, a model that performs regression analysis on data collected in advance.
Such a model generation unit 58 may be provided in the information processing device 50, or may be provided in a device other than the information processing device 50.
<1-3. Example of management index estimation process>
An example of the management index estimation process according to the present embodiment will be described with reference to Fig. 3 and Fig. 4. Fig. 3 is a flowchart showing an example of the management index estimation process according to the present embodiment. Fig. 4 is a diagram for explaining an example of the management index estimation process according to the present embodiment.
As shown in FIG. 3, in step S11, the management index estimation unit 51 acquires the user utterance input from the sound input unit 10. In step S12, the management index estimation unit 51 calculates acoustic features from the acquired user utterance. In step S13, the management index estimation unit 51 estimates a management index value from the acoustic features using the management index estimation model 58a. In step S14, the management index estimation unit 51 outputs the estimated management index value, i.e., the management index estimated value, to the management index database 52, and ends the process.
In this process, the management index estimation unit 51 estimates the management index value from the user utterance, i.e., the user's speech audio, in order to reduce the measurement load on the user and to implicitly grasp the user's condition while the user is using the information processing system 1.
As shown in FIG. 4, specifically, the management index estimation unit 51 processes the user's speech through an acoustic feature calculation process to calculate the acoustic features.
An acoustic feature is a numerical value (vector) that represents the characteristics of a sound. Examples of acoustic features include MFCC (Mel Frequency Cepstral Coefficients), zero cross, spectral centroid, spectral flatness, and spectral rolloff.
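Two of the simpler features listed above can be computed directly from the raw signal. The sketch below (using NumPy, an assumption about the implementation environment) shows the zero-crossing rate and spectral centroid; MFCC extraction would normally rely on a dedicated audio library and is omitted here.

```python
import numpy as np

def zero_crossing_rate(signal: np.ndarray) -> float:
    """Fraction of adjacent sample pairs whose sign differs."""
    signs = np.signbit(signal)
    return float(np.mean(signs[1:] != signs[:-1]))

def spectral_centroid(signal: np.ndarray, sample_rate: int) -> float:
    """Magnitude-weighted mean frequency of the signal's spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    return float((freqs * spectrum).sum() / total) if total else 0.0
```

For a pure 440 Hz tone sampled at 16 kHz, the centroid falls at roughly 440 Hz, and the zero-crossing rate at roughly two crossings per cycle.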
The management index estimation unit 51 estimates the management index value by performing a management index value estimation process using the above-mentioned acoustic features and the management index estimation model 58a obtained by prior learning. This yields the management index value, i.e., the management index estimated value.
The management index estimation process is, for example, a process of calculating a management index estimated value by regression using the management index estimation model 58a. The management index estimation model 58a is, for example, a model that performs regression analysis on data collected in advance, with the acoustic features as explanatory variables and the management index values as objective variables. This management index estimation model 58a is generated in advance by machine learning or the like.
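With a linear regression model, the estimation step reduces to a weighted sum over the feature vector. A minimal sketch follows; the weights and bias would come from the pre-trained model 58a, and the values in the usage example are purely illustrative.

```python
from typing import Sequence

def estimate_management_index(features: Sequence[float],
                              weights: Sequence[float],
                              bias: float) -> float:
    """Linear-regression estimate: index = w . x + b.

    `features` is the acoustic feature vector (e.g. a 12-dimensional
    MFCC mean); `weights` and `bias` are the learned coefficients.
    """
    return sum(f * w for f, w in zip(features, weights)) + bias
```

For instance, `estimate_management_index([1.0, 2.0], [3.0, 4.0], 5.0)` evaluates to `1*3 + 2*4 + 5 = 16.0`.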
<1-4. Example of management index estimation model generation process>
An example of the management index estimation model generation process according to the present embodiment will be described with reference to Fig. 5 and Fig. 6. Fig. 5 is a flowchart showing an example of the management index estimation model generation process according to the present embodiment. Fig. 6 is a diagram for explaining an example of the management index estimation model generation process according to the present embodiment.
As shown in FIG. 5, in step S21, the model generation unit 58 acquires the patient's speech data from the speech database 58b, and acquires the management index measurement values corresponding to the speech data from the management index measurement value database 58c. In step S22, the model generation unit 58 calculates acoustic features from the acquired speech data. In step S23, the model generation unit 58 performs model learning to generate a management index estimation model 58a using the calculated acoustic features and management index measurement values. In step S24, the model generation unit 58 stores the generated management index estimation model 58a.
In such processing, the model generation unit 58 generates the management index estimation model 58a in advance. Note that the management index estimation model 58a may also be updated by re-training it on user utterances and the estimated management index values corresponding to those utterances.
As shown in FIG. 6, specifically, the model generation unit 58 collects in advance, for learning purposes, pairs of speech data (speech information) from asthma patients with various symptoms and the management index measurement values corresponding to that speech data. A management index measurement value corresponding to speech data is, for example, a value measured by spirometry at the time the corresponding speech data was acquired. Speech data for patients with different symptoms, and a management index measurement value for each piece of speech data, are prepared in advance.
The model generation unit 58 calculates acoustic features from the speech data, estimates a management index value from those features, and performs learning so as to minimize the error between the estimated management index value and the corresponding management index measurement value, thereby generating the management index estimation model 58a.
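For a single scalar feature, the error-minimizing linear fit has a closed form. The sketch below is a deliberately simplified one-dimensional stand-in for this training step; the actual model would use a multidimensional acoustic feature vector.

```python
from statistics import mean

def fit_linear(features, measured_values):
    """Ordinary least squares for one acoustic feature: choose w and b
    to minimize the squared error between w*x + b and the measured
    management index values (e.g. spirometry readings)."""
    x_bar = mean(features)
    y_bar = mean(measured_values)
    w = (sum((x - x_bar) * (y - y_bar)
             for x, y in zip(features, measured_values))
         / sum((x - x_bar) ** 2 for x in features))
    b = y_bar - w * x_bar
    return w, b
```

On data generated by `y = 2x + 1`, the fit recovers `w = 2` and `b = 1` exactly.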
(User utterance)
In this embodiment, the user's speech is the target of analysis, but cough sounds and breathing sounds can also be analyzed if the system can capture them; such cough and breathing sounds are therefore included in the speech. In addition, if the system can acquire them as user biometric information, vital signs such as heart rate and respiratory rate, as well as facial images, can also serve as analysis targets for estimating the management index value.
(Acoustic features)
In this embodiment, 12-dimensional MFCCs (Mel Frequency Cepstral Coefficients) are calculated from the input data at regular time frames, and their average is used as the acoustic feature. However, the use of statistics such as the median, maximum, minimum, standard deviation, skewness, and kurtosis, as well as other acoustic features such as the mel spectrogram, chroma vector, zero cross, spectral centroid, spectral flatness, and spectral rolloff, is also envisioned.
(Management index value)
In this embodiment, the use of, for example, peak flow (PEF) is assumed as the management index value, but the use of forced vital capacity (FVC), forced expiratory volume in one second (FEV1), the one-second ratio (FEV1% = FEV1/FVC), and forced expiratory volume in one second relative to its predicted value (%FEV1) is also assumed. The priority order of the management index values is: (1) the one-second ratio, (2) forced expiratory volume in one second relative to its predicted value, (3) peak flow, (4) forced vital capacity, (5) forced vital capacity relative to its predicted value, and (6) forced expiratory volume in one second. The smaller the number in parentheses, the higher the priority. This priority is, for example, the importance for understanding the condition.
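Selecting the most important available index under this priority order can be sketched as a simple lookup. The short index names below are illustrative abbreviations, not identifiers from the application.

```python
# Priority order from the list above, highest priority first.
PRIORITY = ["FEV1%", "%FEV1", "PEF", "FVC", "%FVC", "FEV1"]

def select_management_index(available: dict):
    """Return the highest-priority (name, value) pair that was measured."""
    for name in PRIORITY:
        if name in available:
            return name, available[name]
    raise ValueError("no management index available")
```

For example, if only PEF and FVC were measured, PEF is chosen (priority 3 versus 4).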
Respiratory diseases include asthma (bronchial asthma); other respiratory diseases include, for example, chronic obstructive pulmonary disease (COPD) and pulmonary fibrosis. The management index value for chronic obstructive pulmonary disease is, for example, the one-second ratio (FEV1%). The management index value for pulmonary fibrosis is, for example, forced vital capacity (FVC). There are also various other diseases, such as eosinophilic granulomatosis with polyangiitis (EGPA), sarcoidosis (lung), collagen disease of the lung, metastatic lung tumor, bronchiectasis, and airway foreign bodies. The management index value is set appropriately according to the disease.
The management index value (management index estimated value) is estimated by the management index estimation unit 51 and stored in the management index database 52 for each user. For example, the management index value is stored for each user utterance and managed for each user. Such management index information may be configured to be viewable by the user's doctor, family, etc. For example, the user's management index information may be appropriately transmitted to a terminal of the user's doctor, family, etc. in response to access from that terminal.
(Model learning)
In this embodiment, linear regression is used as the regression model, but the use of gradient boosting decision trees, support vector regression, model learning by deep learning, and the like is also envisioned. Also, in this embodiment, the system learns from data collected in advance and uses the result to estimate the management index value, but the introduction of functions that personalize the model through re-learning on the speech input when the user uses the service, and that improve the accuracy of the model as a whole, is also envisioned.
<1-5. An example of response processing>
An example of the response process according to this embodiment will be described with reference to Fig. 7. Fig. 7 is a flowchart showing an example of the response process according to this embodiment.
As shown in FIG. 7, in step S31, the response generation unit 56 acquires response generation information (first response generation information) from the semantic analysis unit 54. In step S32, the response generation unit 56 refers to the management index database 52 and determines whether the management index value is available.
If the response generation unit 56 determines in step S32 that the management index value is available (Yes in step S32), in step S33, it refers to the response generation database 55 and determines whether there is available information regarding exacerbation in the response generation database 55.
On the other hand, if the response generation unit 56 determines in step S32 that the management index value is not available (No in step S32), it generates normal response information in step S36, and the process ends.
In step S33, if the response generation unit 56 determines that there is available information regarding exacerbation in the response generation database 55 (Yes in step S33), in step S34, response information including information regarding the exacerbation (worsening of the condition) is generated, and the process ends.
On the other hand, if in step S33 the response generation unit 56 determines that there is no information regarding exacerbation available in the response generation database 55 (No in step S33), in step S35 it generates response information including information regarding the condition, and ends the process.
According to this processing, the response generation unit 56 generates response information for the user based on the first response generation information input from the semantic analysis unit 54, the management index information input from the management index database 52, and the second response generation information input from the response generation database 55.
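The branching of Fig. 7 (steps S32 to S36) can be sketched as follows. The threshold and message strings are illustrative assumptions, since the application does not fix them; the database lookups are stubbed as optional values.

```python
from typing import Optional

def generate_response(base_reply: str,
                      management_index: Optional[float],
                      exacerbation_info: Optional[str]) -> str:
    if management_index is None:       # S32: No -> S36, normal response
        return base_reply
    if exacerbation_info is not None:  # S33: Yes -> S34
        return f"{base_reply} {exacerbation_info}"
    # S33: No -> S35, append information on the condition only.
    # The 400 cutoff is an illustrative PEF threshold, not from the source.
    condition = "good" if management_index >= 400 else "worsening"
    return f"{base_reply} Your current asthma condition is {condition}."
```

With no index available, only the normal reply ("It's sunny today.") is returned; with an index and exacerbation information, both are combined.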
(Normal response)
The response generation unit 56 generates normal response information when it is unable to acquire a management index value from the management index database 52. For example, when the user utterance is "What's the weather in Tokyo today?", the response generation unit 56 generates the response information "It's sunny today."
(Response regarding medical condition)
When the response generation unit 56 can acquire a management index value from the management index database 52, it generates response information that includes information on the medical condition in addition to the normal response information. For example, when the user utterance is "What's the weather in Tokyo today?", the response generation unit 56 generates the response information "It's sunny today. Your current asthma condition is good."
In addition, when the response generation unit 56 can acquire the most recent continuous information from the management index database 52, it may generate response information that reflects changes in that information. For example, when the user utterance is "What's the weather in Tokyo today?", the response generation unit 56 generates response information such as "It's sunny today. Your asthma has been worsening since yesterday, so be careful."
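A worsening trend over the most recent stored estimates can be flagged with a simple monotonic-decrease check. The window size is an illustrative choice, not specified in the application.

```python
def is_worsening(recent_values, window: int = 3) -> bool:
    """True when the last `window` management index estimates
    (e.g. daily PEF estimates) are strictly decreasing."""
    if len(recent_values) < window:
        return False
    tail = recent_values[-window:]
    return all(later < earlier for earlier, later in zip(tail, tail[1:]))
```

A run like 480, 470, 455 would trigger the "worsening since yesterday" style of message, while fluctuating values would not.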
(Response regarding exacerbation)
When the response generation unit 56 can obtain the user's schedule and environmental information such as pollen and air pollution together with the management index value, it generates response information that includes information for suppressing exacerbation in addition to the normal response information. For example, when the user utterance is "What's the weather in Tokyo today?", the response generation unit 56 generates response information such as "It's sunny today. Your current asthma condition is good, but a lot of pollen is in the air, so be careful when going out." Likewise, it may generate response information such as "It's sunny today. You have a dinner scheduled at 19:00, but your asthma has been worsening since yesterday, so be careful not to drink too much alcohol."
(Information provided by smart speakers and AI agents)
In an information processing system 1 based on providing information by voice, responses to the user are provided by voice, for example using the sound output unit 30. Note that, since voice input to the information processing system 1 is also based on user speech, the management index value is estimated using the voice input from the sound input unit 10.
(Information provided via smartphones and wearable devices)
In an information processing system 1 based on providing information on a screen, responses to the user may be provided as message notifications on the screen, for example using the user terminal 40. Note that the sound input unit 10 is used for voice input to the information processing system 1 and the user terminal 40 is used for providing information on a screen, but a user terminal 40 such as a smartphone may also be used for voice input to the information processing system 1. In this case, the user terminal 40 functions as the sound input unit 10.
As described above, according to this embodiment, the condition of a user who uses the voice dialogue system that is the information processing system 1 can be grasped quantitatively, while keeping the burden on the user low, by analyzing the user's speech to the system and estimating the condition from it. In addition, based on the condition estimated from the user's speech, the user's predicted behavior, and environmental information such as the weather, the system can generate information for preventing the condition from worsening and present it to the user in an easy-to-understand manner. The user can therefore measure their condition through voice dialogue with the system, unconsciously and without physical burden, and can receive from the system the information needed to prevent the condition from worsening. Conventionally, even when an asthma diary was kept, it was difficult for patients to determine from that information what they should do to prevent their condition from worsening; in practice, they could only maintain the status quo until hearing the doctor's diagnosis at the next hospital visit.
<1-6. Modified examples of information processing system>
Modifications of the information processing system 1 according to the present embodiment will be described with reference to Fig. 8 to Fig. 11. Fig. 8 to Fig. 11 are diagrams showing configuration examples of the information processing systems 1A to 1D according to the modifications of the present embodiment.
As shown in FIG. 8, the information processing system 1A includes a wearable device 110, a smartphone 120, and a server 200. The wearable device 110 and the smartphone 120 are configured to be able to communicate with the server 200. Data detected by the wearable device 110 is sent directly to the server 200.
As shown in FIG. 9, information processing system 1B, like information processing system 1A described above, includes wearable device 110, smartphone 120, and server 200. Wearable device 110 is configured to be able to communicate with smartphone 120, and smartphone 120 is configured to be able to communicate with server 200. Data detected by wearable device 110 is first stored in smartphone 120 and then transmitted to server 200.
In Figs. 8 and 9, the wearable device 110 corresponds to the biometric information detection unit 20 described above, and the smartphone 120 corresponds to the sound input unit 10 and sound output unit 30 described above. In other words, the smartphone 120 includes the sound input unit 10 and sound output unit 30. The server 200 corresponds to the information processing device 50.
As shown in FIG. 10, the information processing system 1C includes a wearable device 110, a smartphone 120, a server 200, and a service provider server 300. The wearable device 110 and the smartphone 120 are configured to be able to communicate with the server 200. The server 200 is configured to be able to communicate with the service provider server 300. Data detected by the wearable device 110 is sent directly to the server 200.
As shown in FIG. 11, the information processing system 1D, like the information processing system 1C described above, comprises a wearable device 110, a smartphone 120, a server 200, and a service provider server 300. The wearable device 110 is configured to be able to communicate with the smartphone 120, and the smartphone 120 is configured to be able to communicate with the server 200. The server 200 is configured to be able to communicate with the service provider server 300. Data detected by the wearable device 110 is first stored in the smartphone 120 and then transmitted to the server 200.
In Figs. 10 and 11, the wearable device 110 corresponds to the biometric information detection unit 20 described above, and the smartphone 120 corresponds to the sound input unit 10 and sound output unit 30 described above. In other words, the smartphone 120 includes the sound input unit 10 and sound output unit 30. One or both of the server 200 and the service provider server 300 correspond to the information processing device 50.
In the examples of Figs. 10 and 11, the device owned by the service provider is shown as a server, i.e., the service provider server 300. However, the device owned by the service provider does not necessarily have to be a server, and may instead be an information terminal such as a smartphone, a tablet terminal, a laptop computer, or a desktop computer.
<1-7. Effects>
As described above, according to this embodiment, the information processing system 1 includes a management index estimation unit 51 that estimates a management index value related to the user's medical condition based on the user's voice information (e.g., user utterances). This makes it possible to obtain the management index value from the user's voice information, and thus to quantitatively grasp the user's medical condition while reducing the burden on the user.
The management index value may also be a management index value related to the condition of the user's disease when the user's disease is a respiratory disease. This makes it possible to quantitatively grasp the condition of the user's disease when the user's disease is a respiratory disease.
The management index estimation unit 51 may also estimate the management index value based on the user's voice information and the user's biometric information. This allows the management index value to be determined with high accuracy.
The management index estimation unit 51 may also calculate acoustic features from the user's voice information and estimate the management index value based on the calculated acoustic features. This allows the management index value to be determined with high accuracy.
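As a minimal sketch of this step, the fragment below computes a few simple acoustic features (RMS energy, zero-crossing rate, spectral centroid) from a waveform with NumPy and maps them to an index value with a linear model. The feature set, weights, and function names are illustrative assumptions, not the features or model actually used by the management index estimation unit 51.

```python
import numpy as np

def acoustic_features(y, sr):
    """Return a small feature vector for waveform y sampled at sr Hz."""
    rms = np.sqrt(np.mean(y ** 2))                        # loudness proxy
    zcr = np.mean(np.abs(np.diff(np.sign(y)))) / 2.0      # zero-crossing rate
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1.0 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)  # spectral centroid (Hz)
    return np.array([rms, zcr, centroid / sr])            # normalize centroid by sr

def estimate_index(y, sr, weights, bias):
    """Linear model standing in for the management index estimation model."""
    return float(acoustic_features(y, sr) @ weights + bias)

sr = 16000
t = np.arange(sr) / sr
y = 0.1 * np.sin(2 * np.pi * 220 * t)    # synthetic 220 Hz "utterance"
weights = np.array([10.0, 5.0, 20.0])    # illustrative, untrained weights
score = estimate_index(y, sr, weights, bias=1.0)
print(f"estimated index: {score:.2f}")
```

In practice the features would be computed per frame and fed to a trained model rather than hand-picked weights, but the pipeline shape — waveform in, feature vector, scalar index out — is the same.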
The management index estimation unit 51 may also estimate the management index value using the management index estimation model 58a, which is a learning model. This allows the management index value to be determined with high accuracy.
The information processing system 1 may further include a model generation unit 58 that generates the management index estimation model 58a. This makes it possible to reliably obtain the management index estimation model 58a.
The model generation unit 58 may also generate the management index estimation model 58a using voice information from patients with different symptoms of the user's disease, together with the management index measurement value for each piece of voice information. This makes it possible to obtain a highly accurate management index estimation model 58a.
The model generation unit 58 may also calculate acoustic features from the voice information, estimate a management index value from the calculated acoustic features, and perform learning so as to minimize the error between the estimated management index value and the management index measurement value corresponding to that management index value, thereby generating the management index estimation model 58a. This makes it possible to obtain a highly accurate management index estimation model 58a.
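A minimal sketch of that training loop, under the assumption of a linear model and synthetic data (the architecture of the model generated by the model generation unit 58 is not specified here): gradient descent on the squared error between the estimated index values and the measured index values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: rows are acoustic feature vectors per patient
# utterance, targets are the corresponding measured management index values.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y_measured = X @ true_w + 4.0 + rng.normal(scale=0.05, size=200)

w = np.zeros(3)   # model parameters to learn
b = 0.0
lr = 0.1
for _ in range(500):
    y_est = X @ w + b                  # estimated management index values
    err = y_est - y_measured           # error vs. measured values
    w -= lr * (X.T @ err) / len(X)     # gradient step on mean squared error
    b -= lr * err.mean()

mse = float(np.mean((X @ w + b - y_measured) ** 2))
print(f"final MSE: {mse:.4f}")
```

The same "minimize the error between estimated and measured values" objective carries over unchanged when the linear model is replaced by a neural network; only the parameter update rule becomes more involved.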
The information processing system 1 may further include a management index database 52 that stores the management index values. This allows the management index values to be managed.
The information processing system 1 may further include a response generation unit 56 that generates response information regarding the user's medical condition based on the management index value. This allows the user to know the medical condition.
The response information may also include one or both of information regarding the user's current condition and information regarding the worsening of the user's condition. This allows the user to know the current state of the condition or its worsening.
In addition, the information regarding the worsening of the user's condition may include information for preventing the worsening of the user's condition. This can help prevent the user's condition from worsening.
The response generation unit 56 may also generate the response information based on the management index value and first response generation information. This makes it possible to obtain appropriate response information.
The first response generation information may also include information for generating response information according to the intention of the user's utterance regarding the voice information. This makes it possible to obtain appropriate response information.
The information processing system 1 may further include a semantic analysis unit 54 that analyzes the intention of the user's utterance and generates the first response generation information. This makes it possible to obtain appropriate first response generation information.
The response generation unit 56 may also generate the response information based on the management index value, the first response generation information, and second response generation information that is different from the first response generation information. This makes it possible to obtain appropriate response information.
The second response generation information may also include one or both of environmental information about the user and schedule information of the user. This makes it possible to obtain appropriate response information.
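How these inputs might be combined can be sketched as a simple rule table; the thresholds, intent labels, and message wording below are illustrative assumptions, not the actual behavior of the response generation unit 56.

```python
def generate_response(index_value, intent, environment, schedule):
    """Combine the management index value (estimated condition), the
    utterance intent (first response generation information), and
    environment/schedule data (second response generation information)
    into a response string. Thresholds and wording are illustrative
    assumptions only.
    """
    parts = []
    if intent == "ask_condition":
        if index_value >= 20:
            parts.append("Your condition looks well controlled.")
        else:
            parts.append("Your condition may be worsening; take care today.")
    if environment.get("pollen") == "high":
        parts.append("Pollen is high, so consider wearing a mask.")
    if schedule.get("outdoor_activity") and index_value < 20:
        parts.append("You have outdoor plans; keep your inhaler with you.")
    return " ".join(parts)

msg = generate_response(
    index_value=18,
    intent="ask_condition",
    environment={"pollen": "high"},
    schedule={"outdoor_activity": True},
)
print(msg)
```

A real system would likely generate the response with a dialogue model rather than fixed rules, but the inputs — index value, intent, environment, schedule — are the ones the embodiment enumerates.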
The information processing system 1 may further include a response generation database 55 that stores the second response generation information. This allows the second response generation information to be managed.
<2. Other embodiments>
The configurations and processes according to the above-described embodiment (including its examples and modifications) may be implemented in various forms other than those described above. For example, among the processes described in the above embodiment, all or part of the processes described as being performed automatically can also be performed manually, and all or part of the processes described as being performed manually can be performed automatically by known methods. In addition, the configurations, processing procedures, specific names, and information including various data and parameters shown in the above description and drawings may be changed arbitrarily unless otherwise specified. For example, the various information shown in each figure is not limited to the illustrated information.
Furthermore, the configurations and processes of the above-described embodiment (including its examples and modifications) do not necessarily have to be physically configured as illustrated. In other words, the specific form of distribution and integration of each device is not limited to that illustrated, and all or part of it can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, and the like.
Furthermore, the configurations and processes of the above-described embodiments (including examples and modifications) may be combined as appropriate. For example, at least a part of an embodiment may be implemented in combination with at least a part of another embodiment. The effects described in the embodiments are merely examples and are not limiting; other effects may also be obtained.
<3. Hardware configuration example>
A specific hardware configuration example of the various information devices according to the above-mentioned embodiment (or its modifications) will be described. The various information devices according to the embodiment (or its modifications) may be realized by, for example, a computer 500 having the configuration shown in Fig. 12. Fig. 12 is a diagram showing a hardware configuration example according to this embodiment.
As shown in FIG. 12, the computer 500 has a CPU 510, a RAM 520, a ROM (Read Only Memory) 530, an HDD (Hard Disk Drive) 540, a communication interface 550, and an input/output interface 560. Each part of the computer 500 is connected by a bus 570.
The CPU 510 operates based on the programs stored in the ROM 530 or the HDD 540 and controls each part. For example, the CPU 510 loads the programs stored in the ROM 530 or the HDD 540 into the RAM 520 and executes processes corresponding to the various programs.
The ROM 530 stores boot programs such as the BIOS (Basic Input Output System) that is executed by the CPU 510 when the computer 500 starts up, as well as programs that depend on the hardware of the computer 500.
The HDD 540 is a recording medium readable by the computer 500 that non-temporarily records programs executed by the CPU 510 and data used by such programs. Specifically, the HDD 540 is a recording medium that records the information processing program according to the present disclosure, which is an example of program data 541.
The communication interface 550 is an interface for connecting the computer 500 to an external network 580 (e.g., the Internet). For example, the CPU 510 receives data from other devices and transmits data generated by the CPU 510 to other devices via the communication interface 550.
The input/output interface 560 is an interface for connecting an input/output device 590 and the computer 500. For example, the CPU 510 receives data from an input device such as a keyboard or a mouse via the input/output interface 560. The CPU 510 also transmits data to an output device such as a display, a speaker, or a printer via the input/output interface 560.
The input/output interface 560 may also function as a media interface that reads programs and the like recorded on a specific recording medium. Examples of usable media include optical recording media such as DVDs (Digital Versatile Discs) and PDs (Phase change rewritable Disks), magneto-optical recording media such as MOs (Magneto-Optical disks), tape media, magnetic recording media, and semiconductor memories.
Here, for example, when the computer 500 functions as the information processing device 50 according to this embodiment, the CPU 510 of the computer 500 executes the information processing program loaded on the RAM 520 to realize all or part of the functions of the management index estimation unit 51, the voice recognition unit 53, the semantic analysis unit 54, the response generation unit 56, the response control unit 57, and so on. The information processing program and data according to this embodiment are stored in the HDD 540. Note that the CPU 510 reads the program data 541 from the HDD 540 and executes it; as another example, these programs may be obtained from other devices via the external network 580.
<4. Notes>
The present technology can also be configured as follows.
(1)
a management index estimation unit that estimates a management index value related to a medical condition of the user based on voice information of the user;
Information processing system.
(2)
The management index value is a management index value related to a medical condition when the disease of the user is a respiratory system disease.
The information processing system according to (1) above.
(3)
the management index estimation unit estimates the management index value based on the voice information and the biometric information of the user.
The information processing system according to (1) or (2).
(4)
the management index estimation unit calculates an acoustic feature from the voice information, and estimates the management index value based on the calculated acoustic feature.
The information processing system according to any one of (1) to (3).
(5)
the management index estimation unit estimates the management index value using a management index estimation model which is a learning model;
The information processing system according to (4) above.
(6)
A model generation unit that generates the management index estimation model,
The information processing system according to (5) above.
(7)
The model generation unit generates the management index estimation model using voice information for each patient having a different symptom related to the user's disease and a management index measurement value for each of the voice information.
The information processing system according to (6) above.
(8)
the model generation unit calculates acoustic features from the voice information, estimates a management index value from the calculated acoustic features, performs learning so as to minimize an error between the estimated management index value and the management index measurement value corresponding to the estimated management index value, and generates the management index estimation model.
The information processing system according to (7) above.
(9)
A management index database for storing the management index values is further provided.
The information processing system according to any one of (1) to (8).
(10)
A response generating unit that generates response information regarding the medical condition of the user based on the management index value.
The information processing system according to (9) above.
(11)
The response information includes one or both of information regarding the user's current medical condition and information regarding the worsening of the user's medical condition.
The information processing system according to (10) above.
(12)
The information regarding the worsening of the user's condition includes information for suppressing the worsening of the user's condition.
The information processing system according to (11) above.
(13)
The response generation unit generates the response information based on the management index value and first response generation information.
The information processing system according to any one of (10) to (12).
(14)
The first response generation information includes information for generating the response information according to an intention of a user's utterance regarding the voice information.
The information processing system according to (13) above.
(15)
The system further includes a semantic analysis unit that analyzes the intention of the user utterance to generate the first response generation information.
The information processing system according to (14) above.
(16)
the response generation unit generates the response information based on the management index value, the first response generation information, and second response generation information different from the first response generation information.
The information processing system according to any one of (13) to (15).
(17)
The second response generation information includes one or both of environmental information about the user and schedule information of the user.
The information processing system according to (16) above.
(18)
a response generation database for storing the second response generation information;
The information processing system according to (16) or (17).
(19)
The computer
estimating a management index value related to the user's medical condition based on the user's voice information;
Information processing methods.
(20)
The computer
generating a management index estimation model for estimating a management index value related to a medical condition of the user based on the voice information of the user;
How to generate a learning model.
(21)
An information processing method using the information processing system according to any one of (1) to (18).
(22)
A method for generating a learning model, which generates a learning model for an information processing system described in any one of (1) to (18).
なお、本技術は以下のような構成も取ることができる。
(1)
ユーザの音声情報に基づいて、前記ユーザの病状に関する管理指標値を推定する管理指標推定部を備える、
情報処理システム。
(2)
前記管理指標値は、前記ユーザの疾患が呼吸器系疾患である場合の病状に関する管理指標値である、
前記(1)に記載の情報処理システム。
(3)
前記管理指標推定部は、前記音声情報及び前記ユーザの生体情報に基づいて前記管理指標値を推定する、
前記(1)又は(2)に記載の情報処理システム。
(4)
前記管理指標推定部は、前記音声情報から音響特徴量を算出し、算出した音響特徴量に基づいて前記管理指標値を推定する、
前記(1)から(3)のいずれか一つに記載の情報処理システム。
(5)
前記管理指標推定部は、学習モデルである管理指標推定モデルを用いて前記管理指標値を推定する、
前記(4)に記載の情報処理システム。
(6)
前記管理指標推定モデルを生成するモデル生成部をさらに備える、
前記(5)に記載の情報処理システム。
(7)
前記モデル生成部は、前記ユーザの疾患に関する症状が異なる患者ごとの音声情報と、前記音声情報ごとの管理指標測定値とを用いて、前記管理指標推定モデルを生成する、
前記(6)に記載の情報処理システム。
(8)
前記モデル生成部は、前記音声情報から音響特徴量を算出し、算出した前記音響特徴量から管理指標値を推定し、推定した前記管理指標値と、当該管理指標値に対応する前記管理指標測定値との誤差が最小となるように学習を行い、前記管理指標推定モデルを生成する、
前記(7)に記載の情報処理システム。
(9)
前記管理指標値を保存する管理指標データベースをさらに備える、
前記(1)から(8)のいずれか一つに記載の情報処理システム。
(10)
前記管理指標値に基づいて、前記ユーザの病状に関する応答情報を生成する応答生成部をさらに備える、
前記(9)に記載の情報処理システム。
(11)
前記応答情報は、前記ユーザの現在の病状に関する情報及び前記ユーザの病状の増悪に関する情報の一方又は両方を含む、
前記(10)に記載の情報処理システム。
(12)
前記ユーザの病状の増悪に関する情報は、前記ユーザの病状の増悪を抑えるための情報を含む、
前記(11)に記載の情報処理システム。
(13)
前記応答生成部は、前記管理指標値及び第1の応答生成情報に基づいて前記応答情報を生成する、
前記(10)から(12)のいずれか一つに記載の情報処理システム。
(14)
前記第1の応答生成情報は、前記音声情報に関するユーザ発話の意図に応じた前記応答情報を生成するための情報を含む、
前記(13)に記載の情報処理システム。
(15)
前記ユーザ発話の意図を解析して前記第1の応答生成情報を生成する意味解析部をさらに備える、
前記(14)に記載の情報処理システム。
(16)
前記応答生成部は、前記管理指標値、前記第1の応答生成情報及び前記第1の応答生成情報と異なる第2の応答生成情報に基づいて前記応答情報を生成する、
前記(13)から(15)のいずれか一つに記載の情報処理システム。
(17)
前記第2の応答生成情報は、前記ユーザに関する環境情報及び前記ユーザのスケジュール情報の一方又は両方を含む、
前記(16)に記載の情報処理システム。
(18)
前記第2の応答生成情報を保存する応答生成データベースをさらに備える、
前記(16)又は(17)に記載の情報処理システム。
(19)
コンピュータが、
ユーザの音声情報に基づいて、前記ユーザの病状に関する管理指標値を推定する、
情報処理方法。
(20)
コンピュータが、
ユーザの音声情報に基づいて、前記ユーザの病状に関する管理指標値を推定するための管理指標推定モデルを生成する、
学習モデルの生成方法。
(21)
前記(1)から(18)のいずれか一つに記載の情報処理システムを用いる、情報処理方法。
(22)
前記(1)から(18)のいずれか一つに記載の情報処理システムに関する学習モデルを生成する、学習モデルの生成方法。 <4. Notes>
The present technology can also be configured as follows.
(1)
a control index estimation unit that estimates a control index value related to a medical condition of the user based on voice information of the user;
Information processing system.
(2)
The management index value is a management index value related to a medical condition when the disease of the user is a respiratory system disease.
The information processing system according to (1) above.
(3)
the control index estimation unit estimates the control index value based on the voice information and the biometric information of the user.
The information processing system according to (1) or (2).
(4)
the control index estimation unit calculates an acoustic feature from the speech information, and estimates the control index value based on the calculated acoustic feature.
The information processing system according to any one of (1) to (3).
(5)
the management index estimation unit estimates the management index value using a management index estimation model which is a learning model;
The information processing system according to (4) above.
(6)
A model generation unit that generates the management index estimation model,
The information processing system according to (5) above.
(7)
The model generation unit generates the management index estimation model using voice information for each patient having a different symptom related to the user's disease and a management index measurement value for each of the voice information.
The information processing system according to (6) above.
(8)
the model generation unit calculates acoustic features from the speech information, estimates a control index value from the calculated acoustic features, performs learning so as to minimize an error between the estimated control index value and the control index measurement value corresponding to the estimated control index value, and generates the control index estimation model.
The information processing system according to (7) above.
(9)
A management index database for storing the management index values is further provided.
The information processing system according to any one of (1) to (8).
(10)
A response generating unit that generates response information regarding the medical condition of the user based on the management index value.
The information processing system according to (9) above.
(11)
The response information includes one or both of information regarding the user's current medical condition and information regarding the worsening of the user's medical condition.
The information processing system according to (10) above.
(12)
The information regarding the worsening of the user's condition includes information for suppressing the worsening of the user's condition.
The information processing system according to (11) above.
(13)
The response generation unit generates the response information based on the management index value and first response generation information.
The information processing system according to any one of (10) to (12).
(14)
The first response generation information includes information for generating the response information according to an intention of a user's utterance regarding the voice information.
The information processing system according to (13) above.
(15)
The system further includes a semantic analysis unit that analyzes the intention of the user utterance to generate the first response generation information.
The information processing system according to (14) above.
(16)
the response generation unit generates the response information based on the management index value, the first response generation information, and second response generation information different from the first response generation information.
The information processing system according to any one of (13) to (15).
(17)
The second response generation information includes one or both of environmental information about the user and schedule information of the user.
The information processing system according to (16) above.
(18)
a response generation database for storing the second response generation information;
The information processing system according to (16) or (17).
(19)
The computer
estimating a management index value related to the user's medical condition based on the user's voice information;
Information processing methods.
(20)
A method for generating a learning model, comprising:
generating, by a computer, a management index estimation model for estimating a management index value related to a medical condition of a user based on voice information of the user.
(21)
An information processing method using the information processing system according to any one of (1) to (18).
(22)
A method for generating a learning model, which generates a learning model for an information processing system described in any one of (1) to (18).
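Items (4) to (8) above describe calculating acoustic features from voice information and fitting a model so that the error between estimated and measured management index values is minimized. The following is a minimal sketch of that idea, assuming hand-picked toy features (RMS energy, zero-crossing rate, spectral centroid) and a linear least-squares model; the disclosure does not specify which features or model are actually used, so everything below is illustrative.

```python
# Hypothetical sketch of the management-index estimation of items (4)-(8):
# acoustic features are computed from each voice sample, and a model is fitted
# so that the error between estimated and measured index values is minimized.
# The features and the linear model are illustrative assumptions.
import numpy as np

def acoustic_features(signal: np.ndarray) -> np.ndarray:
    """Toy acoustic features: RMS energy, zero-crossing rate, spectral centroid."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size)
    centroid = float(spectrum @ freqs / (spectrum.sum() + 1e-12))
    return np.array([rms, zcr, centroid])

def fit_index_model(signals, measured_indices):
    """Least-squares fit of index ~ w.features + b, minimizing the error
    between estimated and measured management index values (item (8))."""
    X = np.stack([acoustic_features(s) for s in signals])
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    coef, *_ = np.linalg.lstsq(X, np.asarray(measured_indices, dtype=float),
                               rcond=None)
    return coef

def estimate_index(signal, coef) -> float:
    """Estimate the management index value for a new voice sample."""
    feats = np.append(acoustic_features(signal), 1.0)
    return float(feats @ coef)
```

In a real system the features would come from a clinically validated pipeline and the model from supervised training on patient recordings paired with measured index values, as item (7) describes.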
REFERENCE SIGNS LIST
1, 1A, 1B, 1C, 1D Information processing system
10 Sound input unit
20 Biometric information detection unit
30 Sound output unit
40 User terminal
50 Information processing device
51 Management index estimation unit
52 Management index database
53 Voice recognition unit
54 Semantic analysis unit
55 Response generation database
56 Response generation unit
57 Response control unit
58 Model generation unit
58a Management index estimation model
58b Speech voice database
58c Management index measurement value database
Claims (20)
- An information processing system comprising a management index estimation unit that estimates a management index value related to a medical condition of a user based on voice information of the user.
- The information processing system according to claim 1, wherein the management index value is a management index value related to the medical condition in a case where the disease of the user is a respiratory disease.
- The information processing system according to claim 1, wherein the management index estimation unit estimates the management index value based on the voice information and biometric information of the user.
- The information processing system according to claim 1, wherein the management index estimation unit calculates acoustic features from the voice information and estimates the management index value based on the calculated acoustic features.
- The information processing system according to claim 4, wherein the management index estimation unit estimates the management index value using a management index estimation model that is a learning model.
- The information processing system according to claim 5, further comprising a model generation unit that generates the management index estimation model.
- The information processing system according to claim 6, wherein the model generation unit generates the management index estimation model using voice information of patients whose symptoms related to the disease of the user differ, and a management index measurement value for each piece of voice information.
- The information processing system according to claim 7, wherein the model generation unit calculates acoustic features from the voice information, estimates a management index value from the calculated acoustic features, performs learning so as to minimize an error between the estimated management index value and the management index measurement value corresponding to that management index value, and thereby generates the management index estimation model.
- The information processing system according to claim 1, further comprising a management index database that stores the management index value.
- The information processing system according to claim 9, further comprising a response generation unit that generates response information regarding the medical condition of the user based on the management index value.
- The information processing system according to claim 10, wherein the response information includes one or both of information regarding the current medical condition of the user and information regarding worsening of the medical condition of the user.
- The information processing system according to claim 11, wherein the information regarding worsening of the medical condition of the user includes information for suppressing the worsening of the medical condition of the user.
- The information processing system according to claim 10, wherein the response generation unit generates the response information based on the management index value and first response generation information.
- The information processing system according to claim 13, wherein the first response generation information includes information for generating the response information according to an intention of a user utterance related to the voice information.
- The information processing system according to claim 14, further comprising a semantic analysis unit that analyzes the intention of the user utterance to generate the first response generation information.
- The information processing system according to claim 13, wherein the response generation unit generates the response information based on the management index value, the first response generation information, and second response generation information different from the first response generation information.
- The information processing system according to claim 16, wherein the second response generation information includes one or both of environmental information about the user and schedule information of the user.
- The information processing system according to claim 16, further comprising a response generation database that stores the second response generation information.
- An information processing method in which a computer estimates a management index value related to a medical condition of a user based on voice information of the user.
- A method for generating a learning model in which a computer generates a management index estimation model for estimating a management index value related to a medical condition of a user based on voice information of the user.
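Claims 10 to 17 describe assembling response information from the management index value, the intent of the user utterance (the first response generation information), and environment or schedule data (the second response generation information). Below is a minimal sketch of that combination. The threshold of 20 echoes the Asthma Control Test convention for a well-controlled condition (asthma being one respiratory disease the description mentions), but it and all message wording are invented for illustration, not taken from the disclosure.

```python
# Hypothetical sketch of the response generation of claims 10-17: a response
# is built from (a) the estimated management index value, (b) the analyzed
# utterance intent (first response generation information), and (c) environment
# and schedule data (second response generation information). The threshold and
# texts are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ResponseContext:
    index_value: float                 # estimated management index value
    intent: str                        # from semantic analysis of the utterance
    environment: dict = field(default_factory=dict)   # e.g. {"pollen": "high"}
    schedule: list = field(default_factory=list)      # e.g. ["outdoor exercise 10:00"]

def generate_response(ctx: ResponseContext) -> str:
    """Combine the index value with first/second response generation info."""
    if ctx.intent == "ask_condition":
        if ctx.index_value >= 20:  # ACT-style cutoff, assumed for the sketch
            msg = "Your condition appears well controlled."
        else:
            msg = ("Your condition may be worsening; "
                   "consider contacting your physician.")
        # Second response generation information: environment + schedule.
        if ctx.environment.get("pollen") == "high" and ctx.schedule:
            msg += (" Pollen levels are high today, so take care "
                    "before your outdoor plans.")
        return msg
    return "How can I help with your condition today?"
```

The response generation database of claim 18 would supply the environment and schedule fields here, while the semantic analysis unit of claim 15 would supply the intent.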
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023-002672 | 2023-01-11 | ||
JP2023002672 | 2023-01-11 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024150703A1 true WO2024150703A1 (en) | 2024-07-18 |
Family
ID=91897058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/047268 WO2024150703A1 (en) | 2023-01-11 | 2023-12-28 | Information processing system, information processing method, and method for generating learning model |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024150703A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200105381A1 (en) * | 2018-09-27 | 2020-04-02 | Microsoft Technology Licensing, Llc | Gathering data in a communication system |
JP2021523812A (en) * | 2018-05-14 | 2021-09-09 | レスピア テクノロジーズ ピーティーワイ リミテッド | Methods and devices for determining the potential onset of an acute medical condition |
JP2022062701A (en) * | 2020-10-08 | 2022-04-20 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Computer packaging method, system and computer program product (multi-modal lung capacity measurement for respiratory illness prediction) |
US20220246286A1 (en) * | 2021-02-04 | 2022-08-04 | Unitedhealth Group Incorporated | Use of audio data for matching patients with healthcare providers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23916316; Country of ref document: EP; Kind code of ref document: A1 |