CN103699226B - Three-mode serial brain-computer interface method based on multi-information fusion - Google Patents


Info

Publication number
CN103699226B
CN103699226B (application CN201310722162.1A)
Authority
CN
China
Prior art keywords
data
brain
result
state
occlusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310722162.1A
Other languages
Chinese (zh)
Other versions
CN103699226A (en)
Inventor
Dong Ming (明东)
Long Chen (陈龙)
Jiabei Tang (汤佳贝)
Xingwei An (安兴伟)
Yifan Ji (计益凡)
Hongzhi Qi (綦宏志)
Xin Zhao (赵欣)
Lixin Zhang (张力新)
Current Assignee
Tianjin University
Original Assignee
Tianjin University
Priority date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201310722162.1A priority Critical patent/CN103699226B/en
Publication of CN103699226A publication Critical patent/CN103699226A/en
Application granted granted Critical
Publication of CN103699226B publication Critical patent/CN103699226B/en


Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a tri-modal serial brain-computer interface method based on multi-information fusion, comprising the following steps: stimulating the subject with two visual stimulation paradigms; extracting the subject's electroencephalogram (EEG) data; setting the relevant parameters, reading the EEG data, and performing preprocessing, feature extraction and pattern recognition on it to obtain a final pattern recognition result; and converting the final pattern recognition result into a control instruction, completing a specific task by executing that instruction. This hybrid-paradigm brain-computer interface introduces an electrophysiological control signal in addition to the EEG signal, broadening to a certain extent the environments and user groups to which brain-computer interfaces are suited. It offers higher stability, more selectable options and a wide application range, and lays a foundation for brain-computer interfaces to enter the stage of large-scale application as early as possible. The invention can be used in fields such as electronic entertainment and industrial control, can lead to an improved brain-computer interface system, and is expected to yield considerable social and economic benefits.

Description

Three-mode serial brain-computer interface method based on multi-information fusion
Technical Field
The invention relates to the field of human-computer interfaces, in particular to a three-modal serial brain-computer interface method based on multi-information fusion.
Background
The human-machine interface is the input/output interface through which a human-computer interaction system establishes contact and exchanges information. Human-computer interaction studies the interactive relationship between a system and its users; the system may be any of a variety of machines, including computerized systems and software. The human-machine interface is the control circuit that realizes information transmission between the computer and the human-computer interaction equipment; together with that equipment, it implements information-format conversion and transmission control. A human-computer interaction interface is designed around the user's understanding of the system (the mental model) in order to improve the system's usability and user-friendliness.
As a new type of human-machine interface, the brain-computer interface (BCI) is a communication and control system that does not rely on the brain's normal output pathways of peripheral nerves and muscles. In current research, the electrical brain signals of people in different states are collected and analyzed, and a direct communication and control channel is then established between the brain and a computer or other electronic equipment by engineering means. This realizes a brand-new form of information exchange and control: intent is expressed, or external equipment operated, directly through brain electrical signals, without language or limb movement. Two common evoked modes for existing brain-computer interfaces are endogenous event-related potentials (ERPs) and visual evoked potentials; both can build an effective communication and control channel between user and equipment.
Some operations and activities in real life are complex, and completing a specific task requires multiple steps and different operation flows. A single-paradigm BCI system is therefore not sufficient to support a user in accomplishing particular daily tasks and actions. For example, even without considering movement speed or grip strength, assisting a user in grasping a cup to drink water requires two basic functions, movement and grasping, which a single-paradigm BCI cannot output at the same time. Existing brain-computer interface equipment operates the human-machine interface with a single type of EEG signal, so its application range is narrow, its operation inflexible and its instruction set small, and the user-friendliness and usability of the system cannot be realized. For example, the P300-Speller under the conventional stimulus-encoding mode is ill-suited to information transmission with a large instruction set, and suffers from low information transfer efficiency and a limited number of selectable characters, making it difficult to meet the requirements of practical application.
Disclosure of Invention
The invention provides a three-mode serial brain-computer interface method based on multi-information fusion, which increases the number of operation instruction sets of a BCI system, has better operation flexibility, is more suitable for real life scenes, and is described in detail as follows:
a tri-modal serial brain-computer interface method based on multi-information fusion, the method comprising the steps of:
(1) stimulating the subject with two visual stimulation paradigms;
(2) extracting the electroencephalogram data of the subject;
(3) setting the relevant parameters, reading the electroencephalogram data, and performing preprocessing, feature extraction and pattern recognition on the electroencephalogram data to obtain a final pattern recognition result;
(4) converting the final pattern recognition result into a control instruction, and completing a specific task by executing the control instruction.
The step of extracting the electroencephalogram data of the subject specifically comprises:
the BCI2000 is connected with acquisition software by using a TCP/IP protocol provided by Scan4.5 software, and the real-time acquisition and reading of electroencephalogram data are realized by a FieldTrip tool package.
The steps of setting the relevant parameters, reading the electroencephalogram data, and performing preprocessing, feature extraction and pattern recognition to obtain a final pattern recognition result specifically comprise:
1) first, loading the parameters;
2) reading the real-time electroencephalogram data; the data processing stage formally begins at the 2nd second;
3) when processing data, first judging the cursor-movement state according to the serial order: intercepting the 2 s of electroencephalogram data before the current moment, performing canonical correlation analysis to obtain the maximum canonical correlation coefficient, accumulating it with the maximum canonical correlation coefficients obtained from the previous 1 s windows, comparing the accumulated result with a set threshold after 3 accumulations, and, if it is greater than the threshold, tallying the 3 results, performing pattern recognition, and sending the pattern recognition result;
4) if it is not greater than the threshold, judging the occlusion operation: processing and analyzing the last 1 s of the intercepted 2 s of data, performing time-domain analysis of the occlusion operation, and extracting and judging the time-domain feature; if a long occlusion operation is judged, the system enters the mouse-click state and sends the pattern recognition result; if a short occlusion operation is judged, it enters the pre-opening mode of the character-spelling state; if no occlusion operation is judged, the system is in the idle state;
5) after the result of one round is output, continuing to judge the state and output the result for the next round.
The cursor-movement state means it is detected and judged that the user is gazing at the SSVEP flashing-light paradigm, whereupon the final pattern recognition result is converted into a cursor-movement control instruction;
the character-spelling state means it is detected and judged that the user has started the P300 mode and is gazing at a target character in the Oddball paradigm, whereupon the final pattern recognition result is converted into a keyboard-input control instruction;
the mouse-click state means it is detected and judged that the user has executed a long occlusion action, whereupon the final pattern recognition result is converted into a control instruction that clicks the left mouse button;
the idle state means that when no informative feature is detected in the user's data, the user is judged to be idle; in the idle state no result is output and no operation is executed.
A long occlusion operation is one whose occlusion duration is longer than 600 ms; a short occlusion operation is one whose occlusion duration is shorter than 600 ms.
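The serial decision logic summarized above can be sketched as follows. This is a minimal illustration, not the patent's implementation: all function names, and the idea of passing a precomputed occlusion duration alongside the accumulated CCA score, are hypothetical stand-ins.

```python
# Illustrative sketch of the serial three-state decision described above.
# Names and the calling convention are hypothetical stand-ins.

LONG_BITE_MS = 600   # boundary between short and long occlusion (from the text)
MIN_BITE_MS = 30     # minimum duration counted as an occlusion at all

def classify_occlusion(duration_ms):
    """Map an occlusion duration to the states defined in the text."""
    if duration_ms < MIN_BITE_MS:
        return "idle"
    return "mouse_click" if duration_ms > LONG_BITE_MS else "speller_prearm"

def decide_state(cca_sum, threshold, occlusion_ms):
    """Serial order: test SSVEP (cursor movement) first, then occlusion."""
    if cca_sum > threshold:          # accumulated max canonical correlation
        return "cursor_move"
    return classify_occlusion(occlusion_ms)
```

The serial ordering mirrors the text: the SSVEP check runs first in every round, and occlusion is judged only when the accumulated correlation stays below threshold.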
The technical scheme provided by the invention has the following beneficial effects: the hybrid-paradigm brain-computer interface designed by the method introduces an electrophysiological control signal besides the EEG signal, broadening to a certain extent the application environments and user groups of the brain-computer interface. The method has the advantages of high stability, many options and a wide application range, and lays a foundation for brain-computer interfaces to enter the large-scale application stage as soon as possible. The invention can be used in fields such as electronic entertainment and industrial control; further research can yield a refined brain-computer interface system, which is expected to bring considerable social and economic benefits.
Drawings
FIG. 1 is a schematic diagram of a tri-modal serial brain-computer interface method based on multi-information fusion;
FIG. 2 (a) is a schematic diagram of the Oddball row and column paradigm;
FIG. 2 (b) is a schematic diagram of a steady state visual evoked potential flashing paradigm;
FIG. 3 is a schematic diagram of extracting electroencephalogram data of a subject;
FIG. 4 is a flow chart of obtaining a final pattern recognition result;
fig. 5 is a schematic diagram of analysis of an electroencephalogram signal by using a CCA algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
A hybrid brain-computer interface (hBCI) is a brain-computer interface used in conjunction with at least one other system or device to help a person send information. The other communication system may be: another brain-computer interface device (a pure hBCI); a device based on other physiological signals such as myoelectricity, electro-oculography or heart rate (a physiological hBCI); or another communication device (a mixed hBCI), which may be an assistive device for the disabled or a common input device such as a keyboard or mouse. According to its hybrid control scheme, a hybrid-paradigm brain-computer interface falls into two basic types, serial and parallel. In the serial mode, the different signals exert control in sequence, which effectively reduces the system's false-positive rate; in the parallel mode, two sensory modes exert control simultaneously and cooperatively, which effectively increases the number of tasks the system can recognize.
Daily activities and tasks can generally be broken down into fixed steps. The serial hybrid-paradigm brain-computer interface therefore resolves the application bottleneck of the single-paradigm BCI: it can realize daily operation activities and meet the demands of daily life to a certain extent.
In order to increase the number of operation instruction sets of the BCI system and improve the flexibility of operation, an embodiment of the present invention provides a three-modality serial brain-computer interface based on multi-information fusion, which is shown in fig. 1 and described in detail below:
101: stimulating the testee by adopting two visual stimulation paradigms;
the designed visual stimulus comprises the traditional Oddball matrix (evoked event-related potentials)[1]And Steady State Visual Evoked Potential (SSVEP)[2]The flashing light paradigm of (a). Schematic diagrams are shown in fig. 2-a and b, respectively. The traditional Oddball line and column paradigm is designed and realized by an Eprime software platform, and the flashing light paradigm is designed and realized by an FPGA platform. Both visual stimulus paradigms are presented to the user in real time.
P300 refers to an endogenous ERP component appearing 300-400 ms after stimulus onset, usually evoked by the Oddball paradigm. In the Oddball paradigm, an ERP is induced by deviant or target stimuli (small-probability stimuli; the target stimulus probability is generally 10-30%) embedded in a sequence of standard stimuli (large-probability stimuli). The smaller the probability of the deviant or target stimulus, the larger the amplitude of the evoked P300. The P300-Speller is an important human-machine interface paradigm that uses EEG signals to select and input characters. It enables a patient to converse directly with the outside world, can effectively improve the quality of life of paralyzed patients, and is convenient for clinical application; with stable features and no need for training, the P300-Speller shows good application prospects.
In brain-computer interface research, the steady-state visual evoked potential is one of the most common and most effective modes. It requires no training of the subject, and the experiment is simple and easy to operate. Its signal-to-noise ratio is high: a strong SSVEP signal can be recorded on the scalp with very few electrodes, one or two sufficing to acquire enough information, so operability is high. Given these advantages, deeper research on the SSVEP helps people understand the brain more clearly and realize true human-computer interaction, and it has strong theoretical and application value.
102: extracting electroencephalogram data of a subject;
the completion of the hybrid brain-computer interface first requires real-time acquisition of the brain electrical data. The electroencephalogram data acquisition adopts a 40-lead NuAmp electroencephalogram amplifier of Neuroscan company, and acquires 6-lead electroencephalogram signals: fz, Cz, Pz, Oz, P7 and P8, arranged according to the International 10-20 system. All the lead brain electrical signals take the right mastoid as reference and the left mastoid as ground, and the impedance value is below 5K. The subject sits quietly on a chair about 60cm from the screen, gazes the corresponding visual stimulus paradigm and performs an autonomous biting action to complete the web browsing operation. The method utilizes a TCP/IP protocol provided by Scan4.5 software to connect BCI2000 with acquisition software, realizes real-time acquisition and reading of electroencephalogram data through a FieldTrip toolkit, lays a foundation for subsequent online data processing, and a specific data acquisition schematic diagram is shown in figure 3.
103: setting related parameters, reading electroencephalogram data, preprocessing the electroencephalogram data, extracting features and identifying modes to obtain a final mode identification result;
fig. 4 shows the extraction and processing flow of the on-line electroencephalogram data. The whole processing flow totally comprises 4 user states, which are respectively as follows: cursor movement state, character spelling state, mouse click state, and idle state. The cursor movement state is a state for detecting and judging that a user is watching the SSVEP flashing light paradigm, and then converting a final mode recognition result into a control instruction of cursor movement; the spelling state of the character is to detect and judge that a user opens a P300 mode, and watch a target character (a character expected to be output by the user and the character expected to be output by the user in the process) in an Oddball paradigm, and then a final mode recognition result is converted into a control instruction input by a keyboard; the mouse click state is that the long-term occlusion action (the long-term occlusion action refers to the occlusion action with longer duration and generally longer than 600 ms) executed by a user is detected and judged, and then the final mode recognition result is converted into a control instruction for clicking the left mouse button; the idle state is to determine that the user is in the idle state when any information characteristic is not detected from the data of the user, and to not output any result and not execute any operation when the user is in the idle state. The user state is analyzed in the design data processing process, the sequence is judged, and the data processing flow is as follows:
1) First, the parameters are loaded (such as the location of the EEG data cache file, the flashing frequencies, and the trained classifier data).
2) The real-time EEG data are read, and the data processing stage formally begins at the 2nd second. Whenever the newly arrived data reach 1 s in length, one round of processing and analysis is performed; that is, the data are processed and analyzed once per second.
3) When processing data, the cursor-movement state is judged first, according to the serial order. The 2 s of EEG data before the current moment are intercepted and canonical correlation analysis is performed to obtain the maximum canonical correlation coefficient (Gao et al. at Tsinghua University proposed a CCA-based frequency recognition method using multi-channel signals for extracting and recognizing the frequency features of the SSVEP[3]; this feature extraction yields the Cvalue in fig. 4). This coefficient is accumulated with the maximum canonical correlation coefficients obtained from the previous 1 s windows. After 3 accumulations, the accumulated result is compared with a set threshold (determined from offline analysis). If it is greater than the threshold, the 3 results are tallied (each canonical correlation analysis yields a maximum canonical correlation coefficient and a corresponding target frequency, the target frequency being the frequency of the flashing light the user is gazing at) and pattern recognition is performed (the pattern recognition result is the target frequency occurring most often among the 3 results).
CCA analyzes two sets of variables: one set is the multi-channel EEG signal recorded from a certain source region, denoted x(t); the other is the stimulation signal. Each flashing-light stimulus of the SSVEP flashes at a certain frequency, driven by an electrical signal of that frequency (a square wave of fixed period). A periodic signal can be decomposed into a Fourier series; thus the square-wave stimulation signal, denoted y(t), with fixed period (frequency f), can be decomposed into Fourier components at f and its harmonics (sin(2πft), cos(2πft), sin(4πft), cos(4πft), ...), as in the formula
y(t) = [y_1(t),\; y_2(t),\; y_3(t),\; y_4(t),\; y_5(t),\; y_6(t)]^T = [\sin(2\pi f t),\; \cos(2\pi f t),\; \sin(4\pi f t),\; \cos(4\pi f t),\; \sin(6\pi f t),\; \cos(6\pi f t)]^T, \quad t = \tfrac{1}{S}, \tfrac{2}{S}, \ldots, \tfrac{T}{S} \qquad (1)
where f is the fundamental frequency, T is the number of data sample points, and S is the sampling rate of the signal. Fig. 5 illustrates how the EEG signal is analyzed with the CCA algorithm. Because the brain behaves dynamically as a low-pass filter, some high-frequency components of the square-wave signal are filtered out, so only the low-frequency fundamental and its harmonics are generally used (formula (1) uses the fundamental and its first two harmonics, six components in total). CCA can serve as a feature extraction method for SSVEP detection and recognition under the assumption that the stimulation signal as input and the SSVEP as the brain's electrical response form a linear system, i.e. that the SSVEP response contains frequency components consistent with the stimulation signal. The algorithm computes the canonical correlation coefficients between the EEG signal and the stimuli at every frequency in the system; the frequency corresponding to the largest coefficient is the frequency of the SSVEP, and hence of the flashing-light stimulus the user is gazing at.
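The reference matrix of formula (1) can be constructed directly. Below is a minimal numpy sketch; the function name and the example parameters (a 10 Hz target, 2 s of data at 1 kHz) are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Construction of the reference signal y(t) from formula (1): sine/cosine
# pairs at the fundamental f and its first two harmonics (2f, 3f).
# T is the number of sample points and S the sampling rate.

def reference_signals(f, T, S, n_harmonics=3):
    """Return a (2*n_harmonics, T) array of sin/cos pairs at f, 2f, 3f."""
    t = np.arange(1, T + 1) / S          # t = 1/S, 2/S, ..., T/S
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * f * t))
        rows.append(np.cos(2 * np.pi * h * f * t))
    return np.vstack(rows)

Y = reference_signals(f=10.0, T=2000, S=1000)   # 2 s at 1 kHz, 10 Hz target
```

With `n_harmonics=3` the six rows match the six components of formula (1) exactly.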
The core problem of an SSVEP-based brain-computer interface system is to detect the frequency of the SSVEP component in the subject's EEG. Suppose there are K stimulation frequencies f_1, f_2, \ldots, f_K and N leads of L seconds of EEG data. The detected stimulation frequency f_S satisfies

f_S = \arg\max_f \rho(f), \quad f = f_1, f_2, \ldots, f_K \qquad (2)

where \rho(f) is the canonical correlation coefficient between x (the EEG signal) and the reference signal y constructed at frequency f, as in formula (2).
4) If the result is not greater than the threshold, the occlusion operation is judged. The last 1 s of the intercepted 2 s of data is processed and analyzed: time-domain analysis of the occlusion operation is performed, and the time-domain feature is extracted and judged. If a long occlusion operation is judged, the system enters the mouse-click state and sends the pattern recognition result; if a short occlusion operation is judged, it enters the pre-opening mode of the character-spelling state; if no occlusion operation is judged, the system is in the idle state.
Occlusion-evoked scalp electromyography (OES-ME) is the myoelectric artifact picked up at the scalp from the movement of the masticatory muscles during an occlusion (jaw-clenching) action. As one of the common noise sources in EEG signals, it has a large amplitude and is easy to distinguish; because it differs greatly from the EEG and is easier to recognize, it and other myoelectric signals can serve as a supplementary input mode for a human-machine interface whose main input is EEG, triggering certain frequently executed instruction sets to increase the response speed and accuracy of the system and improve its user-friendliness. Adding occlusion-evoked scalp EMG may, however, cause problems: as a myoelectric signal it can interfere with EEG acquisition and affect the stability of the system. The occlusion action has strong temporal characteristics. The duration feature of the signal is extracted by time-domain analysis: a signal lasting longer than 30 ms is judged to be an occlusion operation, otherwise a non-occlusion action. An occlusion operation lasting longer than 600 ms is defined as a long occlusion operation, and one lasting less than 600 ms as a short occlusion operation.
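The duration-threshold rule above can be sketched as follows. The amplitude threshold and function names are hypothetical, since the patent specifies only the 30 ms and 600 ms duration boundaries.

```python
import numpy as np

# Sketch of the time-domain duration feature: threshold the rectified
# signal, measure the longest contiguous run above threshold, and apply
# the 30 ms / 600 ms boundaries from the text.

def occlusion_duration_ms(sig, fs, amp_threshold):
    """Longest contiguous run above threshold, in milliseconds."""
    active = np.abs(sig) > amp_threshold
    best = run = 0
    for a in active:
        run = run + 1 if a else 0
        best = max(best, run)
    return best * 1000.0 / fs

def classify(duration_ms):
    """30 ms and 600 ms boundaries as defined in the text."""
    if duration_ms <= 30:
        return "none"
    return "long" if duration_ms > 600 else "short"
```

A 700 ms burst above threshold would thus be classified as a long occlusion (mouse click), a 200 ms burst as a short one (speller pre-opening).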
The user needs to make a short occlusion action before spelling a character; that is, the character-spelling state is entered only when a short occlusion action is detected. A short occlusion action is, however, prone to misoperation. To prevent state misjudgment caused by misoperation, a pre-opening mode of character spelling is therefore set: after a short occlusion action is detected, the labels in the next 5 s of data are checked for a start label. If there is none, the state is skipped and judged directly as idle; if there is one, the system continues to wait for the end label. Once both are detected, the EEG data between the start label and the end label are intercepted for P300 processing (comprising preprocessing, feature extraction, and pattern recognition by Fisher linear discriminant analysis[4]). The judged result is fed back to the user through a voice prompt. If it matches the result the user expects, the user need only execute an occlusion operation within 3 s for the result to be output; that is, the system judges whether an occlusion operation occurs within 3 s and, if so, outputs the pattern recognition result; otherwise the judgment is deemed wrong and no pattern recognition result is output.
The acquired EEG data comprise the EEG signal, labels (also called event codes), and the positions of the labels (their correspondence to the EEG signal). A label is data representing row/column information or stimulation start information, sent to the EEG amplifier through the parallel port by the E-Prime program while the Oddball row-column paradigm is displayed. For example, at each stimulation start the screen prompts a short occlusion operation, and after 3 s a start label is sent to mark the beginning of a new round of the character-spelling task; if the user wishes to spell a character, he should gaze at the corresponding character in the Oddball paradigm. During stimulation, each row or column sends a label representing its row/column information after it flashes: if the fifth row lights up, the label sent is 5; if the second column lights up, the label is 8. An end label is also sent at the end of a round of stimulation, marking the end of that round. Because the E-Prime program sends the label data to the EEG amplifier through the parallel port, the amplifier integrates the labels with the EEG data synchronously in real time, so that label positions correspond to the signal; the EEG signal under one round of stimulation can thus be intercepted between the start label and the end label for analysis and processing.
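Intercepting one round of stimulation between the start and end labels can be sketched as follows. The numeric values chosen for the start/end event codes are hypothetical placeholders, since the text does not specify them; only the 1-6 (rows) and 7-12 (columns) range follows from the examples above.

```python
# Sketch of cutting out one round of stimulation using the event codes:
# the EEG between a start tag and an end tag is extracted together with
# the row/column labels (1-6 rows, 7-12 columns) that fall inside it.

START_TAG, END_TAG = 100, 101      # assumed event-code values

def extract_round(labels):
    """labels: list of (sample_index, code). Return (i0, i1, rc_events)."""
    start = end = None
    rc = []
    for idx, code in labels:
        if code == START_TAG:
            start = idx
        elif code == END_TAG:
            end = idx
        elif 1 <= code <= 12 and start is not None and end is None:
            rc.append((idx, code))   # row/column flash inside the round
    return start, end, rc
```

The returned sample indices delimit the slice of EEG to hand to the P300 processing chain.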
Fisher linear discriminant analysis is generally applicable to pattern recognition of two classes of samples. For two-class classification, the optimal classification effect makes the projected one-dimensional features of the two classes satisfy the largest between-class distance and the smallest within-class distance, thereby satisfying the linear separability of the two classes to the greatest extent.
The Fisher criterion function is the ratio of the between-class scatter to the within-class scatter. Assume two classes of samples W_1 and W_2, with N_1 and N_2 samples respectively. Let \mu_1 and \mu_2 be the means of the two classes of original samples; let \tilde{\mu}_1 and \tilde{\mu}_2 be the means of the projected one-dimensional feature data, and \tilde{S}_1^2 and \tilde{S}_2^2 the scatters of the two classes after projection; let S_w be the within-class scatter matrix and S_b the between-class scatter matrix. The criterion is defined as

J_F(\omega) = \frac{\omega^T S_b \, \omega}{\omega^T S_w \, \omega}

The classification effect is optimal when the between-class scatter is largest and the within-class scatter smallest, i.e. when the Fisher function attains its maximum; the value \omega^* of the projection vector \omega at that point is the optimal projection vector (\omega^T denotes the transpose of \omega). Setting the derivative of the criterion to zero yields the optimal projection vector for the two-class problem, \omega^* = S_w^{-1}(\mu_1 - \mu_2).
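The closed-form solution can be sketched directly. Below is a minimal two-class Fisher discriminant under the definitions above; the nearest-projected-mean decision rule is added for illustration and is one common choice, not necessarily the one used in the design.

```python
import numpy as np

# Sketch of the two-class Fisher discriminant: the optimal projection is
# w* = Sw^{-1} (mu1 - mu2), with Sw the pooled within-class scatter.

def fisher_direction(X1, X2):
    """X1, X2: (n_samples, n_features) arrays for the two classes."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - mu1).T @ (X1 - mu1)
    S2 = (X2 - mu2).T @ (X2 - mu2)
    Sw = S1 + S2                      # within-class scatter matrix
    return np.linalg.solve(Sw, mu1 - mu2)

def classify_lda(x, w, X1, X2):
    """Assign x to class 1 or 2 by the nearest projected class mean."""
    m1, m2 = X1.mean(axis=0) @ w, X2.mean(axis=0) @ w
    return 1 if abs(x @ w - m1) < abs(x @ w - m2) else 2

# Two small, well-separated clusters as a worked example.
X1 = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
X2 = X1 + 4.0
w_opt = fisher_direction(X1, X2)      # [-2, -2] for this symmetric example
```

In the P300 setting, class 1 would hold epochs under target stimulation and class 2 epochs under non-target stimulation.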
In the P300 pattern recognition of this design, all data samples are divided into two classes: EEG signals under target stimulation and under non-target stimulation. A target stimulus is the stimulation state when the row or column containing the character the user wishes to output lights up; the stimulation states when the remaining rows and columns light up are non-target stimuli.
5) And after the result of one round is output, continuing to judge the state of the next round and output the result.
104: converting the final pattern recognition result into a control instruction, and executing the control instruction to complete a specific task (web browsing).
This step is completed entirely under the MFC platform framework and mainly realizes two functions: serial-port communication and instruction conversion. The final pattern recognition result is transmitted and received through the serial port and converted into the corresponding control instruction. The control instructions comprise P300 control instructions, SSVEP control instructions and OES-ME control instructions. Execution of a control instruction can be shown directly on the computer screen or given as a voice prompt: for example, moving the cursor on the screen (by changing the current cursor position); calling a keyboard-operation routine to simulate keyboard operation and input characters; or calling a mouse-event routine to simulate mouse operation (clicking the left mouse button) to realize the confirm-click function.
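The instruction-conversion layer can be sketched as a simple dispatch. The tuple message format and instruction names below are hypothetical, standing in for the serial-port protocol and the MFC keyboard/mouse routines named in the text.

```python
# Sketch of the instruction-conversion layer: map each pattern
# recognition result arriving over the serial link to one of the three
# control-instruction families named in the text.

def to_instruction(result):
    """result: (modality, payload) -> a control-instruction descriptor."""
    modality, payload = result
    if modality == "SSVEP":                 # gaze on flashing light
        return ("move_cursor", payload)     # payload: movement direction
    if modality == "P300":                  # spelled character
        return ("type_char", payload)       # payload: the character
    if modality == "OES-ME":                # long occlusion detected
        return ("left_click", None)
    return ("noop", None)                   # idle: no operation
```

In the actual system each descriptor would be handed to the corresponding simulated keyboard or mouse event under MFC.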
Reference to the literature
[1] Farwell L.A., Donchin E. Talking off the top of your head: A mental prosthesis utilizing event-related brain potentials. Electroencephalogr. Clin. Neurophysiol., 1988, 70: 510-523.
[2] Vialatte F.B., Maurice M., Dauwels J., et al. Steady-state visually evoked potentials: focus on essential paradigms and future perspectives. Progress in Neurobiology, 2010, 90(4): 418-438.
[3] Lin Z., Zhang C., Wu W., et al. Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Trans. Biomed. Eng., 2007, 54(6): 1172-1176.
[4] Sun Changcheng. Visual P300-Speller induced ERP study based on three-dimensionally encoded stimulus sequences [Master's thesis]. Tianjin: Tianjin University, 2011.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and that the serial numbers of the above-described embodiments are for description only and do not indicate their relative merits.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention are intended to be included within its scope.

Claims (2)

1. A three-modality serial brain-computer interface method based on multi-information fusion, characterized by comprising the following steps:
(1) stimulating a subject with two visual stimulation paradigms;
(2) extracting electroencephalogram data of the subject;
(3) setting relevant parameters, reading the electroencephalogram data, and performing preprocessing, feature extraction and pattern recognition on the electroencephalogram data to obtain a final pattern recognition result;
(4) converting the final pattern recognition result into a control instruction, and completing a specific task by executing the control instruction;
wherein the step of setting relevant parameters, reading the electroencephalogram data, and performing preprocessing, feature extraction and pattern recognition to obtain a final pattern recognition result specifically comprises:
1) first, loading the parameters;
2) reading the real-time electroencephalogram data, with the data processing stage formally beginning from the second second;
3) when processing the data, first judging the cursor-movement state in sequence: intercepting the 2 s of electroencephalogram data preceding the current moment, performing canonical correlation analysis to obtain the maximum canonical correlation coefficient, and accumulating it with the maximum canonical correlation coefficient obtained from the previous 1 s of data; after 3 accumulations, comparing the accumulated result with a set threshold; if the accumulated result is greater than the threshold, the 3-round result is counted, pattern recognition is carried out, and the pattern recognition result is sent;
4) if the accumulated result is not greater than the threshold, judging the occlusion operation: processing and analyzing the last 1 s of the intercepted 2 s of data, performing time-domain analysis of the occlusion operation, extracting time-domain features and making a judgment; if a long-duration occlusion operation is detected, the system enters the mouse-click state and the pattern recognition result is sent; if a short-duration occlusion operation is detected, the system enters the pre-opening mode of the character-spelling state; if no occlusion operation is detected, the system is in the idle state;
the long-duration occlusion operation is an occlusion lasting longer than 600 ms; the short-duration occlusion operation is an occlusion lasting less than 600 ms;
5) after the result of one round is output, continuing with the state judgment and result output of the next round;
wherein,
the cursor-movement state: when it is detected and judged that the user is gazing at the SSVEP flicker paradigm, the final pattern recognition result is converted into a cursor-movement control instruction;
the character-spelling state: when it is detected and judged that the user has started the P300 mode and is gazing at a target character in the oddball paradigm, the final pattern recognition result is converted into a keyboard-input control instruction;
the mouse-click state: when it is detected and judged that the user has performed a long-duration occlusion action, the final pattern recognition result is converted into a control instruction for clicking the left mouse button;
the idle state: when no informative feature is detected in the user's data, the user is judged to be in the idle state; when the idle state is detected, no result is output and no operation is executed.
2. The three-modality serial brain-computer interface method based on multi-information fusion according to claim 1, wherein the step of extracting the electroencephalogram data of the subject specifically comprises:
connecting BCI2000 to the acquisition software using the TCP/IP protocol provided by the Scan4.5 software, and realizing real-time acquisition and reading of the electroencephalogram data through the FieldTrip toolbox.
CN201310722162.1A 2013-12-18 2013-12-18 A kind of three mode serial brain-computer interface methods based on Multi-information acquisition Active CN103699226B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310722162.1A CN103699226B (en) 2013-12-18 2013-12-18 A kind of three mode serial brain-computer interface methods based on Multi-information acquisition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310722162.1A CN103699226B (en) 2013-12-18 2013-12-18 A kind of three mode serial brain-computer interface methods based on Multi-information acquisition

Publications (2)

Publication Number Publication Date
CN103699226A CN103699226A (en) 2014-04-02
CN103699226B true CN103699226B (en) 2016-08-24

Family

ID=50360779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310722162.1A Active CN103699226B (en) 2013-12-18 2013-12-18 A kind of three mode serial brain-computer interface methods based on Multi-information acquisition

Country Status (1)

Country Link
CN (1) CN103699226B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090653B (en) * 2014-06-16 2017-02-22 华南理工大学 Detecting method for multi-modal brain switch based on SSVEP and P300
FR3024569B1 (en) * 2014-07-29 2016-08-26 Commissariat Energie Atomique METHOD FOR LOCATING BRAIN ACTIVITY ASSOCIATED WITH A TASK
FR3025037B1 (en) * 2014-08-22 2016-09-30 Commissariat Energie Atomique METHOD FOR LOCATING BRAIN ACTIVITY ASSOCIATED WITH A TASK
CN104317388B (en) * 2014-09-15 2018-12-14 联想(北京)有限公司 A kind of exchange method and wearable electronic equipment
CN104536572B (en) * 2014-12-30 2017-12-05 天津大学 It is a kind of based on event related potential across the universal brain-machine interface method of individual
CN104850230B (en) * 2015-05-26 2018-02-09 福州大学 The brain-computer interface control method of simulating keyboard mouse
CN105528072A (en) * 2015-12-02 2016-04-27 天津大学 Brain-computer interface speller by utilization of dynamic stop strategy
CN105511620A (en) * 2015-12-08 2016-04-20 北京小鸟看看科技有限公司 Chinese three-dimensional input device, head-wearing device and Chinese three-dimensional input method
CN105617506A (en) * 2016-01-26 2016-06-01 王焕霞 3D (Three-Dimensional) brainwave synchronization nursing instrument
CN106571075A (en) * 2016-10-18 2017-04-19 广东工业大学 Multi-mode language rehabilitation and learning system
CN106569604B (en) * 2016-11-04 2019-09-17 天津大学 Audiovisual bimodal semantic matches and semantic mismatch collaboration stimulation brain-machine interface method
CN107066940B (en) * 2017-02-27 2020-09-11 广东工业大学 MRP electroencephalogram signal feature extraction method and device based on correlation coefficient
CN107212883B (en) * 2017-05-24 2019-10-18 天津理工大学 A kind of mechanical arm writing device and control method based on brain electric control
CN108388846B (en) * 2018-02-05 2021-06-08 西安电子科技大学 Electroencephalogram alpha wave detection and identification method based on canonical correlation analysis
CN109766751B (en) * 2018-11-28 2022-02-01 西安电子科技大学 Steady-state vision-evoked electroencephalogram identity recognition method and system based on frequency domain coding
CN110221684A (en) * 2019-03-01 2019-09-10 Oppo广东移动通信有限公司 Apparatus control method, system, electronic device and computer readable storage medium
CN111967333B (en) * 2020-07-20 2023-04-07 中国人民解放军军事科学院国防科技创新研究院 Signal generation method, system, storage medium and brain-computer interface spelling device
CN113110738A (en) * 2021-04-02 2021-07-13 天津理工大学 Multi-mode electroencephalogram signal detection method based on threshold discrimination method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1776572A (en) * 2005-12-08 2006-05-24 清华大学 Computer man-machine interacting method based on steady-state vision induced brain wave
CN1949139A (en) * 2006-11-08 2007-04-18 天津大学 Brain-machine interface mouse controlling device
CN102609090A (en) * 2012-01-16 2012-07-25 中国人民解放军国防科学技术大学 Electrocerebral time-frequency component dual positioning normal form quick character input method
CN102778949A (en) * 2012-06-14 2012-11-14 天津大学 Brain-computer interface method based on SSVEP (Steady State Visual Evoked Potential) blocking and P300 bicharacteristics
CN102799267A (en) * 2012-06-29 2012-11-28 天津大学 Multi-brain-computer interface method for three characteristics of SSVEP (Steady State Visual Evoked Potential), blocking and P300


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Frequency Recognition Based on Canonical Correlation Analysis for SSVEP-Based BCIs;Zhonglin Lin et al.;《Biomedical Engineering》;20070521;vol. 54 no. 6;1172-1176 *
A novel fusion evoked paradigm based on P300 and SSVEP patterns;Wang Minjue et al.;《Proceedings of the 32nd Chinese Control Conference》;20130729;3668-3672 *
Hybrid paradigm brain-computer interface based on SSVEP blocking and P300 features;Xu Minpeng;《Acta Electronica Sinica》;20131130;vol. 41 no. 11;2247-2251 *

Also Published As

Publication number Publication date
CN103699226A (en) 2014-04-02

Similar Documents

Publication Publication Date Title
CN103699226B (en) A kind of three mode serial brain-computer interface methods based on Multi-information acquisition
Zhou et al. A hybrid asynchronous brain-computer interface combining SSVEP and EOG signals
CN101464728B (en) Human-machine interaction method with vision movement related neural signal as carrier
CN100366215C (en) Control method and system and sense organs test method and system based on electrical steady induced response
CN101339455B (en) Brain machine interface system based on human face recognition specific wave N170 component
WO2018094720A1 (en) Clinical electroencephalogram signal-based brain-machine interface system for controlling robotic hand movement and application thereof
CN102799267B (en) Multi-brain-computer interface method for three characteristics of SSVEP (Steady State Visual Evoked Potential), blocking and P300
CN107981997B (en) A kind of method for controlling intelligent wheelchair and system based on human brain motion intention
CN109582131B (en) Asynchronous hybrid brain-computer interface method
CN103699217A (en) Two-dimensional cursor motion control system and method based on motor imagery and steady-state visual evoked potential
Jin et al. P300 Chinese input system based on Bayesian LDA
CN113208593A (en) Multi-modal physiological signal emotion classification method based on correlation dynamic fusion
CN111930238A (en) Brain-computer interface system implementation method and device based on dynamic SSVEP (secure Shell-and-Play) paradigm
CN110262658B (en) Brain-computer interface character input system based on enhanced attention and implementation method
CN105929937A (en) Mobile phone music playing system based on steady-state visual evoked potential (SSVEP)
Gong et al. An idle state-detecting method based on transient visual evoked potentials for an asynchronous ERP-based BCI
CN117918863A (en) Method and system for processing brain electrical signal real-time artifacts and extracting features
CN110472595B (en) Electroencephalogram recognition model construction method and device and recognition method and device
CN116360600A (en) Space positioning system based on steady-state visual evoked potential
CN204759349U (en) Aircraft controlling means based on stable state vision evoked potential
CN112140113B (en) Robot control system and control method based on brain-computer interface
KR101034875B1 (en) Intention reasoning method using pattern of brain waves
CN112070141A (en) SSVEP asynchronous classification method fused with attention detection
Park et al. Application of EEG for multimodal human-machine interface
CN103300849A (en) Electroencephalogram signal processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant