Brain-computer and brain-machine interfaces (BCIs, BMIs) are typically developed to help paralyzed people, but they can also be considered in a broader perspective, as a means to enhance human capacities by converting intentions into actions. Within this perspective, it makes sense to speak of a mind-to-machine rather than a brain-to-machine relation, focusing on intentions as the most accessible aspect of human internal activity to which the technology should respond, even though the brain phenomena related to intentions are not yet sufficiently well defined. One interesting feature of the mind's interaction with the physical world is that an intention to do something with an object is normally accompanied by a gaze fixation on it. A fixation typically starts well in advance of approaching the object (Land et al., 1999). This holds true for the use of graphical user interfaces (GUIs): clicks on screen buttons or on web links are also accompanied by gaze fixations on them. Currently, even cheap eye trackers can reliably differentiate between fixations on many screen buttons presented simultaneously. Assistive technologies based on specially adapted GUIs and eye tracking have been successfully used by some people with motor disabilities. Unfortunately, fixations accompanying intentions to act are similar to spontaneous fixations that subserve visual perception or accompany mind wandering. Users of gaze-controlled interfaces need to make their intention-related fixations very long, use additional confirming fixations, or use other tiresome means to indicate the intention-related fixations. Therefore, gaze-based interaction is efficient only for certain types of tasks (e.g., typing) and is not appealing to healthy people. Note that intention-related fixations on GUI elements are themselves a very natural phenomenon that occurs without any special training.
They would be a perfect tool to convert intentions into "clicks" on GUI elements if they were marked somehow so that they could not be confused with spontaneous fixations. Surprisingly, few attempts have been made so far to find markers for the intention-related fixations. B. B. Velichkovsky, Rumyancev and Morozov (2013) described a dependence of fixation duration and the amplitude of the related saccades on the presence of attention and the intention to successfully send a command through the interface, but only within a fixed class of tasks. Zander and his colleagues (Protzak et al., 2013), following the earlier proposal by Velichkovsky and Hansen (1996), classified interface-controlling and spontaneous fixations using electroencephalogram (EEG) features. However, Zander et al. studied only long (1 s) fixations and used simplified experimental settings. It was not clear whether intention-related fixations could also be recognized in this way when gaze control was applied to a more realistic task. To study the issue in more complex settings, we developed a gaze-controlled computer game, "EyeLines", and recorded EEG while the participants played it with their gaze only. In the "control-on" mode of the game, moves were made with fixations that exceeded a 500 ms threshold. In the other, "control-off" mode, fixations of any length did not lead to actions, and spontaneous fixations of 500 ms or longer were collected. Hundreds of fixations of each type (spontaneous and controlling) were collected from each of eight participants. A special procedure was developed to ensure that the analyzed EEG intervals were not contaminated by eye movement artifacts. The EEG during controlling, but not spontaneous, fixations showed pronounced negativity over the posterior cortical areas starting early in the course of fixations (possibly the Stimulus-Preceding Negativity, SPN).
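The dwell-based control logic of the two game modes can be sketched as follows. This is an illustrative sketch only: the 500 ms threshold and the control-on/control-off distinction come from the text, while the data structures and function names are hypothetical.

```python
from dataclasses import dataclass

DWELL_THRESHOLD_MS = 500  # fixations longer than this trigger a move (per the text)

@dataclass
class Fixation:
    x: float           # gaze position on screen, pixels (illustrative)
    y: float
    duration_ms: float

def process_fixation(fix: Fixation, control_on: bool):
    """Return an action tuple, or None for fixations below the dwell threshold.

    In control-on mode, a fixation exceeding the threshold issues a command
    at the fixated location; in control-off mode no fixation acts, but long
    fixations are logged as spontaneous ones for later EEG comparison.
    """
    if fix.duration_ms < DWELL_THRESHOLD_MS:
        return None
    if control_on:
        return ("select", fix.x, fix.y)        # controlling fixation
    return ("log_spontaneous", fix.x, fix.y)   # spontaneous fixation
```

In this scheme, the two modes yield matched sets of long fixations that differ only in whether they carried an intention to act, which is what makes the EEG comparison between controlling and spontaneous fixations possible.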
Using a simple feature extraction algorithm, a greedy feature selection strategy, and a linear classifier committee, we obtained, on average, about a 35% true positive rate for controlling fixations while keeping the false alarm rate at about 10% on the test data with 5-fold cross-validation, well above chance level. While the performance can be further improved by more elaborate feature extraction algorithms and by using EEG intervals preceding the saccade to the target location, the need for single-trial classification of short EEG segments makes it unlikely that such a hybrid interface could ever achieve the very high true positive rates characteristic of conventional computer mice, touchscreens, or purely gaze-based control. However, a two-threshold strategy was developed to enable smooth interaction even with the current, relatively low true positive rate. When a fixation exceeds the first, short (e.g., 500 ms) threshold, the BCI is applied to detect the intention to act on the fixated location. If the BCI misses the intention, the user can still issue the command by continuing to fixate the same position until the second (e.g., 1000 ms) threshold is exceeded. Since spontaneous fixations of this length are rare, it is safe to execute a command at this point even without confirmation from the BCI; alternatively, confirmation from the BCI can be required again, but with a lowered BCI threshold. With such a strategy, users may develop a more stable EEG pattern associated with controlling fixations, because this will lead to faster move execution. For interaction with other people, anthropomorphic robots, and computer avatars, this direct conversion of intentions into actions is not always natural (Velichkovsky, 1995). In such cases, "communicative" strategies based on "joint attention" gaze patterns can be used instead of direct control with fixations or saccades (Fedorova et al., 2015).
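The two-threshold decision rule described above can be summarized in a short sketch. The 500 ms and 1000 ms thresholds follow the text; the classifier score and the particular cutoff values are hypothetical stand-ins for the output of the linear classifier committee.

```python
SHORT_THRESHOLD_MS = 500    # first threshold: consult the EEG classifier
LONG_THRESHOLD_MS = 1000    # second threshold: long dwell alone nearly suffices

def should_execute(dwell_ms: float, bci_score: float,
                   bci_threshold: float = 0.7,
                   low_bci_threshold: float = 0.3) -> bool:
    """Decide whether to execute the command at the fixated location.

    bci_score is the classifier's confidence that the fixation is
    intention-related (an assumed [0, 1] score; cutoffs are illustrative).
    """
    if dwell_ms >= SHORT_THRESHOLD_MS and bci_score >= bci_threshold:
        return True  # the BCI confirms the intention early
    if dwell_ms >= LONG_THRESHOLD_MS:
        # Spontaneous fixations this long are rare, so only a relaxed
        # BCI check (or, per the text, none at all) is required.
        return bci_score >= low_bci_threshold
    return False
```

The design intent is that a confident early detection gives a fast "click", while a missed detection degrades gracefully into an ordinary long-dwell selection rather than a lost command.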
Our results imply that "eye-brain-computer" interfaces (EBCIs) can be helpful not only for severely paralyzed patients, as typical BCIs are, but also for healthy persons. Fast conversion of intentions into computer actions without any supplemental tasks (such as computer mouse manipulation, special mental imagery, or attention to external stimulation for activating a BCI) may make certain tasks involving interaction with computers especially fluent. This opens new prospects for realizing the full benefits of augmenting brain function with the power of modern technologies.

Parts of this work related to the methods of intention marker detection and their use in EBCIs were supported by the Russian Science Foundation, grant RScF 14-28-00234.

References

Fedorova A.A., Shishkin S.L., Nuzhdin Y.O., Velichkovsky B.M. (2015). Gaze-based robot control: the communicative approach. In Proceedings of the 7th International IEEE/EMBS Conference on Neural Engineering (April 22-24, 2015, Montpellier, France), pp. 751-754.

Land M., Mennie N., Rusted J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 1311-1328.

Protzak J., Ihme K., Zander T.O. (2013). A passive brain-computer interface for supporting gaze-based human-machine interaction. In Universal Access in Human-Computer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion, LNCS 8009, Springer, pp. 662-671.

Velichkovsky B.B., Rumyancev M.A., Morozov M.A. (2013). [A new approach to the Midas touch problem: identification of gaze-based commands using focal fixation detection.] The Herald of Moscow University (Vestnik Moskovskogo universiteta), ser. 14, 3, 33-45 (in Russian).

Velichkovsky B.M. (1995). Communicating attention: Gaze position transfer in cooperative problem solving. Pragmatics and Cognition, 3(2), 199-222.

Velichkovsky B.M., Hansen J.P. (1996). New technological windows into mind: There is more in eyes and brains for human-computer interaction. In Proceedings of ACM CHI-96: Human Factors in Computing Systems, pp. 496-503. New York: ACM Press.