Brain-computer interfacing is often believed to be a means of augmenting brain function. The technology entrepreneur Elon Musk even recently proposed to save humans from enslavement by machines by closely connecting human brains and computers using advanced brain-computer interfaces (BCIs) (CNBC, 2017). However, due to the elusive nature of intentions (Uithol et al., 2014) and the non-deterministic nature of BCIs, it is often unclear who is responsible for an action: the human, or the artificial intelligent system that is supposed to implement the human's intentions (Haselager, 2013). This problem appears even at slow interaction rates (Haselager, 2013). As interaction speed increases to the level required for pronounced augmentation, most of the system's actions would become responses to pre-conscious phenomena, such as early physiological markers of intention formation; but how can we be sure that a BCI has indeed recognized such an early, hidden intention and not simply made a recognition error? Moreover, conscious control over the artificial system's interpretation of human intentions is constrained by the inherent limitations of consciousness, such as working memory capacity. Finally, multiple and even conflicting wishes or urges can correspond to a single possible action, and the will of a person cannot be determined until an intention is formed, as only intentions can be mapped one-to-one onto planned actions (Razeev, 2017). Thus, the limitations of consciousness and volition pose serious obstacles to BCI-based brain augmentation. On the other hand, it might be interesting to consider both consciousness and volition as goals for brain augmentation based on the optimization of human-machine interaction. One possible approach to unveiling their potential could be to reduce the load of routine tasks that may claim some of their resources.
A promising variant of this approach is the development of highly responsive (Jacob, 1991) or even "noncommand" (Nielsen, 1993; Jacob, 1993) interfaces. Such interfaces can be built on gaze interaction technologies, provided that the "Midas touch problem" is solved: the inability of such technologies to differentiate intended from spontaneous eye movements, which leads to frequent false activations of an interface by spontaneous gaze behavior. The more responsive the interface, the more severe this problem becomes (Jacob, 1991). It might be overcome with passive brain-computer interfaces (Zander, Kothe, 2011), e.g. by on-the-fly classification of gaze dwells into command-related and spontaneous ones (Protzak et al., 2013; Shishkin et al., 2016). However, because a passive interface is involved, the problem of conscious control remains crucial even in this case. Although practically useful solutions are yet to be found, human-machine systems that include BCI and gaze-based noncommand interaction already provide promising instrumentation for experimental studies of conscious and unconscious processes in emerging human-machine systems. This can be illustrated by our experience from online experiments with expectation-based hybrid eye-brain-computer interfaces (EBCIs; Nuzhdin et al., 2017), a single-trial P300 BCI (Ganin et al., 2013), eye-gaze-based interaction (Velichkovsky, 1995; Isachenko et al., 2018), and ultrafast passive-active movements made with the assistance of special experimental exoskeletons (Dubynin, Shishkin, 2017; Dubynin et al., in prep.), as well as by the results of other groups' online studies of unconscious brain signal conditioning (Kaplan et al., 2005; Ramot et al., 2016) and the vetoing of action (Schultze-Kraft et al., 2016).
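The on-the-fly dwell classification idea can be illustrated with a minimal sketch. The function below is purely hypothetical: the threshold values and the single summary feature (an EEG negativity amplitude time-locked to dwell onset) are illustrative assumptions, not the actual classifiers used in the cited studies, which employ multivariate EEG classifiers trained per user.

```python
# Minimal sketch of on-the-fly gaze-dwell classification for a passive
# BCI (hypothetical thresholds and feature; real EBCIs train multivariate
# classifiers on dwell-locked EEG epochs rather than using fixed cutoffs).

def classify_dwell(dwell_ms, eeg_negativity_uv,
                   min_dwell_ms=500, negativity_threshold_uv=-2.0):
    """Label a gaze dwell as an intended command or a spontaneous fixation.

    A dwell counts as a command only if it lasts long enough AND the
    dwell-locked EEG shows a sufficiently negative deflection (used here
    as a stand-in marker of intention).
    """
    if dwell_ms >= min_dwell_ms and eeg_negativity_uv <= negativity_threshold_uv:
        return "command"
    return "spontaneous"

# A long dwell accompanied by a clear EEG negativity is accepted...
print(classify_dwell(600, -3.5))  # -> command
# ...while an equally long dwell without it is rejected, suppressing
# the "Midas touch" false activations caused by spontaneous gaze.
print(classify_dwell(600, -0.5))  # -> spontaneous
```

The design point is that gaze alone fixes *where* a command would go, while the passive EEG channel vetoes or confirms *whether* it was intended, so responsiveness can be increased without multiplying false activations.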
This experience suggests that such human-machine systems indeed provide promising instrumentation for experimental studies of conscious and unconscious processes in emerging human-machine systems.

Acknowledgements
This work was supported by grant 18-19-00593 from the Russian Science Foundation.

References
1. CNBC, 13 Feb 2017, https://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html
2. S. Uithol, D.C. Burnston, P. Haselager, Neuropsychologia, 2014, 56(1), 129-139.
3. P. Haselager, Minds and Machines, 2013, 23(3), 405-418.
4. D.N. Razeev, Zh. Vyssh. Nervn. Deyat., 2017, 67(6), 721-727 (in Russian).
5. R.J. Jacob, ACM Transactions on Information Systems (TOIS), 1991, 9(2), 152-169.
6. J. Nielsen, Communications of the ACM, 1993, 36(4), 83-99.
7. R.J. Jacob, Advances in Human-Computer Interaction, 1993, 4, 151-190.
8. T.O. Zander, C. Kothe, J. Neural Eng., 2011, 8:025005.
9. J. Protzak, K. Ihme, T.O. Zander, UAHCI, 2013, 662-671.
10. S.L. Shishkin, Y.O. Nuzhdin, E.P. Svirin et al., Front. Neurosci., 2016, 10:528.
11. Y.O. Nuzhdin, S.L. Shishkin, A.A. Fedorova et al., 7th Graz BCI Conf., 2017, 361-366.
12. I.P. Ganin, S.L. Shishkin, A.Y. Kaplan, PLoS ONE, 2013, 8(10), e77755.
13. B.M. Velichkovsky, Pragmatics and Cognition, 1995, 3(2), 199-223.
14. A.V. Isachenko, D.G. Zhao, E.V. Melnichuk et al., IEEE SMC, 2018 (in press).
15. I.A. Dubynin, S.L. Shishkin, Psychology in Russia: State of the Art, 2017, 10, 40-56.
16. A.Y. Kaplan, J.G. Byeon, J.J. Lim et al., Int. J. Neurosci., 2005, 115(6), 781-802.
17. M. Ramot, S. Grossman, D. Friedman, R. Malach, PNAS, 2016, 113(17), E2413-E2420.
18. M. Schultze-Kraft, D. Birman, M. Rusconi et al., PNAS, 2016, 113(4), 1080-1085.