Engineering human intelligence by understanding how the brain learns (Brain↦AI↦Brain)
Challenges. Recent advances in machine learning have paved the way toward human-level problem solving, demonstrating the ability to succeed at a few arduous tasks. However, these engineering approaches differ in many ways from how the brain solves the same problems. This raises fundamental questions, including how the human brain handles a wide variety of tasks, how it learns from only a few observations, and how it motivates itself to explore and define new goals.
Approach. To address these questions, I bring together ideas from the developing fields of machine learning and computational neuroscience. The approach is two-fold: 1) “Brain↦AI”, aimed at understanding how the brain learns from a machine learning standpoint, and 2) “AI↦Brain”, aimed at understanding why such neural processes occur (Fig. 1). I have explored this idea from both AI/robotics and behavioral/neuroscience standpoints [Lee, Science Robotics 2019; Lee, Current Opinion in Behavioral Sciences 2019].
Aim. My research aims to establish fundamental principles of learning and inference at the behavioral, computational, and neural levels. Specifically, my research team has focused on developing a computational theory of how the brain, operating at a high level of the cognitive hierarchy, coordinates multiple brain subsystems, each of which handles problems in a different way. I refer to this ability as "prefrontal meta-control" (Fig. 1).
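To make the idea concrete, the arbitration at the heart of prefrontal meta-control can be sketched as a reliability-weighted mixture of a model-based (MB) and a model-free (MF) controller: whichever system has produced smaller recent prediction errors earns more influence over choice. The snippet below is a minimal illustrative sketch, not the published model; the reliability update rule, learning rate, and class names are assumptions made for this example.

```python
import math
import random

def softmax_choice(values, beta=3.0):
    """Pick an action index via softmax over combined action values."""
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    r, acc = random.random() * total, 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(values) - 1

class Arbitrator:
    """Toy two-system controller.

    A mixing weight w shifts toward the MB system when its recent
    prediction errors are small relative to the MF system's, and
    vice versa. The reliability update is a simple exponential
    average, chosen for illustration only.
    """

    def __init__(self, lr=0.2):
        self.rel_mb = 0.5   # reliability estimate for the MB system
        self.rel_mf = 0.5   # reliability estimate for the MF system
        self.lr = lr

    def weight(self):
        # Normalized reliability -> weight placed on the MB system.
        return self.rel_mb / (self.rel_mb + self.rel_mf)

    def update(self, pe_mb, pe_mf):
        # Smaller |prediction error| -> higher estimated reliability.
        self.rel_mb += self.lr * ((1.0 - abs(pe_mb)) - self.rel_mb)
        self.rel_mf += self.lr * ((1.0 - abs(pe_mf)) - self.rel_mf)

    def combined_values(self, q_mb, q_mf):
        # Reliability-weighted mixture of the two systems' action values.
        w = self.weight()
        return [w * a + (1.0 - w) * b for a, b in zip(q_mb, q_mf)]
```

For example, if the MB system keeps producing small prediction errors while the MF system's errors stay large, the weight drifts toward the MB system, and choices come to follow its value estimates.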
Impact. This theory will help us understand the nature of human problem solving (computational neuroscience), explain why and how a breakdown of these functions leads to psychiatric disorders (computational psychiatry), and develop machine learning models with human-like traits (neuroscience-inspired AI). Ultimately, detailed insight into these issues will lead to a deeper understanding of human intelligence.
For more information, visit our website https://aibrain.kaist.ac.kr/
Selected publications.
- D. Kim, G. Y. Park, J. P. O’Doherty*, and S. W. Lee*, “Task complexity interacts with state-space uncertainty in the arbitration process between model-based and model-free reinforcement learning at both behavioral and neural levels,” Nature Communications, vol. 10, 5738, 2019.
- J. H. Lee, B. Seymour, J. Z. Leibo, S. J. Ah, S. W. Lee*, “Towards high performance, memory efficient, and fast reinforcement learning - lessons from decision neuroscience,” Science Robotics, vol. 4, no. 26, 2019.
- S. Weissengruber+, S. W. Lee+, J. P. O’Doherty, and C. C. Ruff, “Neurostimulation reveals context-dependent arbitration between model-based and model-free reinforcement learning,” Cerebral Cortex, 2019 (+: co-first authors).
- O. Choung, S. W. Lee*, and Y. Jeong*, “Exploring Feature Dimensions to Learn a New Policy in an Uninformed Reinforcement Learning Task,” Scientific Reports, vol. 7, no. 1, p. 17676, 2017.
- S. W. Lee*, T. Yi, J.-W. Jung, and Z. Bien, “Design of a Gait Phase Recognition System That Can Cope With EMG Electrode Location Variation,” IEEE Trans. Autom. Sci. Eng., vol. 14, no. 3, pp. 1429–1439, 2017.
- S. W. Lee*, J. P. O’Doherty, and S. Shimojo, “Neural Computations Mediating One-Shot Learning in the Human Brain,” PLoS Biol., vol. 13, no. 4, p. e1002137, Apr. 2015. (Synopsis “How one-shot learning unfolds in the brain” by Weaver, J.)
- S. W. Lee*, S. Shimojo, and J. P. O’Doherty, “Neural Computations Underlying Arbitration between Model-Based and Model-free Learning,” Neuron, vol. 81, no. 3, pp. 687–699, Feb. 2014. (Front cover; preview “Decisions about decisions” by Yoshida, W. and Seymour, B.)
- S. W. Lee*, O. Prenzel, and Z. Bien, “Applying human learning principles to user-centered IoT systems,” IEEE Comput., vol. 46, no. 2, pp. 46–52, Feb. 2013. (cover feature)
- S. W. Lee, Y. S. Kim, and Z. Bien, “A Nonsupervised Learning Framework of Human Behavior Patterns Based on Sequential Actions,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 4, pp. 479–492, Apr. 2010.
- S. W. Lee and Z. Bien, “Representation of a Fisher criterion function in a kernel feature space,” IEEE Trans. Neural Networks, vol. 21, no. 2, pp. 333–339, Feb. 2010.