Laboratory for Brain and Machine Intelligence
Sang Wan Lee, Assistant Professor


Our question: how do brains and machines learn to predict the future?

Nearly 50 years of evidence suggests that the brain has multiple distinct modes of learning and inference about the world, each of which can guide behavior in its own way (the first pair is contrasted in the toy sketch after the list below):

- model-free and model-based reinforcement learning
- incremental and one-shot inference
- planning and cognitive control
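
To make the first distinction concrete, here is a minimal sketch in Python; the toy MDP and all constants are illustrative assumptions, not taken from any experiment:

```python
import numpy as np

# Toy 2-state, 2-action MDP. T[s, a, s2] is a transition probability and
# R[s, a] an expected reward; both are illustrative, not from any study.
T = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[0.0, 1.0],
              [0.5, 0.0]])
gamma, alpha = 0.9, 0.1

# Model-based control: plan with a learned world model (value iteration).
# Flexible and quick to adapt when the world changes, but costly to compute.
Q_mb = np.zeros((2, 2))
for _ in range(200):
    Q_mb = R + gamma * T @ Q_mb.max(axis=1)

# Model-free control: cache values from raw experience with the
# temporal-difference rule. Cheap at decision time, but slow to re-learn.
rng = np.random.default_rng(0)
Q_mf = np.zeros((2, 2))
s = 0
for _ in range(20000):
    a = rng.integers(2)
    s2 = rng.choice(2, p=T[s, a])
    Q_mf[s, a] += alpha * (R[s, a] + gamma * Q_mf[s2].max() - Q_mf[s, a])
    s = s2

print(Q_mb)  # planned values
print(Q_mf)  # cached values; converge toward the same solution
```

The two controllers end up with similar values here, but they differ in what must be known up front and what must be recomputed when the world changes; that difference is what makes arbitration between them a meaningful problem.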

After decades of study, we are only beginning to understand how these learning systems interact to produce coherent behavior. Answering this question is crucial for understanding why the balance between these systems sometimes breaks down in learning disorders, addiction, and psychiatric disease, in which people often fail to suppress inappropriate behaviors even though those behaviors ultimately lead to highly adverse consequences.

A central goal of both artificial intelligence and cognitive neuroscience is to understand the human cognitive processes that are flexible enough to perform a wide range of tasks. In this regard, it is increasingly recognized that hierarchical control of learning systems may be both how the human brain actually works and the optimal design for an artificial intelligence operating under constraints on performance, time, and energy.

Our goal: to understand learning, inference, and cognitive control at the deepest level.

Recent studies have investigated the neural mechanisms of different types of learning by combining measurements of brain activity with computational learning models. However, little is known about how the brain determines which of these subsystems guides behavior at any given moment. We aim to develop a neurocomputational theory of how the brain, arguably at a high level of the cognitive hierarchy in the prefrontal cortex, allocates control over behavior to its multiple subsystems for learning and inference.
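
As a flavor of what such a theory looks like computationally, below is a deliberately simplified sketch of the reliability-based arbitration idea from our earlier work (Lee, Shimojo, and O'Doherty, Neuron 2014; [6] under Key Achievements below). The reliability estimate and constants here are toy assumptions, not the published model:

```python
import numpy as np

def arbitration_weight(spes, rpes, temperature=0.2):
    """Weight given to the model-based system, from prediction-error
    reliability. Toy assumption: each system's reliability is one minus
    the running mean of its absolute prediction errors (state prediction
    errors, SPE, index the model-based system; reward prediction errors,
    RPE, the model-free one), compared through a softmax. The published
    model uses a dynamic two-state transition scheme instead.
    """
    rel_mb = 1.0 - np.mean(np.abs(spes))
    rel_mf = 1.0 - np.mean(np.abs(rpes))
    w = np.exp(rel_mb / temperature)
    return w / (w + np.exp(rel_mf / temperature))

# Accurate world model (small SPEs), stale cached values (large RPEs):
# control should tilt toward the model-based system.
print(arbitration_weight(spes=[0.1, 0.05, 0.1], rpes=[0.6, 0.8, 0.7]))  # ~0.96
```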

 


We believe that understanding the brain opens the door to new scientific and technological advances.

 

This theory will enable us to
(1-Neuroscience) understand why breakdowns of arbitration control occur in psychiatric disorders,
(2-Bioengineering) develop neuromorphic algorithms for restoring stability to the prefrontal cortex,
(3-Artificial Intelligence) design brain-inspired AI systems that surpass human performance, and ultimately
(4-Human Intelligence) appreciate the remarkable ability of human intelligence to handle trade-offs between performance, energy, and time.

 


Our approach

 

Our research aims to understand how cognitive control is implemented in the human brain ("AI2Brain") and, in turn, to design brain-inspired artificial intelligence systems that can perform a wide range of tasks at a high level ("Brain2AI").

(1) AI2Brain: We study the neural computations by which the human prefrontal cortex allocates control over behavior to multiple types of learning and inference systems. This is achieved through a combination of computational learning theory, control theory, and experimental techniques including model-based functional magnetic resonance imaging (fMRI), electroencephalography (EEG), transcranial magnetic stimulation (TMS), and transcranial direct current stimulation (tDCS). Topics of interest include (a sketch of the model-based fMRI workflow follows the list):
- Neural mechanisms of model-based and model-free reinforcement learning
- Prefrontal-basal ganglia meta-control circuit
- Prefrontal-hippocampal interactions for one-shot inference
- Computational psychiatry
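
One common operationalization of "model-based fMRI" is: fit a computational learning model to each subject's behavior, extract a latent internal signal such as the trial-by-trial reward prediction error, convolve it with a hemodynamic response function, and use the result as a regressor in a GLM on the BOLD signal. A minimal sketch of the regressor-construction step, assuming illustrative onset times and prediction errors and the common SPM-style double-gamma HRF parameters:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma hemodynamic response function (the common
    SPM-style shape parameters 6 and 16 with a 1/6 undershoot ratio are
    assumed here)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

# Suppose a learning model fitted to behavior yields one reward prediction
# error (RPE) per trial, at known onset times. All values are illustrative.
tr, n_scans, dt = 2.0, 100, 0.1                      # scan time (s), bin width
onsets = np.array([10.0, 30.0, 50.0, 70.0, 90.0])    # trial onsets (s)
rpes = np.array([0.8, -0.3, 0.5, 0.1, -0.6])         # model-derived signal

# Parametric regressor: RPE-scaled impulses convolved with the HRF.
n_bins = int(round(n_scans * tr / dt))
stick = np.zeros(n_bins)
stick[np.round(onsets / dt).astype(int)] = rpes
regressor = np.convolve(stick, hrf(np.arange(0.0, 32.0, dt)))[:n_bins]
regressor = regressor[::int(round(tr / dt))]         # sample at scan times

# This column enters a GLM against the BOLD signal; voxels it explains are
# candidates for encoding the model's prediction-error computation.
print(regressor.shape)  # (100,)
```
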
(2) Brain2AI: We focus on demonstrating the utility of neuroscience for designing better AI, based on the theory that one key function of the human prefrontal cortex is to allocate behavioral control to the brain's subsystems, casting it as a "prefrontal meta-controller". Detailed insight into these issues not only permits advances in AI but also helps us understand the nature of human intelligence at a deeper level. Topics include (a schematic agent built around this idea appears after the list):
- Meta-control of reinforcement learning
- Model-based reinforcement learning
- Introspective learning
- AI experimenter
- AI-human co-evolution engine: AI to boost human intelligence
- Meta BCI: meta-control theory-based brain-computer interface
(We intentionally used the above keywords to confuse readers.)
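
To illustrate how such a "prefrontal meta-controller" might be realized in an artificial agent, here is a schematic sketch. The subsystem interface below is hypothetical, and the mixing rule is the simple weighted scheme sketched earlier, not a specific published architecture:

```python
import numpy as np

class MetaController:
    """Hypothetical wrapper: arbitrates between a model-free and a
    model-based RL subsystem by mixing their action-value estimates."""

    def __init__(self, mf_system, mb_system, seed=0):
        # Each subsystem is assumed to expose q_values(state) -> np.ndarray;
        # this interface is an assumption for illustration, not a real API.
        self.mf, self.mb = mf_system, mb_system
        self.rng = np.random.default_rng(seed)

    def act(self, state, w_mb, beta=3.0):
        # Mix the two value estimates by the arbitration weight w_mb,
        # then choose an action by softmax with inverse temperature beta.
        q = w_mb * self.mb.q_values(state) + (1.0 - w_mb) * self.mf.q_values(state)
        p = np.exp(beta * (q - q.max()))
        p /= p.sum()
        return self.rng.choice(len(q), p=p)
```

An agent built this way can, in principle, spend planning effort only when its world model is trustworthy, which is exactly the trade-off between performance, energy, and time noted above.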

 

Key Achievements
  • [1] Faculty awards: Google Faculty Research Award in Computational Neuroscience (2017), KIIS Young Investigator Award (2016), ICROS Young Investigator Award (2016)
  • [2] Student awards:
    - Best Paper Award, The 18th International Symposium on Advanced Intelligent Systems, 2017
    - Outstanding Paper Award, KIIS Fall Conference 2017
    - Outstanding Paper Award, KIIS Spring Conference 2017
    - Outstanding Poster Award, Korea Society of Human Brain Mapping Fall Conference 2017
    - Third Place Winner, AI World Cup 2017 (reinforcement learning soccer)
    - Outstanding Paper Award, KIIS Fall Conference 2016
  • [3] O. Choung, S. W. Lee, and Y. Jeong, “Exploring Feature Dimensions to Learn a New Policy in an Uninformed Reinforcement Learning Task,” Sci. Rep., vol. 7, no. 1, p. 17676, 2017.
  • [4] S. W. Lee, T. Yi, J.-W. Jung, and Z. Bien, “Design of a Gait Phase Recognition System That Can Cope With EMG Electrode Location Variation,” IEEE Trans. Autom. Sci. Eng., vol. 14, no. 3, pp. 1429–1439, 2017.
  • [5] S. W. Lee, J. P. O’Doherty, and S. Shimojo, “Neural Computations Mediating One-Shot Learning in the Human Brain,” PLoS Biol., vol. 13, no. 4, p. e1002137, Apr. 2015. (Synopsis “How one-shot learning unfolds in the brain” by Weaver, J.)
  • [6] S. W. Lee, S. Shimojo, and J. P. O’Doherty, “Neural Computations Underlying Arbitration between Model-Based and Model-Free Learning,” Neuron, vol. 81, no. 3, pp. 687–699, Feb. 2014. (Front cover; preview “Decisions about decisions” by Yoshida, W. and Seymour, B.)
  • [7] S. W. Lee, O. Prenzel, and Z. Bien, “Applying human learning principles to user-centered IoT systems,” IEEE Comput., vol. 46, no. 2, pp. 46–52, Feb. 2013. (cover feature)
  • [8] S. W. Lee, Y. S. Kim, and Z. Bien, “A Nonsupervised Learning Framework of Human Behavior Patterns Based on Sequential Actions,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 4, pp. 479–492, Apr. 2010.
  • [9] S. W. Lee and Z. Bien, “Representation of a Fisher criterion function in a kernel feature space,” IEEE Trans. Neural Networks, vol. 21, no. 2, pp. 333–339, Feb. 2010.

Achievements This Year

  • Journal publications (corresponding author)
    [1] S. J. An and S. W. Lee, “A Study on the Exploration-Exploitation Dilemma Using an Uncertainty-Driven State Space Learning Algorithm,” J. Korean Inst. Intell. Syst., vol. 27, no. 4, 2017. (written in Korean)
    [2] S. W. Lee, T. Yi, J.-W. Jung, and Z. Bien, “Design of a Gait Phase Recognition System That Can Cope With EMG Electrode Location Variation,” IEEE Trans. Autom. Sci. Eng., vol. 14, no. 3, pp. 1429–1439, 2017.
  • Conference proceedings/posters
    [1] J. Park, J. Lee, and S. W. Lee, “ALPAHCH: A New Approach for LSTM Polynomial Melody Composing based on Finite Chord Progression,” in Proceeding of KIIS Spring Conference, 2017. (written in Korean)
    [2] S. J. An, J. Y. Kim, and S. W. Lee, “Uncertainty-driven state-space learning to resolve the exploration-exploitation dilemma,” in Korean Society of Cognitive Science, 2017. (written in Korean)
    [3] H. Joo, J. Kim, and S. W. Lee, “Model-based reinforcement learning using probabilistic simulation,” in Proceeding of KIIS Fall Conference, 2017, vol. 27. (written in Korean) (Outstanding Paper Award)
    [4] G. Y. Park, D. Kim, and S. W. Lee, “Meta reinforcement learning incorporating task complexity,” in Proceeding of KIIS Fall Conference, 2017, vol. 27. (written in Korean)
    [5] D. Kim and S. W. Lee, “Dirichlet process-based arbitration control of reinforcement learning,” in The 5th International Conference on Robot Intelligence Technology and Applications (ICRITA 2017), 2017.
    [6] S. J. An, J. Y. Kim, and S. W. Lee, “Metacognitive Reinforcement Learning,” in The 18th International Symposium on Advanced Intelligent Systems, 2017. (Best Paper Award)
    [7] S. W. Lee and J. P. O’Doherty, “The role of task complexity during arbitration between model-based and model-free reinforcement learning,” in The Multi-disciplinary Conference on Reinforcement Learning and Decision Making, 2017.
    [8] J.-E. Lim, D. Kim, and S. W. Lee, “EEG synchrony patterns of autism spectrum disorder,” in Korea Society of Human Brain Mapping, 2017. (Outstanding Poster Award)
    [9] S. J. An and S. W. Lee, “On the Exploration-Exploitation Dilemma Using an Uncertainty-Based State Space Learning Algorithm,” in Proceeding of KIIS Spring Conference, 2017. (written in Korean) (Outstanding Paper Award)