
The Center for Low Carbon Energy & Subsurface Engineering is conducting numerous experiments and pursuing a wide range of research opportunities. Here is an overview of some of our most recent ongoing work.

Competition-dependent learning

How do stored memories change as a function of experience? What causes the neural patterns underlying memories to become stronger or weaker, more or less similar?

To address these questions, we explore the role of competitive dynamics in shaping learning. We hypothesize that when memories compete, high levels of memory activation lead to memory strengthening and integration of the competing neural patterns, whereas moderate levels of memory activation cause weakening of the memories and differentiation of the competing neural patterns.
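As a rough sketch of this hypothesized relationship (not the lab's actual model code), the Python snippet below implements a simple nonmonotonic plasticity rule in which the sign of the weight change depends on how strongly a competing memory is activated; the activation thresholds and learning rate are illustrative assumptions.

```python
import numpy as np

def nonmonotonic_weight_change(activation, moderate_lo=0.3, moderate_hi=0.7, rate=0.1):
    """Illustrative U-shaped plasticity rule (thresholds are arbitrary placeholders).

    Low activation      -> no change (the memory was barely in play)
    Moderate activation -> weakening / differentiation (negative change)
    High activation     -> strengthening / integration (positive change)
    """
    activation = np.asarray(activation, dtype=float)
    dw = np.zeros_like(activation)
    moderate = (activation >= moderate_lo) & (activation < moderate_hi)
    high = activation >= moderate_hi
    dw[moderate] = -rate * activation[moderate]
    dw[high] = rate * activation[high]
    return dw

# Weight updates for three competing memories at low, moderate, and high activation
print(nonmonotonic_weight_change([0.1, 0.5, 0.9]))   # e.g. [ 0.   -0.05  0.09]
```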


To derive specific predictions, we have built neural network models that instantiate this learning principle, and we use them to simulate specific learning paradigms. In collaboration with the Turk-Browne Lab, we test these predictions using machine learning methods, applied to fMRI and EEG, to track the activation of specific memories as they compete, and to measure how neural patterns change as a result of competition. We have also started to develop real-time, closed-loop neurofeedback methods that foster moderate levels of memory activation, with the goal of actively driving memory weakening and representational change.
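The closed-loop neurofeedback idea can be pictured with a toy loop in Python: decode the current activation of a target memory from each incoming brain volume, then return a feedback value that rewards staying in a moderate activation range. The decoder, feedback mapping, and "moderate zone" bounds below are hypothetical placeholders rather than the lab's actual real-time pipeline.

```python
import numpy as np

def feedback_signal(decoded_activation, target_lo=0.4, target_hi=0.6):
    """Map decoded memory activation to a feedback value in [0, 1], peaking
    inside an (assumed) moderate-activation zone."""
    center = 0.5 * (target_lo + target_hi)
    width = 0.5 * (target_hi - target_lo)
    return float(np.clip(1.0 - abs(decoded_activation - center) / (3 * width), 0.0, 1.0))

def run_closed_loop(decode_fn, get_next_scan, n_trs=5):
    """Toy closed loop: decode each incoming timepoint, then compute feedback."""
    history = []
    for _ in range(n_trs):
        scan = get_next_scan()            # one (simulated) fMRI volume
        activation = decode_fn(scan)      # classifier evidence for the target memory
        history.append((activation, feedback_signal(activation)))
    return history

# Toy usage with random "scans" and a fake decoder
rng = np.random.default_rng(0)
fake_decode = lambda scan: float(1 / (1 + np.exp(-scan.mean())))   # squash mean signal into [0, 1]
fake_scan = lambda: rng.normal(size=100)
for act, fb in run_closed_loop(fake_decode, fake_scan):
    print(f"activation={act:.2f}  feedback={fb:.2f}")
```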

Sleep and memory

How does sleep contribute to learning?

A prevalent theory is that the hippocampus replays new memories to the cortex, thereby promoting consolidation of these memories. However, this cannot be the full story: just as too much learning about an event during wake can damage related memories, unfettered replay of memories during sleep can also inflict collateral damage on other, related memories. Thus, the computational challenge is not simply how to “hammer in” new memories; rather, replay of new memories needs to be balanced during sleep with learning about other, related memories, in a manner that allows all of these (possibly conflicting) memories to be suitably reconciled.
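The trade-off can be made concrete with a generic toy model (not the lab's own): if a simple linear associator that already knows an old memory is trained only on a new, overlapping memory, the old memory degrades, whereas interleaving old and new during "replay" protects both. The vector sizes, learning rate, and delta-rule associator below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 20
old_input = rng.normal(size=dim)
new_input = old_input + 0.3 * rng.normal(size=dim)   # related memory: overlapping cue
old_target = rng.normal(size=dim)
new_target = rng.normal(size=dim)

def train(pairs, W=None, epochs=200, lr=0.01):
    """Delta-rule training of a linear associator W so that W @ x approximates y."""
    W = np.zeros((dim, dim)) if W is None else W.copy()
    for _ in range(epochs):
        for x, y in pairs:
            W += lr * np.outer(y - W @ x, x)
    return W

def old_memory_error(W):
    return float(np.linalg.norm(W @ old_input - old_target))

W_wake = train([(old_input, old_target)])                                  # learned during wake
W_new_only = train([(new_input, new_target)], W=W_wake)                    # replay only the new memory
W_interleaved = train([(old_input, old_target), (new_input, new_target)], W=W_wake)

print("old-memory error, new-only replay :", round(old_memory_error(W_new_only), 3))
print("old-memory error, interleaved     :", round(old_memory_error(W_interleaved), 3))
```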


How is this reconciliation achieved? On the theoretical side, we are working with Anna Schapiro to develop a model of how different sleep stages work together to create a “playlist” for sleep-learning that contains both new, to-be-consolidated memories and other, related memories, where each memory's strength in the playlist reflects its importance. We are also exploring different ways of operationalizing memory “importance.” On the experimental side, we are collaborating with the Cognitive Neuroscience Laboratory, led by Ken Paller, to use targeted memory reactivation methods in which we link sounds to multiple memories during wake and then play these sounds during sleep to evoke memory competition. We are using EEG decoding to track competitive dynamics during sleep, and fMRI (pre- and post-sleep) to explore how competition during sleep transforms stored memories.
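One minimal way to picture the "playlist" idea: sample memories for replay with probability proportional to an importance score, so that the new memory and related older memories all get airtime. The scores and sampling scheme below are illustrative assumptions, not the model being developed in the collaboration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical memory store with assumed importance scores
memories = {
    "new_episode":       3.0,   # recently encoded, to be consolidated
    "related_episode_A": 1.5,   # older memories that overlap with the new one
    "related_episode_B": 1.0,
    "unrelated_episode": 0.2,
}

def build_playlist(memories, n_replays=10):
    """Sample replay events with probability proportional to importance."""
    names = list(memories)
    weights = np.array([memories[m] for m in names], dtype=float)
    return list(rng.choice(names, size=n_replays, p=weights / weights.sum()))

print(build_playlist(memories))
```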

Event processing, context, and memory

How does memory work “in the wild,” outside of artificial lab studies? How do we segment and chunk continuous streams of information into meaningful events, and how do we store these events in a manner that allows them to be retrieved in the future to predict similar scenarios?

The vast majority of work on the neural basis of memory deals with discrete, predefined events (e.g., singly presented static images). By contrast, there has been virtually no work looking at memory for dynamically changing, real-life stimuli that have not already been broken up into pre-digested chunks.

For example, consider the well-accepted idea that the hippocampus stores “snapshots” of cortical activity. With standard memory experiments that present discrete items, it is abundantly clear when the hippocampus should take a snapshot and what information should be in the snapshot. However, it is still not clear how a continuous stream of information in real-life contexts (e.g., while watching a movie) is segmented into discrete events, how and when the hippocampus takes snapshots of these events, and how and when these snapshots are retrieved.
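To make the open question concrete, one family of proposals has the hippocampus take a snapshot when its predictions break down, i.e., at event boundaries. The sketch below illustrates that generic idea on a toy feature stream (surprise-triggered storage); the running-average predictor and the threshold are assumptions for illustration, not a claim about how segmentation actually works.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "continuous experience": three stable scenes with abrupt transitions
scene_means = [np.zeros(10), np.full(10, 3.0), np.full(10, -2.0)]
stream = np.vstack([m + 0.3 * rng.normal(size=(20, 10)) for m in scene_means])

def snapshot_stream(stream, threshold=3.0):
    """Store a "snapshot" whenever the current input deviates strongly from a
    running average of recent inputs (a crude stand-in for prediction error)."""
    snapshots, running = [], stream[0].copy()
    for t, x in enumerate(stream):
        surprise = np.linalg.norm(x - running)
        if surprise > threshold:
            snapshots.append((t, x.copy()))     # candidate event boundary -> take snapshot
            running = x.copy()                  # reset expectations for the new event
        else:
            running = 0.9 * running + 0.1 * x   # slowly update within-event expectation
    return snapshots

print([t for t, _ in snapshot_stream(stream)])   # timepoints where snapshots were taken
```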

We are collaborating with the Niv Lab and the Computational Cognitive Neuroscience Lab led by Sam Gershman to develop new models that address how the brain infers the latent (hidden) causes of events, how these inferences about latent causes form a context that organizes stored episodic memories, and (reciprocally) how episodic memory is used to support event understanding. On the experimental side, we are collaborating with the Hasson Lab to develop new paradigms and analysis tools that allow us to decode the time series of event representations in the neocortex (and the role of the hippocampus in storing/retrieving these representations) as people encode and remember realistic narratives.
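A common way to formalize latent-cause inference is with a Chinese-restaurant-process prior: each new observation either joins an existing cause (in proportion to how many observations that cause already explains, weighted by how well it fits) or spawns a new cause. The greedy sketch below illustrates that logic on toy data; the Gaussian likelihood, concentration parameter, and local-MAP assignment are simplifying assumptions, not the specific models being developed with these collaborators.

```python
import numpy as np

def gauss_density(x, mean, var):
    """Isotropic Gaussian density in len(x) dimensions."""
    d = len(x)
    return np.exp(-np.sum((x - mean) ** 2) / (2 * var)) / (2 * np.pi * var) ** (d / 2)

def assign_latent_causes(observations, alpha=1.0, obs_var=1.0, prior_var=100.0):
    """Greedy local-MAP latent-cause assignment: CRP prior over causes,
    Gaussian likelihood around each cause's running mean (toy illustration)."""
    causes, assignments = [], []
    for x in observations:
        x = np.asarray(x, dtype=float)
        # Existing causes: score = (cluster size) * likelihood under that cause
        scores = [c["n"] * gauss_density(x, c["mean"], obs_var) for c in causes]
        # Brand-new cause: score = alpha * broad prior-predictive density
        scores.append(alpha * gauss_density(x, np.zeros_like(x), prior_var))
        k = int(np.argmax(scores))
        if k == len(causes):
            causes.append({"n": 1, "mean": x.copy()})
        else:
            causes[k]["mean"] = (causes[k]["mean"] * causes[k]["n"] + x) / (causes[k]["n"] + 1)
            causes[k]["n"] += 1
        assignments.append(k)
    return assignments

# Observations drawn from two distinct "events"
obs = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 4.9]]
print(assign_latent_causes(obs))   # -> [0, 0, 1, 1]
```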

Applying machine learning to neuroimaging

A fundamental challenge for cognitive neuroscience is how to extract information about a person’s cognitive state from brain imaging data. For many years, methods used in cognitive neuroscience lagged behind the state of the art in computer science, making it difficult to use brain imaging data to test fine-grained theories of neural information processing.

To remedy this gap, our lab was among the first to apply multivariate pattern analysis (MVPA) machine-learning methods to study memory. These MVPA methods make it possible to covertly track which memories are being retrieved on a second-by-second basis; this neural readout, in turn, can be used to test theories of which memories will be retrieved, which brain regions are involved in memory retrieval, and how these retrieved memories shape behavior.
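The core of an MVPA decoding analysis can be shown in a few lines with scikit-learn: train a classifier on labeled brain patterns, then read out graded evidence for each memory category at every held-out timepoint. The synthetic data and logistic-regression classifier below are placeholders for whatever features and decoder a real study would use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Synthetic "fMRI patterns": 200 timepoints x 50 voxels, two memory categories
n_trs, n_voxels = 200, 50
labels = rng.integers(0, 2, size=n_trs)             # toy labels: 0 = memory A, 1 = memory B
signal = np.where(labels[:, None] == 1, 0.5, -0.5)  # weak category signal in every voxel
patterns = signal + rng.normal(size=(n_trs, n_voxels))

# Train on the first half, then decode the second half timepoint by timepoint
clf = LogisticRegression(max_iter=1000).fit(patterns[:100], labels[:100])
evidence = clf.predict_proba(patterns[100:])[:, 1]   # P(memory B) at each held-out timepoint

print("held-out decoding accuracy:", np.mean((evidence > 0.5) == labels[100:]))
```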

More recently, our lab has been participating in a new partnership between Princeton and Intel Labs, aimed at furthering the state of the art in fMRI cognitive-state decoding. As part of this partnership, we are developing several new methods, including real-time fMRI decoding methods, Bayesian methods for dimensionality reduction of neuroimaging data, methods for jointly inferring cognitive states from neural and behavioral data, new methods for studying brain connectivity, and new methods for improving the alignment of fMRI data across participants.
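To give a flavor of the alignment problem: different participants' voxels respond differently even to the same stimulus, so one strategy is to learn a per-participant orthogonal mapping into a common space, which is the spirit of shared-response and hyperalignment methods. The orthogonal-Procrustes sketch below is a deliberately simplified stand-in for such methods, not the techniques developed in the partnership.

```python
import numpy as np

rng = np.random.default_rng(5)

# A shared stimulus-driven time course (e.g., responses to a common movie): 100 timepoints x 10 features
shared = rng.normal(size=(100, 10))

def random_basis(n_voxels, n_features):
    """Random orthonormal mapping from shared features into one participant's voxel space."""
    q, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_features)))
    return q                                  # (n_voxels, n_features), orthonormal columns

# Three simulated participants: same shared signal, different voxel bases, plus noise
subjects = [shared @ random_basis(30, 10).T + 0.5 * rng.normal(size=(100, 30))
            for _ in range(3)]

def procrustes_align(source, target):
    """Orthogonal Procrustes: rotation R minimizing ||source @ R - target||."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

# Align participants 1 and 2 into participant 0's voxel space
aligned = [subjects[0]] + [s @ procrustes_align(s, subjects[0]) for s in subjects[1:]]

def mean_pattern_corr(a, b):
    """Average spatial correlation between two participants' timepoint-by-timepoint patterns."""
    return float(np.mean([np.corrcoef(a[t], b[t])[0, 1] for t in range(len(a))]))

print("pattern correlation before alignment:", round(mean_pattern_corr(subjects[1], subjects[2]), 2))
print("pattern correlation after alignment: ", round(mean_pattern_corr(aligned[1], aligned[2]), 2))
```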

Other work

The lab’s interests encompass a wide range of additional topics at the intersection of computational modeling and memory. For example, in one line of work, we are collaborating with the Neuroscience of Cognitive Control Lab led by Jonathan Cohen to determine how people strategically use prefrontally-mediated working memory and hippocampally-mediated episodic memory in the service of prospective memory: remembering our future goals. In another line of work, we are collaborating with the Daw and Niv labs on studies exploring reciprocal interactions between memory and decision-making.