How internal structure is learned, inferred, and made useful.
Representation and Inference is the topic program focused on embeddings, latent-variable models, self-supervision, and approximate inference: the technical logic of internal model spaces. It sits inside the Intelligence pillar as the program most directly concerned with learned structure and uncertainty-aware reasoning.
Contrastive objectives, self-supervision, multimodal embeddings, and the geometry of learned spaces.
Variational methods, latent variables, amortization, and approximate Bayesian reasoning.
Generalization across tasks, modalities, and settings through reusable internal structure.
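The contrastive theme above can be made concrete with a minimal InfoNCE-style objective: each sample is pulled toward its augmented positive and pushed away from the rest of the batch. This is an illustrative sketch; the batch size, embedding dimension, and temperature are arbitrary assumptions, not values from any program's work.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Minimal InfoNCE: each anchor's positive is the same-index row;
    every other row in the batch serves as a negative."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # loss is the mean negative log-probability of the true (diagonal) pairs
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_matched = info_nce(x, x + 0.01 * rng.normal(size=x.shape))
loss_random = info_nce(x, rng.normal(size=(8, 16)))
# nearly identical pairs should incur far lower loss than random pairings
```

The geometry point follows directly: minimizing this loss shapes the embedding space so that semantically paired points cluster while unrelated points spread apart.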
Self-Supervised Representation Learning for Human Physiological Data
Predictive, contrastive, masked, and multimodal objectives for physiological data.
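A masked objective of the kind listed above can be sketched in a few lines: hide random contiguous spans of a biosignal and score reconstruction only on the hidden positions. The span length, mask fraction, and the toy sine "signal" are assumptions for illustration; a real pipeline would use recorded physiological data and a trained encoder in place of the naive baseline.

```python
import numpy as np

def make_masked_batch(signal, mask_frac=0.15, span=5, rng=None):
    """Zero out random contiguous spans of a 1-D signal and return the
    corrupted input plus a boolean mask of the hidden positions."""
    rng = rng or np.random.default_rng()
    mask = np.zeros(len(signal), dtype=bool)
    n_spans = max(1, int(mask_frac * len(signal) / span))
    for start in rng.integers(0, len(signal) - span, size=n_spans):
        mask[start:start + span] = True
    corrupted = np.where(mask, 0.0, signal)
    return corrupted, mask

def masked_mse(pred, target, mask):
    """Reconstruction loss computed only on the masked positions."""
    return float(np.mean((pred[mask] - target[mask]) ** 2))

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 400)
toy_signal = np.sin(t) + 0.05 * rng.normal(size=t.shape)  # stand-in biosignal
corrupted, mask = make_masked_batch(toy_signal, rng=rng)
# a trained encoder would predict the hidden spans from context; here a
# "predict the global mean" baseline stands in for the model
baseline = np.full_like(toy_signal, toy_signal.mean())
loss = masked_mse(baseline, toy_signal, mask)
```

Scoring only masked positions is what makes the objective self-supervised: the model must infer hidden structure from surrounding context rather than copy its input.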
Multimodal Biosignal Foundation Models
Cross-modal pretraining, missing-modality robustness, and latent-state learning for physiological systems.
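One common recipe for the missing-modality robustness mentioned above is modality dropout during pretraining, paired with a fusion step that only aggregates the channels actually present. The sketch below assumes three hypothetical modality embeddings (e.g., ECG, PPG, respiration) and simple mean fusion; both choices are illustrative, not a description of any particular model.

```python
import numpy as np

def fuse(embeddings, present):
    """Mean-pool only the modality embeddings that are present.
    embeddings: (n_modalities, dim); present: boolean mask per modality."""
    present = np.asarray(present, dtype=bool)
    if not present.any():
        raise ValueError("at least one modality must be present")
    return embeddings[present].mean(axis=0)

def modality_dropout(present, drop_prob=0.3, rng=None):
    """Randomly hide available modalities during pretraining so the
    fusion step learns to cope with missing channels at test time."""
    rng = rng or np.random.default_rng()
    kept = present & (rng.random(len(present)) > drop_prob)
    # never drop everything: fall back to the original availability mask
    return kept if kept.any() else present

rng = np.random.default_rng(2)
emb = rng.normal(size=(3, 8))  # hypothetical ECG, PPG, respiration embeddings
all_present = np.array([True, True, True])
z_full = fuse(emb, all_present)
z_partial = fuse(emb, np.array([True, False, True]))  # second channel missing
z_dropout = fuse(emb, modality_dropout(all_present, rng=rng))
```

Because the fused representation has the same shape regardless of which channels survive, downstream heads need no changes when a sensor is absent.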
Multimodal Diffusion in Latent Space
Shared versus factorized latent spaces, cross-modal conditioning, and diffusion-based generative modeling.
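The diffusion component above rests on a closed-form forward process: a latent code is blended with Gaussian noise according to a cumulative schedule, and a model is trained to reverse that corruption (optionally conditioned on another modality's latent). The sketch below shows only the forward noising step in a latent space, with a cosine-style schedule; the dimensions, step count, and schedule are illustrative assumptions.

```python
import numpy as np

def cosine_alpha_bar(t, T):
    """Cumulative signal-retention coefficient: ~1 at t=0, ~0 at t=T."""
    return np.cos(0.5 * np.pi * t / T) ** 2

def diffuse(z0, t, T, rng):
    """Closed-form forward step: z_t = sqrt(ab)*z0 + sqrt(1-ab)*eps."""
    ab = cosine_alpha_bar(t, T)
    eps = rng.normal(size=z0.shape)
    return np.sqrt(ab) * z0 + np.sqrt(1.0 - ab) * eps, eps

rng = np.random.default_rng(3)
z0 = rng.normal(size=(4,))  # a latent code from a shared multimodal space
z_early, _ = diffuse(z0, t=1, T=1000, rng=rng)    # barely perturbed
z_late, _ = diffuse(z0, t=999, T=1000, rng=rng)   # close to pure noise
```

Whether `z0` lives in one shared space or in per-modality factorized spaces is exactly the design choice the blurb names: a shared space lets one diffusion model generate all modalities jointly, while factorized spaces keep modality-specific structure but require cross-modal conditioning to tie them together.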