Person:
Ballester, Rafael

First Name
Rafael
Last Name
Ballester
Affiliation
IE University
School
IE School of Science & Technology
Department
Applied Mathematics

Search Results

  • Publication
    Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation
    (Cornell University, 2021-11-15) Ballester, Rafael; Usvyatsov, Mikhail; Makarova, Anastasia; Rakhuba, Maxim; Krause, Andreas; Schindler, Konrad; https://ror.org/02jjdwm75
    We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries. Our method combines a neural network encoder with a tensor train decomposition to learn a low-rank latent encoding, coupled with cross-approximation (CA) to learn the representation through a subset of the original samples. CA is an adaptive sampling algorithm that is native to tensor decompositions and avoids working with the full high-resolution data explicitly. Instead, it actively selects local representative samples that we fetch out-of-core and on-demand. The required number of samples grows only logarithmically with the size of the input. Our implicit representation of the tensor in the network enables processing large grids that would otherwise be intractable in their uncompressed form. The proposed approach is particularly useful for large-scale multidimensional grid data (e.g., 3D tomography), and for tasks that require context over a large receptive field (e.g., predicting the medical condition of entire organs).
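    The key idea above is that the tensor train (TT) format represents a huge grid implicitly through a chain of small cores, so individual entries can be read without ever materializing the full tensor. The following NumPy sketch illustrates that point; the shapes, ranks, and function names are illustrative assumptions, not taken from the paper:

    ```python
    import numpy as np

    # Illustrative shapes/ranks (assumptions, not the paper's setup).
    shape = [50, 50, 50, 50]   # 4-D grid: 6,250,000 entries if stored densely
    rank = 4                   # uniform TT-rank
    ranks = [1] + [rank] * (len(shape) - 1) + [1]

    rng = np.random.default_rng(0)
    # Core k has shape (r_{k-1}, n_k, r_k); the boundary ranks are 1.
    cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1]))
             for k in range(len(shape))]

    def tt_entry(cores, idx):
        """Evaluate one tensor entry as a product of core slices,
        never materializing the full tensor."""
        v = np.ones((1, 1))
        for core, i in zip(cores, idx):
            v = v @ core[:, i, :]
        return float(v[0, 0])

    n_params = sum(c.size for c in cores)   # 2,000 core parameters
    full_size = int(np.prod(shape))         # vs. 6,250,000 dense entries
    ```

    Storage in the TT format grows as O(d·n·r²) in the number of dimensions d, rather than exponentially as n^d, which is what makes out-of-core, on-demand sampling of large grids feasible.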
  • Publication
    Tntorch: Tensor Network Learning with PyTorch
    (JMLR, 2022) Ballester, Rafael; Usvyatsov, Mikhail; Schindler, Konrad; https://ror.org/02jjdwm75
    We present tntorch, a tensor learning framework that supports multiple decompositions (including Candecomp/Parafac, Tucker, and Tensor Train) under a unified interface. With our library, the user can learn and handle low-rank tensors with automatic differentiation, seamless GPU support, and the convenience of PyTorch’s API. Besides decomposition algorithms, tntorch implements differentiable tensor algebra, rank truncation, cross-approximation, batch processing, comprehensive tensor arithmetic, and more.
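    As a rough illustration of the kind of decomposition such a library handles, here is a from-scratch NumPy sketch of the standard TT-SVD algorithm (this is not tntorch's actual API; the function names and the toy data are my own):

    ```python
    import numpy as np

    def tt_svd(x, max_rank):
        """Decompose a full tensor into tensor-train cores via sequential SVDs."""
        shape = x.shape
        d = len(shape)
        cores = []
        r = 1
        rem = x.reshape(shape[0], -1)
        for k in range(d - 1):
            U, S, Vt = np.linalg.svd(rem, full_matrices=False)
            rk = min(max_rank, S.size)
            cores.append(U[:, :rk].reshape(r, shape[k], rk))
            # Fold the singular values into the remainder and move to the next mode.
            rem = (S[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
            r = rk
        cores.append(rem.reshape(r, shape[-1], 1))
        return cores

    def tt_full(cores):
        """Contract the TT cores back into the full (dense) tensor."""
        res = cores[0]
        for c in cores[1:]:
            res = np.tensordot(res, c, axes=([res.ndim - 1], [0]))
        return res.reshape([c.shape[1] for c in cores])

    # Demo on a small rank-1 tensor (illustrative data only):
    x = np.einsum('i,j,k->ijk', np.arange(1.0, 4.0),
                  np.arange(1.0, 5.0), np.arange(1.0, 6.0))
    cores = tt_svd(x, max_rank=2)
    approx = tt_full(cores)
    ```

    Because the demo tensor has rank 1, a rank cap of 2 recovers it exactly; for real data, `max_rank` trades reconstruction error against storage.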