Person:
Ballester, Rafael

First Name
Rafael
Last Name
Ballester
Affiliation
IE University
School
IE School of Science & Technology
Department
Applied Mathematics
Publications (3)
  • Publication
    Tensor Approximation for Multidimensional and Multivariate Data
    (Springer Science and Business Media Deutschland GmbH, 2021) Pajarola, Renato; Suter, Susanne; Yang, Haiyang; Ballester, Rafael; Seventh Framework Programme; Swiss National Science Foundation; https://ror.org/02jjdwm75
    Tensor decomposition methods and multilinear algebra are powerful tools to cope with challenges around multidimensional and multivariate data in computer graphics, image processing and data visualization, in particular with respect to compact representation and processing of increasingly large-scale data sets. Initially proposed as an extension of the concept of matrix rank to three and more dimensions, tensor decomposition methods have found applications in a remarkably wide range of disciplines. We briefly review the main concepts of tensor decompositions and their application to multidimensional visual data. Furthermore, we include a first outlook on porting these techniques to multivariate data such as vector and tensor fields.
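The tensor-train decomposition surveyed in this work can be illustrated with a minimal NumPy sketch of the classical TT-SVD construction (a generic textbook algorithm, not code from the publication; the names `tt_svd`, `tt_reconstruct`, and `max_rank` are illustrative):

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a dense tensor into tensor-train (TT) cores by a sweep of
    truncated SVDs over successive unfoldings (the classical TT-SVD scheme)."""
    shape = tensor.shape
    cores = []
    rank = 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_next = min(max_rank, len(s))
        # Left factor becomes the k-th TT core of shape (rank, n_k, r_next).
        cores.append(u[:, :r_next].reshape(rank, shape[k], r_next))
        # Carry the remainder forward and refold for the next mode.
        mat = (np.diag(s[:r_next]) @ vt[:r_next]).reshape(r_next * shape[k + 1], -1)
        rank = r_next
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

For a tensor whose multilinear rank does not exceed `max_rank`, the contraction of the cores recovers the input exactly; otherwise TT-SVD yields a quasi-optimal low-rank approximation.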
  • Publication
    Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation
    (Cornell University, 2021-11-15) Ballester, Rafael; Usvyatsov, Mikhail; Makarova, Anastasia; Rakhuba, Maxim; Krause, Andreas; Schindler, Konrad; https://ror.org/02jjdwm75
    We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at a fraction of their entries only. Our method combines a neural network encoder with a tensor train decomposition to learn a low-rank latent encoding, coupled with cross-approximation (CA) to learn the representation through a subset of the original samples. CA is an adaptive sampling algorithm that is native to tensor decompositions and avoids working with the full high-resolution data explicitly. Instead, it actively selects local representative samples that we fetch out-of-core and on-demand. The required number of samples grows only logarithmically with the size of the input. Our implicit representation of the tensor in the network enables processing large grids that would otherwise be intractable in their uncompressed form. The proposed approach is particularly useful for large-scale multidimensional grid data (e.g., 3D tomography), and for tasks that require context over a large receptive field (e.g., predicting the medical condition of entire organs).
  • Publication
    Tntorch: Tensor Network Learning with PyTorch
    (JMLR, 2022) Ballester, Rafael; Usvyatsov, Mikhail; Schindler, Konrad; https://ror.org/02jjdwm75
    We present tntorch, a tensor learning framework that supports multiple decompositions (including Candecomp/Parafac, Tucker, and Tensor Train) under a unified interface. With our library, the user can learn and handle low-rank tensors with automatic differentiation, seamless GPU support, and the convenience of PyTorch’s API. Besides decomposition algorithms, tntorch implements differentiable tensor algebra, rank truncation, cross-approximation, batch processing, comprehensive tensor arithmetic, and more.