The LENA project revolves around the so-called component separation problem, which arises in a large number of applications ranging from audio processing to remote sensing, biomedical signal processing and astrophysics. In a nutshell, the observed data are assumed to be a complex mixture of so-called components, which need to be estimated.
In this context, neither the components nor the way they are mixed together is known accurately, which makes component separation a highly challenging inverse problem. Tackling it relies on two major ingredients:
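As a toy illustration, the standard starting point for component separation is a linear mixture model X = A S + N, where both the mixing matrix A and the sources S are unknown. The linear form and all sizes below are illustrative assumptions on our part; the project precisely targets settings where the mixing is more complex than this:

```python
import numpy as np

# Illustrative linear mixture model X = A S + N: each of the n_obs observed
# channels is an unknown combination of n_src unknown components plus noise.
rng = np.random.default_rng(0)
n_src, n_obs, n_samples = 2, 3, 100

S = rng.standard_normal((n_src, n_samples))          # unknown components
A = rng.standard_normal((n_obs, n_src))              # unknown mixing matrix
N = 0.01 * rng.standard_normal((n_obs, n_samples))   # instrumental noise

X = A @ S + N  # observed data: only X is available to the separation method
```

Component separation amounts to recovering both A and S from X alone, which is why strong priors on the components (such as sparsity) are essential.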
- Accurate modelling of the components is key to unmixing them. Up to now, sparse linear signal modelling has proved to be one of the most powerful approaches. However, in a large number of applications (such as astrophysics), these approaches are highly limited: the components might live on a low-dimensional manifold (e.g. polarized data, physically parameterized models, etc.) that current sparse linear models cannot describe precisely. Appropriate modelling requires switching from sparse linear to sparse non-linear models.
- A reliable and robust algorithmic framework to tackle the generally non-convex optimization problems that arise, especially when non-linear models are at play.
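The sparse linear modelling mentioned above can be sketched with a minimal example: a signal assumed to have a sparse representation in a fixed linear transform (the Fourier basis stands in here for e.g. wavelets) is denoised by soft-thresholding its coefficients. The signal, noise level and threshold are all illustrative choices of ours:

```python
import numpy as np

# Sparse linear modelling sketch: the clean signal has only a few large
# Fourier coefficients, so soft-thresholding the noisy coefficients keeps
# the signal content and suppresses most of the noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = np.fft.fft(noisy)
mag = np.abs(coeffs)
lam = 20.0  # threshold level, chosen by hand for this toy example
shrink = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
denoised = np.real(np.fft.ifft(coeffs * shrink))  # soft-thresholded estimate

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

When the data live on a manifold instead (polarized or physically parameterized signals), no fixed linear transform yields such a sparse description, which is what motivates the sparse non-linear models of the project.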
The LENA project is organized into three major components:
In the first part of the project, we will focus on extending current sparse models and the related numerical methods to the non-linear world. This includes:
- extending sparse linear representations (e.g. wavelets) to model manifold-valued data.
- investigating new algorithms that can exploit these sparse non-linear models to tackle inverse problems. These numerical methods will build on the most recent advances in proximal algorithms.
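As a reference point for these proximal methods, the basic iterative soft-thresholding algorithm (ISTA) solves the sparse linear inverse problem min_x 0.5‖y − Ax‖² + λ‖x‖₁ by alternating a gradient step on the data term with the proximal operator of the ℓ₁ norm. Sizes, step size and iteration count below are illustrative:

```python
import numpy as np

# Minimal ISTA sketch for a sparse linear inverse problem.
rng = np.random.default_rng(2)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)     # forward operator
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]           # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurements

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                     # gradient of the data term
    z = x - step * grad                          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # l1 prox
```

Each iteration is cheap and the prox step is explicit; the difficulty addressed in the project is that replacing A x by a non-linear, manifold-valued model makes the problem non-convex and the proximal steps far less straightforward.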
Learning naturally arises in two different aspects of the LENA project:
- Sparse models via machine learning: we will investigate the use of recent extensions of (potentially deep) representation learning to derive signal models, and explore their connections with sparse dictionary learning. This will include deploying learnt sparse models to tackle inverse problems in imaging, providing new connections between machine learning and signal processing.
- Component separation as a learning problem: by nature, component separation shares strong connections with machine learning. In this context, we will investigate new sparse component separation models and related numerical methods.
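The connection between component separation and learning can be sketched as a sparse matrix factorization: given X ≈ A S with S sparse, one alternates between estimating the sources (least squares followed by soft thresholding) and the mixing matrix (least squares). This toy alternation is our own simplified sketch, not the project's method; all sizes and thresholds are illustrative:

```python
import numpy as np

# Toy sparse blind source separation by alternating minimization.
rng = np.random.default_rng(3)
n_obs, n_src, n_samples = 5, 2, 300

# Synthetic sparse sources (about 10% active samples) and random mixing
S_true = rng.standard_normal((n_src, n_samples))
S_true *= rng.random((n_src, n_samples)) < 0.1
A_true = rng.standard_normal((n_obs, n_src))
X = A_true @ S_true

A = rng.standard_normal((n_obs, n_src))  # random initialization
for _ in range(50):
    # Source update: least squares, then soft thresholding (sparsity prior)
    S = np.linalg.lstsq(A, X, rcond=None)[0]
    S = np.sign(S) * np.maximum(np.abs(S) - 0.05, 0)
    # Mixing update: least squares, columns renormalized to fix the scale
    A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T
    A /= np.linalg.norm(A, axis=0) + 1e-12

recon_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
```

Seen this way, estimating the components and the mixing is a joint learning problem, which is why tools from dictionary learning and representation learning transfer naturally to component separation.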
The numerical models and methods developed in the LENA project are central to a number of applications, especially in the field of astrophysics. In the project, we will focus on three applications:
- A new look at the Planck data: the ability to use sparse non-linear physical models, together with effective numerical algorithms, will allow for a precise decomposition of the sky seen by Planck into its elementary constituents: CMB, SZ, galactic emissions, etc.
- With the current LoFAR project and the advent of the next generation of large radio telescopes such as SKA, fundamental signals such as the cosmological signal from the epoch of reionization (EoR signal) will become accessible. However, this requires designing highly effective component separation methods, which share strong similarities with those we are developing for Planck.
- Euclid is the next European space telescope, which will investigate the distribution and nature of the so-called Dark Matter. This type of matter is not observed directly but through the weak gravitational lensing effect, which is measured by evaluating the shapes of observed galaxies. However, these measurements are highly delicate: the effects are tiny and highly sensitive to all sorts of instrumental effects and noise. In this context, the numerical tools developed in the LENA project are expected to significantly improve the estimation of the lensing effect.