Arrive here, then ask the gatekeeper for the old lecture hall.

- Monday 2019-08-05: Figure out schedule, current and future challenges in calibrating and imaging data from radio interferometers.
- Tuesday 2019-08-06: Source modelling and priors.
- Wednesday 2019-08-07: Calibration challenges, cubical.
- Thursday 2019-08-08: Polarization priors.
- Friday 2019-08-09: Data compression, fast operators and closing event.

- Landman Bester (presentation)
- Fabian Kapfer
- Jakob Knollmüller
- Vanessa Moss
- Ancla Müller
- Reimar Leike (presentation)
- Rick Perley (presentation)
- Devon Powell
- Wasim Raja (presentation)
- Julian Rüstig
- Oleg Smirnov (presentation)
- Julia Stadler
- Ulrich A. Mbou Sob (presentation)
- Valentina Vacca (presentation)
- Philipp Arras
- Torsten Enßlin (presentation)

The MGVI technique implemented in NIFTy can convert almost any existing maximum-likelihood algorithm based on a Newton scheme into a fully Bayesian algorithm. This is especially true if an explicit representation of the maximum-likelihood Hessian has been implemented and if the parameter space is much smaller than the data space, so that the Hessian, or some sparse representation of it, fits into memory. In principle, besides a description of the prior, the only additional ingredient required is an implicit representation of the adjoint of the Jacobian operator. This makes for a powerful Bayesian hammer for which many nails already exist.
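A minimal sketch of this idea (not the NIFTy API; the toy forward model, its sizes, and the noise level are all invented for illustration): starting from a MAP/Newton estimate, an MGVI-style step builds the Fisher metric M = J^T N^{-1} J + 1 using only matrix-vector products with the Jacobian J and its adjoint J^T, and draws approximate posterior samples with covariance M^{-1}:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)

# Toy nonlinear forward model f(x) = A (x + 0.1 x^3); purely illustrative.
n, m = 8, 20
A = rng.standard_normal((m, n))
noise_var = 0.05

def f(x):
    return A @ (x + 0.1 * x**3)

def jac(x):
    # Explicit Jacobian of f; in practice only the action of J and J^T is needed.
    return A * (1.0 + 0.3 * x**2)

x_true = rng.standard_normal(n)
d = f(x_true) + np.sqrt(noise_var) * rng.standard_normal(m)

def neg_log_post(x):
    # Gaussian likelihood plus standard-normal prior.
    r = f(x) - d
    return 0.5 * (r @ r / noise_var + x @ x)

# The "existing maximum-likelihood/MAP algorithm": a Newton-type minimiser.
x_map = minimize(neg_log_post, np.zeros(n), method="BFGS").x

# MGVI-style addition: the Fisher metric M = J^T N^{-1} J + 1 at the mean,
# represented implicitly through matvecs with J and its adjoint.
J = jac(x_map)
def metric(v):
    return J.T @ (J @ v) / noise_var + v
M = LinearOperator((n, n), matvec=metric, dtype=float)

def draw_residual_sample():
    # eta ~ N(0, M), so solving M s = eta yields s ~ N(0, M^{-1}).
    eta = J.T @ rng.standard_normal(m) / np.sqrt(noise_var) + rng.standard_normal(n)
    s, _ = cg(M, eta)
    return s

samples = [x_map + draw_residual_sample() for _ in range(5)]
```

The point of the sketch is that everything above the metric is the pre-existing maximum-likelihood machinery; only the prior term and the adjoint Jacobian matvec are new.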

The next two steps needed to turn RESOLVE into a practical tool for e.g. MeerKAT would be to add a frequency axis, and to segment the image so that the imaging problem can be run on independent islands, à la Landman's dimensionality-reduction procedure.
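The island-segmentation step can be sketched with standard connected-component labelling (the image, threshold, and source positions below are illustrative toy values, not part of RESOLVE):

```python
import numpy as np
from scipy.ndimage import label

# Toy image with two disjoint bright regions; values are arbitrary.
img = np.zeros((32, 32))
img[4:8, 4:8] = 1.0       # first compact source
img[20:24, 18:22] = 2.0   # second, well-separated source

# Threshold the image and split it into connected "islands"
# that could then be imaged as independent sub-problems.
mask = img > 0.5
labels, n_islands = label(mask)
islands = [np.argwhere(labels == k + 1) for k in range(n_islands)]
```

Each entry of `islands` is the pixel list of one independent sub-problem, which is where the dimensionality reduction pays off.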

You can compress data in different ways, provided you have some prior knowledge about its underlying structure. Several schemes are currently in use for averaging visibilities, notably baseline-dependent averaging (BDA). I think that, since we have enough information about the data measured by the different baselines, we can use techniques from Information Field Theory to find optimal schemes. The only caveat is whether or not we have enough computing power for such expensive optimisations.
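The core of BDA can be sketched in a few lines: short baselines decorrelate slowly, so they tolerate longer time averages. The reference baseline length, window cap, and data sizes below are illustrative assumptions, not values from any real scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def bda_window(baseline_m, ref_baseline_m=500.0, max_window=16):
    # Averaging window roughly inversely proportional to baseline length;
    # the reference length and cap are made-up illustrative numbers.
    return max(1, min(max_window, int(max_window * ref_baseline_m / baseline_m)))

def average(vis, window):
    # Average complex visibilities in non-overlapping time chunks.
    n = len(vis) // window * window
    return vis[:n].reshape(-1, window).mean(axis=1)

vis = rng.standard_normal(64) + 1j * rng.standard_normal(64)
short = average(vis, bda_window(250.0))   # short baseline: long window
long_ = average(vis, bda_window(4000.0))  # long baseline: short window
```

The IFT question would then be how to choose such windows (or a more general compression operator) optimally, given a prior on the sky and the decorrelation behaviour of each baseline.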

A Bayesian approach to calibration and self-calibration can turn out to be useful not only for improving the scientific outcome of a single data set, but also for building a full model of the instrument. Each time new data are acquired, the previous model can serve as a prior and be further refined during the data reduction process. This improves the knowledge of the instrument, and therefore the overall quality of the data reduction, as soon as new data become available.
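In the simplest conjugate-Gaussian setting this prior-recycling loop can be written down directly (a scalar stand-in for an instrument parameter such as a gain; the true value, noise level, and epoch sizes are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_update(prior_mean, prior_var, data, noise_var):
    """Conjugate update of a scalar instrument parameter under Gaussian noise."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + data.sum() / noise_var)
    return post_mean, post_var

true_gain, noise_var = 1.3, 0.1
mean, var = 0.0, 10.0        # vague initial instrument prior (illustrative)
var_history = [var]

for epoch in range(4):       # each new data set refines the instrument model
    data = true_gain + np.sqrt(noise_var) * rng.standard_normal(20)
    mean, var = gaussian_update(mean, var, data, noise_var)
    var_history.append(var)  # yesterday's posterior is today's prior
```

The shrinking posterior variance is exactly the "improved knowledge of the instrument" accumulating across observing epochs.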

My impression is that many issues in current radio pipelines, and in calibration algorithms in particular, stem from their being maximum-likelihood or maximum-a-posteriori methods, which are prone to overfitting. Pushing forward the development of Bayesian calibration algorithms is therefore probably a good idea.

Beam forming might be regarded as data compression: the sky brightness is the original data (stored in the past light cone of the telescope), and the instrument's response to the sky represents the compressed data. Maybe this leads to insight into how to form better beams and steer telescopes, maybe not.