Researchers explore the generation of James Webb Space Telescope (JWST) imagery via image-to-image translation from the available Hubble Space Telescope (HST) data.
1. Introduction
Researchers explore the problem of predicting the visible sky images captured by the James Webb Space Telescope (JWST), hereafter referred to as ‘Webb’ [1], using the available data from the Hubble Space Telescope (HST), hereafter called ‘Hubble’ [2]. There is much interest in this type of problem in fields such as astrophysics, astronomy, and cosmology, encompassing a variety of data types and sources, including the translation of observations of galaxies in visible light [3] and predictions of dark matter [4]. The data registered from different sources may be acquired at different times, by different sensors, in different bands, and with different resolutions, sensitivities, and levels of noise. The exact underlying mathematical model for transforming data between these sources is very complex and largely unknown.
Despite the great success of image-to-image translation in computer vision, its adoption in the astrophysics community has been limited, even though abundant data are available for such tasks, which could enable sensor-to-sensor translation, conversion between different spectral bands, and adaptation among various satellite systems.
Before the launch of missions such as Euclid [5], the radio telescope Square Kilometre Array [6], and others, there has been significant interest in advancing image-to-image translation techniques for astronomical data to: (i) enable efficient mission planning, given the high complexity and cost of exhaustive space exploration, by prioritizing specific space regions using existing data; and (ii) generate sufficient synthetic data for machine learning (ML) analysis as soon as the first real images from new imaging missions become available in adequate quantities.
2. Comparison between Webb and Hubble Telescopes
In Figure 1 and Figure 2, the same part of the sky captured by the Hubble and Webb telescopes is shown in RGB format. The main differences between the Hubble and Webb telescopes are:
(i) Spatial resolution: The Webb telescope, featuring a 6.5 m primary mirror, offers superior resolution compared to Hubble’s 2.4 m mirror, which is particularly noticeable in infrared observations [7]. This enables Webb to capture images of objects up to 100 times fainter than Hubble can, as is evident in the central spiral galaxy in Figure 2.
(ii) Wavelength coverage: Hubble, optimized for ultraviolet and visible light (0.1 to 2.5 microns), contrasts with Webb’s focus on infrared wavelengths (0.6 to 28.5 microns) [8]. While this differentiation allows Webb to observe more distant and fainter celestial objects, including the earliest stars and galaxies, it is crucial to note that the infrared emission captured by Webb differs inherently from the ultraviolet or visible light observed by Hubble. The distinction lies not solely in the resolution or sensitivity of the two telescopes but also in the varying absorption of light by dust within different galaxy types. The proposed image-to-image translation method does not aim to model these observational differences explicitly. Instead, the focus is on exploring whether image-to-image translation can effectively simulate Webb telescope imagery based on the existing data from Hubble. This approach seeks to leverage the available Hubble data to anticipate and interpret the observations that Webb might deliver, without directly analyzing the spectral and compositional differences between the images captured by the two telescopes.
Figure 1. Hubble photo of Galaxy Cluster SMACS 0723.
Figure 2. Webb image of Galaxy Cluster SMACS 0723.
(iii) Light-collecting capacity: Webb’s substantially larger mirror provides over six times the light-collecting area of Hubble, which is essential for studying longer, dimmer wavelengths of light from distant, redshifted objects [7] (see the back-of-the-envelope sketch after this list). This is exemplified in Webb’s images, which reveal smaller galaxies and structures that are not visible in Hubble’s observations, highlighted in yellow in Figure 2.
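Both the resolution and the light-collecting differences reduce to simple aperture relations. The following sketch, assuming the nominal mirror diameters of 2.4 m (Hubble) and 6.5 m (Webb) and an illustrative near-infrared wavelength of 2 microns, compares a naive circular-aperture collecting-area ratio with the diffraction-limited angular resolution of each telescope. It is only an order-of-magnitude check: the real apertures lose area to the central obstruction and, for Webb, to the gaps in the segmented hexagonal mirror.

```python
import math

HUBBLE_DIAMETER_M = 2.4
WEBB_DIAMETER_M = 6.5

def circular_area(diameter_m: float) -> float:
    """Area of an idealized, unobstructed circular aperture."""
    return math.pi * (diameter_m / 2.0) ** 2

# Naive collecting-area ratio from the nominal diameters (~7.3x). The quoted
# effective areas (roughly 25.4 m^2 for Webb vs. 4.0 m^2 for Hubble) give the
# "over six times" figure cited above, since real apertures are not ideal disks.
area_ratio = circular_area(WEBB_DIAMETER_M) / circular_area(HUBBLE_DIAMETER_M)
print(f"naive collecting-area ratio: {area_ratio:.1f}x")

# Diffraction-limited angular resolution: theta ~ 1.22 * wavelength / diameter,
# evaluated at a representative near-infrared wavelength of 2 microns.
WAVELENGTH_M = 2.0e-6
for name, diameter in (("Hubble", HUBBLE_DIAMETER_M), ("Webb", WEBB_DIAMETER_M)):
    theta_rad = 1.22 * WAVELENGTH_M / diameter
    theta_arcsec = math.degrees(theta_rad) * 3600.0
    print(f"{name}: ~{theta_arcsec:.2f} arcsec diffraction limit at 2 microns")
```

At the same infrared wavelength, the larger mirror yields a proportionally smaller diffraction limit, which is why the gain is most noticeable in Webb’s infrared bands.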
3. Image-to-Image Translation
Image-to-image translation [9] is the task of transforming an image from one domain to another, where the goal is to learn the mapping between an input image and an output image. Image-to-image translation methods have shown great success in computer vision tasks, including style transfer [10], colorization [11], super-resolution [12], visible-to-infrared translation [13], and many others [14]. There are two types of image-to-image translation methods: unpaired [15] (sometimes called unsupervised) and paired [16]. Paired setups require fixed pairs of corresponding images, whereas unpaired setups do not. A minimal sketch of the paired setup is given below.
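The sketch below, assuming PyTorch is available, trains a toy convolutional generator with an L1 reconstruction loss on randomly generated tensors that stand in for co-registered Hubble (input) and Webb (target) patches. The architecture, loss, and data are placeholder choices for exposition, not the specific model or pipeline used in the underlying work.

```python
# Minimal sketch of *paired* image-to-image translation (illustrative only).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder mapping a 3-channel image to a 3-channel image."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1),   # downsample /2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),          # downsample /2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), # upsample x2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1),
            nn.Tanh(),                                           # outputs in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

generator = TinyGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1_loss = nn.L1Loss()

# One training step on a dummy batch of aligned (input, target) pairs.
# Random tensors stand in for real, co-registered Hubble/Webb patches.
hubble_batch = torch.rand(4, 3, 64, 64) * 2 - 1
webb_batch = torch.rand(4, 3, 64, 64) * 2 - 1

fake_webb = generator(hubble_batch)
loss = l1_loss(fake_webb, webb_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"L1 reconstruction loss: {loss.item():.4f}")
```

An unpaired method such as a cycle-consistent GAN would instead learn from two unaligned collections of images, at the cost of a weaker supervisory signal; the paired setup assumed here requires that each Hubble patch has a corresponding Webb patch of the same sky region.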
4. Image-to-Image Translation in Astrophysics
Image-to-image translation has been used in astrophysics for galaxy simulation [3], but these methods have mostly been used for denoising [17] optical and radio astrophysical data [18]. The task of predicting the images of one telescope from those of another using image-to-image translation remains largely under-researched.
This entry is adapted from the peer-reviewed paper 10.3390/s24041151