Presented by: Dinh HO TONG MINH (INRAE, France)
Location: MC 3.2
Description
Imagine being able to monitor the Earth's surface with incredible precision, revealing even the tiniest changes daily. Say goodbye to weather-dependent imaging: the European Space Agency's Copernicus Sentinel-1 SAR program does it differently. Using cutting-edge radar technology, it actively captures snapshots of our planet, day or night, and even through thick clouds. Its data fuel Interferometric SAR, or InSAR, a revolutionary technique that changed the game in surface deformation monitoring and has become the go-to tool for understanding Earth's transformations. And now, you can harness its power with ease!
Discover the game-changing Persistent Scatterers and Distributed Scatterers (PSDS) InSAR and ComSAR algorithms, available as part of our open-source TomoSAR package (https://github.com/DinhHoTongMinh/TomoSAR). Don't be intimidated by the technical jargon; our tutorial is designed to make it accessible to everyone.
In this tutorial, we'll walk you through the incredible capabilities of PSDSInSAR and ComSAR techniques using real-world Sentinel-1 images. No coding skills are required! We'll show you how to utilize user-friendly open-source software like ISCE, SNAP, TomoSAR, and StaMPS to achieve groundbreaking insights into Earth's surface movements.
Starting with a brief overview of the theory, our tutorial will guide you through applying Sentinel-1 SAR data and processing technology to identify and monitor ground deformation. In just half a day of training, you'll gain a solid understanding of radar interferometry and be able to produce time series of ground motion from a stack of SAR images.
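As a taste of the theory, the core InSAR relation that converts an unwrapped phase change into line-of-sight motion can be sketched in a few lines of Python. The wavelength value and the sign convention below are illustrative assumptions, not taken from the tutorial material:

```python
import math

# Assumed Sentinel-1 C-band wavelength (~5.55 cm); check the mission specs.
WAVELENGTH_M = 0.0555

def phase_to_los_displacement(delta_phi_rad):
    """Convert an unwrapped interferometric phase change (radians) into
    line-of-sight displacement (metres). Under the sign convention used
    here, a positive phase change means motion away from the sensor."""
    return -WAVELENGTH_M / (4.0 * math.pi) * delta_phi_rad

# One full 2*pi fringe corresponds to half a wavelength of LOS motion.
fringe_m = phase_to_los_displacement(2.0 * math.pi)
```

Applied to every pixel of a stack of unwrapped interferograms, this relation is what turns phase time series into the ground-motion time series produced in the tutorial.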
Tutorial Learning Objectives
After just a half-day of training, participants will gain the following skills:
Prerequisites
The software involved in this tutorial is open-source. For the InSAR processor, we will use ISCE/SNAP to process Sentinel-1 SAR images. We will then use TomoSAR/StaMPS for time-series processing. Google Earth will be used to visualize geospatial data. The tutorial is open to all who want to unlock the power of radar for Earth surface monitoring.
Presented by: Michael Mommert (Stuttgart University of Applied Sciences, Germany), Joëlle Hanna (University of St. Gallen, Switzerland), Linus Scheibenreif (University of St. Gallen, Switzerland), Damian Borth (University of St. Gallen, Switzerland)
Location: MC 3 Hall
Description
Deep Learning methods have proven highly successful across a wide range of Earth Observation (EO) downstream tasks, such as image classification, image-based regression, and semantic segmentation. Supervised learning of such tasks typically requires large amounts of labeled data, which are often expensive to acquire, especially for EO data. Recent advances in Deep Learning provide the means to drastically reduce the amount of labeled data needed to train models to a given performance and to improve the general performance of these models on a range of downstream tasks.
As part of this tutorial, we will introduce and showcase three such approaches that strongly leverage the multi-modal nature of EO data: Data Fusion, Multi-task Learning, and Self-supervised Learning. The fusion of multi-modal data may improve the performance of a model by providing additional information; the same applies to multi-task learning, which supports the model in generating richer latent representations of the data by learning different tasks. Self-supervised learning enables the learning of rich latent representations from large amounts of unlabeled data, which are ubiquitous in EO, thereby improving the general performance of the model and reducing the amount of labeled data necessary to successfully learn a downstream task.
We will introduce the theoretical concepts behind these approaches and provide hands-on tutorials for the participants using Jupyter Notebooks. Participants, who are required to have some basic knowledge of Deep Learning with PyTorch, will learn through realistic use cases how to apply these approaches in their own research for different data modalities (Sentinel-1, Sentinel-2, land-cover data, elevation data, seasonal data, weather data, etc.). Finally, the tutorial will provide the opportunity to discuss the participants’ use cases.
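To make the data-fusion and multi-task ideas concrete, here is a minimal PyTorch sketch of a late-fusion, two-head model for Sentinel-1/Sentinel-2 patches. All layer sizes, channel counts, and task choices are illustrative placeholders, not the architecture used in the tutorial:

```python
import torch
import torch.nn as nn

class LateFusionMultiTask(nn.Module):
    """Illustrative late-fusion model: separate encoders for Sentinel-1
    (2 channels assumed) and Sentinel-2 (13 channels assumed) patches,
    concatenated features, and two task heads (e.g. land-cover
    classification and a regression target)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.enc_s1 = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.enc_s2 = nn.Sequential(nn.Conv2d(13, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(32, n_classes)   # classification task
        self.reg_head = nn.Linear(32, 1)           # regression task

    def forward(self, s1, s2):
        # fuse the two modalities by concatenating their latent features
        z = torch.cat([self.enc_s1(s1), self.enc_s2(s2)], dim=1)
        return self.cls_head(z), self.reg_head(z)

model = LateFusionMultiTask()
logits, reg = model(torch.randn(4, 2, 32, 32), torch.randn(4, 13, 32, 32))
```

Training both heads jointly against a weighted sum of their losses is what pushes the shared features toward the richer representations described above.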
Tutorial Learning Objectives
Prerequisites
Participants are required to have basic knowledge in Deep Learning and experience with the Python programming language and the PyTorch framework for Deep Learning. Participation in the hands-on tutorials will require a Google account to access Google CoLab. Alternatively, participants can run the provided Jupyter Notebooks locally on their laptops.
Presented by: Charlotte Pelletier (Institute for Research in IT and Random Systems (IRISA), Vannes, France), Marc Rußwurm (Wageningen University, Netherlands), Dainius Masiliūnas (Wageningen University, Netherlands), Jan Verbesselt (Belgian Science Policy Office, Brussels)
Location: MC 3.4
Description
During the last decades, the number and characteristics of imaging sensors onboard satellites have constantly increased and evolved, allowing access (often free of charge) to a large amount of Earth Observation data. Recent satellites, such as Sentinel-2 or PlanetScope Doves, frequently revisit the same regions at weekly or even daily temporal resolutions. The data cubes acquired by these modern sensors, referred to as satellite image time series (SITS), make it possible to precisely monitor landscape dynamics continuously for various applications such as land monitoring, natural resource management, and climate change studies. In these applications, transforming sequences of satellite images into meaningful information relies on a precise analysis and understanding of the temporal dynamics present in SITS.
While a growing remote sensing research community studies these inherently multi-temporal aspects of our Planet, the underlying processing techniques need to be presented to a broader community. Concretely, we aim to reduce this gap in this half-day tutorial by presenting and reflecting on the breadth of methodologies and applications that require time-series data. After a brief introduction to time series, the first section will focus on time-series segmentation, whereas the second section will be devoted to deep learning techniques that exploit the temporal structure of the data. In practice, the theoretical concepts will be complemented with one hands-on practical session using Google Colab notebooks in R or Python/PyTorch.
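As a flavor of the segmentation topic, a minimal structural-break detector can be written in plain Python. This toy two-segment split only illustrates the idea behind the methods covered in the tutorial; real break-detection tools (e.g. BFAST-style approaches) also model seasonality and trend:

```python
def detect_break(series):
    """Find the index that best splits a 1-D series into two segments
    with different means, by minimizing the within-segment sum of
    squared errors over all candidate break positions."""
    n = len(series)
    best_idx, best_cost = None, float("inf")
    for k in range(2, n - 1):          # candidate break positions
        left, right = series[:k], series[k:]
        cost = sum((x - sum(left) / len(left)) ** 2 for x in left) + \
               sum((x - sum(right) / len(right)) ** 2 for x in right)
        if cost < best_cost:
            best_cost, best_idx = cost, k
    return best_idx

# NDVI-like series with an abrupt drop (e.g. deforestation) at index 6
ndvi = [0.8, 0.82, 0.79, 0.81, 0.8, 0.78, 0.3, 0.32, 0.31, 0.29, 0.3]
break_at = detect_break(ndvi)
```

The same exhaustive-split idea, applied recursively and with richer segment models, underlies many of the classical time-series segmentation methods presented in the first sections.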
The tentative schedule below will be adapted to be consistent with the scheduled breaks.
Part 1 | Introduction to time-series analysis |
Part 2 | Time-series segmentation and break detection |
Morning coffee break | |
Part 3 | Deep learning techniques for satellite image time series |
Part 4 | Practical session |
Part 5 | Closing remarks |
Tutorial Learning Objectives
This tutorial covers time-series analysis with a broad scope, including both traditional methods and advanced deep learning techniques, with a specific focus on Earth Observation. It aims to provide a theoretical basis for understanding various time-series concepts. The practical session will also allow the participants to apply the presented techniques with hands-on code in Google Colab notebooks.
The learning objectives are multiple. We expect that a participant, after following this tutorial, will:
Prerequisites
For the practical session, we expect participants to bring their laptops and have a (free) Google account to access the Colab notebooks. Knowledge of R and Python programming will be helpful to follow the provided Colab notebooks; experience in deep learning is not required.
Presented by: Manil Maskey (NASA), Anca Anghelea (ESA), Shinichi Sobue (JAXA), Naoko Sugita (JAXA)
Location: MC 3.3
Description
In this tutorial, participants will learn how to access and work with EO mission data from NASA, ESA and JAXA to extract socially significant insights on environmental changes and how to use the open-source tools to seamlessly weave their findings into compelling narratives featured on EO Dashboard. They will learn from example stories developed by NASA, ESA and JAXA scientists and create their own.
Since 2020, NASA, ESA, and JAXA have jointly developed a collaborative open-access platform – the Earth Observing Dashboard – consolidating data, computing power, and analytics services to communicate environmental changes observed by tri-agency satellite missions. The EO Dashboard, an open-source tool, enables users worldwide to explore, analyze, visualize, and learn about EO-based indicators and scientific insights covering an evolving range of thematic domains: atmosphere, oceans, biomass, cryosphere, agriculture, economy, and COVID-19. It also serves as an educational tool, providing an intuitive web-based application for users to discover diverse datasets, browse societally relevant narratives supported by such data, and understand the scientific work behind them by means of reproducible Jupyter Notebooks.
The EO Dashboard is now opening up to community-contributed narratives on environmental changes and climate, supported by tri-agency data. To this end, users are provided with sponsored access to cloud platforms with managed coding environments (JupyterHub), EO data, and storytelling tools to conduct scientific analyses, extract insights, and package their findings in compelling stories that can be featured on the EO Dashboard.
Tutorial Learning Objectives
The tutorial offers a hands-on experience with the EO Dashboard's features, guided by the EO Dashboard team. Participants engage in practical exercises using the Jupyter Environment (EOxHub), learning how to leverage NASA’s eoAPI and JAXA’s web services, access and analyze Copernicus Sentinel datasets, perform analytics on multiple datasets, extract insights from EO data, and prepare dashboard-ready stories.
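As a purely conceptual illustration (not an EO Dashboard API), a dashboard-style indicator is often just a simple computation over time series extracted from EO data, such as a percent anomaly against a baseline period:

```python
def indicator_anomaly(values, baseline):
    """Express each observation as a percent anomaly relative to the
    mean of a baseline period -- the kind of simple derived indicator
    shown on environmental dashboards. Function name and data are
    illustrative only."""
    ref = sum(baseline) / len(baseline)
    return [100.0 * (v - ref) / ref for v in values]

# e.g. monthly pollutant column averages vs. a pre-event baseline
anomalies = indicator_anomaly([80.0, 95.0, 120.0], [100.0, 100.0, 100.0])
```

In the hands-on sessions, the inputs to computations like this would come from the Sentinel datasets accessed through the platform, and the outputs would be packaged into a narrative.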
Prerequisites
The tutorial is self-contained: participants will be guided and provided with the necessary tools and support for the hands-on exercises. Participants are expected to have a basic level of knowledge in Earth Observation, EO image analysis, statistics, and Python.
Presented by: Carlos López-Martínez (Universitat Politècnica de Catalunya UPC, Spain) and Avik Bhattacharya (Indian Institute of Technology Bombay, India)
Location: Conference Hall 2
Description
Polarimetric Synthetic Aperture Radar (PolSAR) has emerged as a valuable enhancement to traditional single-channel Synthetic Aperture Radar (SAR) applications. The inherent polarization characteristics in PolSAR data significantly contribute to its superiority over conventional SAR images, providing diverse information for various applications. PolSAR data utilization extends beyond quad-pol or full polarimetry to coherent dual-pol and compact-pol, offering a notable advantage by incorporating polarimetric information without compromising crucial image parameters like resolution, swath, and signal-to-noise ratio (SNR). In agriculture, PolSAR images are indispensable for precisely monitoring crop conditions, types, and health. Analyzing polarization responses aids farmers and agricultural experts in gaining valuable insights into crop growth stages, health, and detecting diseases or pest infestations.
Environmental monitoring and management benefit significantly from PolSAR data, crucial for mapping and assessing natural resources, accurate land cover classification, and distinguishing surfaces like forests, wetlands, and urban areas. PolSAR's applications extend into forestry, assisting in forest structure assessment, biomass estimation, and deforestation monitoring. Furthermore, PolSAR images play a crucial role in disaster management, evaluating areas impacted by floods, earthquakes, or landslides. The polarimetric information enables the differentiation of various terrain types and the identification of disaster-prone areas, facilitating timely and targeted response efforts. The versatility of PolSAR data continues to unveil new possibilities across multiple domains, enhancing our ability to observe and understand the Earth's dynamic processes.
Recognizing the paramount importance of PolSAR data, dedicated satellite sensors such as SAOCOM (CONAE), RCM (CSA&MDA), RISAT2 (ISRO), ALOS-2 (JAXA), BIOMASS & ROSE-L (ESA), and NISAR (NASA&ISRO) have been meticulously designed. The upcoming ESA BIOMASS mission is poised to mark a groundbreaking milestone as the first operational use of quad-pol-only data from space. Despite the myriad benefits offered by PolSAR over single-pol SAR, harnessing its information requires robust methodologies. Ongoing global research endeavors are focused on developing innovative retrieval algorithms for PolSAR, emphasizing the importance of a strong foundation in polarimetric theory to avoid misapplications that may yield erroneous results.
This tutorial aims to provide a comprehensive overview of physics and statistics and is dedicated to engaging students and professionals, providing vital knowledge to harness data from current and upcoming PolSAR missions for scientific research and societal applications.
Tutorial Learning Objectives
This tutorial aims at providing an overview of the potential of polarimetry and polarimetric SAR data in the different ways they are available to final users. It spans three main aspects of this kind of data: physical, mathematical, and statistical properties. The description starts with fully polarimetric SAR data and also devotes special attention to dual and compact formats. The tutorial discusses some of the most relevant applications of this data, enhancing the uniqueness of the information they provide. We include freely available data sources and discuss the primary abilities and limitations of free and open-source software. The tutorial concludes with a discussion of the future of SAR Polarimetry for remote sensing and Earth observation and the role of these data in artificial intelligence and machine learning.
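To illustrate the kind of physical interpretation polarimetry enables, here is a minimal Python sketch of the Pauli decomposition of a quad-pol scattering matrix. The monostatic convention (S_hv = S_vh) is assumed and the example target is idealized:

```python
def pauli_components(shh, shv, svv):
    """Pauli-basis components of a quad-pol scattering matrix
    (monostatic case). |k1|^2 is associated with surface (odd-bounce)
    scattering, |k2|^2 with double-bounce, and |k3|^2 with volume
    scattering; they are often mapped to blue/red/green in Pauli RGB
    composites."""
    k1 = (shh + svv) / 2 ** 0.5
    k2 = (shh - svv) / 2 ** 0.5
    k3 = 2 ** 0.5 * shv
    return k1, k2, k3

# An ideal trihedral (odd-bounce) target: Shh = Svv, Shv = 0
k1, k2, k3 = pauli_components(1 + 0j, 0j, 1 + 0j)
```

For this target all the power falls into the odd-bounce channel, which is exactly the kind of scattering-mechanism separation that makes PolSAR data interpretable in terms of surface physics.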
Prerequisites
This lecture is intended for scientists, engineers and students engaged in the fields of Radar Remote Sensing and interested in Polarimetric SAR image analysis and applications. Some background in SAR processing techniques and microwave scattering would be an advantage, and familiarity with matrix algebra is required.
Presented by: Jinchang Ren (Robert Gordon University, Aberdeen, UK), Genyun Sun (China University of Petroleum (East China), Qingdao, China), Hang Fu (China University of Petroleum (East China), Qingdao, China), Ping Ma (Robert Gordon University, Aberdeen, UK) and Jaime Zabalza (University of Strathclyde, Glasgow, UK)
Location: Conference Hall 4
Description
As an emerging technique, hyperspectral imaging (HSI) has attracted increasing attention in remote sensing, offering uniquely high-resolution spectral data across two-dimensional images and unlocking powerful capabilities in remote sensing, earth observation, and beyond. Its broad spectral coverage, spanning from visible light to near-infrared wavelengths, enables the detection of subtle distinctions within scenes for inspection of land, ocean, cities, and more. In addition, HSI is also at the forefront of emerging laboratory-based data analysis applications, including those related to food quality assessment, medical diagnostics, and the verification of counterfeit goods and documents.
However, the utilization of hyperspectral data presents its own set of challenges. Firstly, due to factors such as atmospheric interference and sensor limitations, HSI data often contend with noise, which can compromise the ability to distinguish objects within a scene effectively. Additionally, HSI data exhibit high dimensionality, necessitating substantial computational resources for processing and analysis. Moreover, obtaining a sufficient quantity of accurate ground-truth data in practical scenarios can pose a formidable obstacle. Singular Spectrum Analysis (SSA), a recent and versatile tool for analysing time-series data, has proven to be an effective approach for denoising and feature extraction of HSI data. This comprehensive tutorial aims to serve as an all-encompassing resource on SSA techniques and their myriad applications in the realm of HSI data analytics.
We embark on this journey commencing with the fundamental principles of SSA and progressing to cutting-edge SSA variants, including 1D SSA, 2D SSA, 3D SSA, adaptive solutions, transform-domain extensions, and fast implementations, each tailored to address a diverse array of challenges, including dimensionality reduction, noise mitigation, feature extraction, object identification, data classification, and change detection within hyperspectral remote sensing. Our overarching objective is to engage researchers and geoscientists actively involved in HSI remote sensing to learn the particular technique of SSA. We seek to empower them with the advanced SSA knowledge and techniques necessary to tackle complex real-world problems. In particular, this tutorial will emphasize practical applications, offering insights into working with various types of hyperspectral datasets and tackling different remote sensing tasks.
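The basic 1D SSA building block can be sketched in a few lines of NumPy. This illustrative version (window length and rank chosen arbitrarily) embeds the series in a Hankel trajectory matrix, truncates its SVD, and reconstructs by anti-diagonal averaging; the 2D/3D and adaptive variants covered in the tutorial generalize this idea:

```python
import numpy as np

def ssa_1d(x, window, rank):
    """Minimal 1-D Singular Spectrum Analysis: embed, truncate the SVD
    to `rank` components, and reconstruct by Hankel (anti-diagonal)
    averaging."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel matrix
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
    rec = np.zeros(n)                 # anti-diagonal averaging back to 1-D
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += low_rank[:, j]
        counts[j:j + window] += 1
    return rec / counts

t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
smooth = ssa_1d(noisy, window=30, rank=2)
```

Applied along the spectral dimension of an HSI cube, pixel by pixel, this is the denoising/feature-extraction role SSA plays in the applications above.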
Tutorial Learning Objectives
The learning objectives of this tutorial are to ensure that participants will leave with a deep understanding of SSA and its variants, and their practical applications in the context of hyperspectral remote sensing. Specifically, the objectives include:
Prerequisites
Basic concepts of remote sensing, hyperspectral image and image processing as well as certain experience in hyperspectral remote sensing applications, would be beneficial though not a must.
Presented by: Ronny Hänsch (German Aerospace Center (DLR)), Devis Tuia (Ecole Polytechnique Fédérale de Lausanne (EPFL Valais)), Claudio Persello (University of Twente)
Location: Conference Hall 1
Description
Following the success of Deep Learning in Remote Sensing during the last years, new topics arise that address remaining challenges that are of particular importance in Earth Observation (EO) applications including interpretability and uncertainty quantification, domain shifts, label scarcity, and integrating EO imagery with other modalities such as language. The aim of this tutorial is threefold: First, to provide insights and a deep understanding of the algorithmic principles behind state-of-the-art machine learning approaches. Second, to discuss modern approaches to tackle label scarcity, interpretability, and multi-modality. Third, to illustrate the benefits and limitations of machine learning with practical examples, in particular in the context of the sustainable development goals (SDGs).
Tutorial Learning Objectives
Prerequisites
Suitable for PhD students, research engineers, and scientists. Basic knowledge of machine learning is required.
Presented by: Gabriele Cavallaro (Forschungszentrum Jülich), Rocco Sedona (Forschungszentrum Jülich), Manil Maskey (NASA), Iksha Gurung (University of Alabama in Huntsville), Sujit Roy (NASA), Muthukumaran Ramasubramanian (NASA)
Location: Conference Hall 3
Description
Recent advancements in Remote Sensing (RS) technologies, marked by increased spectral, spatial, and temporal resolution, have led to a significant rise in data volumes and variety. This trend poses notable challenges in efficiently processing and analyzing data to support Earth Observation (EO) applications in operational scenarios. Concurrently, the evolution of Machine Learning (ML) and Deep Learning (DL) methodologies, particularly deep neural networks with extensive tunable parameters, has necessitated the development of parallel algorithms that ensure high scalability performance. As a result, data-intensive computing methodologies have become crucial in tackling the challenges in geoscience and RS domains.
In recent years, we have witnessed a swift progression in High-Performance Computing (HPC) and cloud computing, impacting both hardware architectures and software development. Big tech companies have been focusing on AI supercomputers and cloud solutions, indicating a shift in computing technologies beyond traditional scientific computing, which was traditionally driven mainly by large government programs. This era has seen a surge in transformers and self-supervised learning, leading to the development of Foundation Models (FMs). For example, Large Language Models (LLMs) have become foundational in natural language processing. These models have revolutionized how we interact with and analyze text-based data, demonstrating remarkable capabilities. FMs are also becoming popular among EO and RS researchers because of their potential, as they leverage self-supervised and multimodal learning to extract intricate patterns from diverse data sources, addressing the challenges of limited labeled EO data and the disparities between conventional computer vision problems and EO-based tasks.
Harnessing HPC and cloud computing is necessary for these models, as they require extensive computational power for training, ensuring that the benefits of FMs are fully realized and accessible within the resource-intensive RS domain. FMs also allow for effective downstream use case adaptation using fine-tuning. FM fine-tuning reduces the amount of data and time required to achieve similar or higher levels of accuracy for a downstream use case compared to other ML and DL techniques.
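The mechanics of such fine-tuning can be sketched in PyTorch: freeze the pretrained backbone and train only a small task head, so far fewer parameters (and labels) are needed. The model below is a tiny stand-in, not an actual EO foundation model; all names and sizes are placeholders:

```python
import torch
import torch.nn as nn

# Stand-in "foundation model": a pretrained encoder plus a fresh task
# head. In practice the encoder would be a large pretrained geospatial FM.
encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 5)            # downstream task, e.g. 5 crop classes

for p in encoder.parameters():     # freeze the pretrained backbone
    p.requires_grad = False

# Only the (few) head parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

features = encoder(torch.randn(8, 128))   # frozen feature extraction
logits = head(features)                   # trainable task prediction
```

Full or partial unfreezing, adapters, and distributed training across HPC nodes are the heavier-weight variants of this same pattern that the tutorial addresses.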
Tutorial Learning Objectives
A hybrid approach is necessary: Foundation Models (FMs) are best pre-trained or fine-tuned on High-Performance Computing (HPC) systems, while inference on new data is conducted in a cloud computing environment. This strategy reduces the costs associated with training and enables optimized real-time inference. Furthermore, it facilitates the transition of a research FM from an HPC environment into an operational cloud service.
The initial segment of the tutorial will concentrate on theories addressing the most recent advancements in High-Performance Computing (HPC) systems and Cloud Computing services, along with the basics of Foundation Models (FMs). Attendees will acquire knowledge on how HPC and parallelization techniques facilitate the development and training of large-scale FMs, as well as their optimal fine-tuning for specific downstream applications. Additionally, they will explore how Machine Learning (ML) and FM models are deployed into Cloud infrastructure for widespread public use and consumption.
For the practical sections of the tutorial, participants will be provided with credentials to access the HPC systems at the Jülich Supercomputing Centre (Forschungszentrum Jülich, Germany) and AWS Cloud Computing resources. To save setup time during the tutorial, such as initializing environments from scratch and installing packages, the course organizers will prepare the necessary resources and tools in advance. This preparation allows participants to immediately start working on the exercises using pre-implemented algorithms and datasets. Additionally, attendees are welcome to bring their own applications and data for fine-tuning.
The course will guide participants through the lifecycle of a Machine Learning (ML) project, focusing on fine-tuning a Foundation Model (FM) and optimizing it for an Earth Observation (EO) downstream use case. They will learn how to employ HPC distributed Deep Learning (DL) frameworks to accelerate training efficiently. The discussion will also cover the use of data standards and tools for loading geospatial data and enhancing training efficiency. Finally, participants will leverage cloud computing resources to develop a pipeline that deploys the model into a production environment and evaluates it using new and real-time data.
Preliminary Agenda
Morning Session | |
---|---|
Part 1 | Lecture 1: Introduction and Motivations |
Part 2 | Lecture 2: Levels of Parallelism, High Performance Computing, and Foundation Models |
Part 3 | Coffee Break |
Part 4 | Lecture 3: Fine-tune Foundation Model in High Performance Computing |
Afternoon session | |
Part 5 | Lecture 4: Foundation Model in Cloud for Live Inferencing |
Part 6 | Coffee Break |
Part 7 | More time for hands-on, Q&A and wrap-up |
Prerequisites
This tutorial is designed for individuals interested in learning about the integration of High-Performance Computing (HPC) and cloud computing for optimizing Foundation Models (FMs). While we recommend participants have a background in several key areas, such as machine learning, programming, and cloud computing, we welcome individuals who may only have experience in a few of these domains. The tutorial is structured to provide all the necessary information, ensuring that even those with limited experience in certain areas can fully engage and benefit from the comprehensive content offered.
Presented by: Anne Fouilloux (Simula Research Laboratory, Norway), Tina Odaka (IFREMER, France), Jean-Marc Delouis (CNRS, France), Alejandro Coca-Castro (The Alan Turing Institute, UK), Pier Lorenzo Marasco (Provare LTD, UK), Armagan Karatosun (ECMWF, Germany), Mohanad Albughdadi (ECMWF, Germany), Vasileios Baousis (ECMWF, UK)
Location: MC 3.4
Description
In this tutorial, participants will learn how to 1) navigate the Pangeo ecosystem for scalable Earth Science workflows and 2) exploit Earth Observation (EO) data, in particular from Copernicus, with Artificial Intelligence (AI) using open and reproducible tools and methodologies from the Horizon Europe EO4EU project, the Pangeo community, and other open-source projects that leverage the Pangeo ecosystem. Participants will gain practical experience in applying AI techniques to Copernicus datasets through hands-on sessions. By the end of this tutorial, participants will possess the skills and knowledge needed to harness the power of AI for transformative EO applications using Pangeo ML packages (e.g., xbatcher and zen3geo) and other advanced packages for handling EO data built on the Pangeo stack for ML/AI (e.g., DeepSensor). Participants will also be introduced to some computer vision foundation models hosted on the EO4EU platform, learn how to prepare earth observation data, prompt these models to perform segmentation and object detection tasks, and visualise the obtained results using visualisation and GIS tools.
All the training material will be collaboratively developed and made available online under a CC BY 4.0 licence. To facilitate user onboarding, the Pangeo@EOSC platform will be made available to participants; however, all the information needed to set up and run the training material on different platforms will also be provided. This tutorial provides a comprehensive introduction along with hands-on examples to help you understand how these technologies can be used for Earth science data analysis and interpretation.
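The core idea behind batch-generation tools like xbatcher, slicing a large data cube into model-ready patches, can be shown with a stand-alone sketch over array indices. xbatcher itself operates on labeled xarray objects; this is only the underlying concept:

```python
def patch_origins(height, width, patch=64, stride=64):
    """Yield (row, col) origins of patches tiling a raster of the given
    size -- the sliding-window logic that batch generators such as
    xbatcher apply to data cubes before feeding patches to a model.
    This stand-alone sketch works on indices only."""
    for r in range(0, height - patch + 1, stride):
        for c in range(0, width - patch + 1, stride):
            yield r, c

# A 128 x 256 raster tiles into a 2 x 4 grid of 64-pixel patches.
origins = list(patch_origins(128, 256))
```

With an overlap (stride smaller than patch) the same generator produces the overlapping training windows commonly used for segmentation models.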
Tutorial Learning Objectives
By the end of this tutorial, learners will be able to:
Prerequisites
Presented by: Brianna Lind (NASA LP DAAC), Dana K. Chadwick (NASA JPL), David R Thompson (NASA JPL), Philip Brodrick (NASA JPL), Christiana Ade (NASA JPL), Erik Bolch (NASA LP DAAC)
Location: MC 3 Hall
Description
The Earth Surface Mineral Dust Source Investigation (EMIT) instrument aboard the International Space Station (ISS) measures visible to short-wave infrared (VSWIR) wavelengths and can be used to map Earth’s surface mineralogy in detail. Here we explore the science behind the EMIT mineralogy products and apply them in a repeatable scientific workflow. We will introduce imaging spectroscopy concepts and sensor specific considerations for exploring variation in surface mineralogy. Participants will learn the basics of VSWIR imaging spectroscopy, how minerals are identified and band depths are calculated, and how band depths are translated into mineral abundances. Participants will also learn how to find, access, and apply EMIT mineralogical data using open source resources.
Tutorial Learning Objectives
In this tutorial, we will explain some of the nuances of the spectral library and methods used for mineral identification, show how to orthorectify the data, explain how to interpret band depth, aggregate the targets identified by the classification into the 10 EMIT minerals related to surface dust, and translate band depth into spectral abundance. The EMIT Level 2B Estimated Mineral Identification and Band Depth and Uncertainty (EMITL2BMIN) Version 1 data product provides estimated mineral identification and band depths in a spatially raw, non-orthocorrected format. Mineral identification is performed on two spectral groups, which correspond to different regions of the spectra but often co-occur on the landscape. These estimates are generated using the Tetracorder system and are based on EMITL2ARFL reflectance values. The EMIT_L2B_MINUNCERT file provides band depth uncertainty estimates calculated using surface reflectance uncertainty values from the EMITL2ARFL data product. The band depth uncertainties are presented as standard deviations. The fit score for each mineral identification is also provided as the coefficient of determination (r2) of the match between the continuum-normalized library reference and the continuum-normalized observed spectrum. Associated metadata indicate the name and reference information for each identified mineral; additional information about aggregating minerals into different categories is available in the emit-sds-l2b repository and will be available in subsequent data products.
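The band-depth idea can be illustrated with a minimal continuum-removal computation. The wavelengths and reflectances below are made-up values, and EMIT's actual products come from full Tetracorder library fitting rather than this three-point sketch:

```python
def band_depth(wl, refl, left, center, right):
    """Continuum-removed band depth of an absorption feature: a straight
    line (the continuum) is drawn between two shoulder wavelengths and
    the depth is 1 - R_center / R_continuum."""
    def r_at(w):
        return refl[wl.index(w)]
    # linear continuum between the shoulders, evaluated at the band center
    frac = (center - left) / (right - left)
    r_cont = r_at(left) + frac * (r_at(right) - r_at(left))
    return 1.0 - r_at(center) / r_cont

wl = [2100, 2200, 2300]          # wavelengths in nm (illustrative)
refl = [0.50, 0.30, 0.50]        # a symmetric absorption at 2200 nm
depth = band_depth(wl, refl, 2100, 2200, 2300)
```

A deeper absorption relative to the continuum generally indicates a higher abundance of the absorbing mineral, which is the link between band depth and the spectral-abundance translation covered in the tutorial.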
Prerequisites
The prerequisites for this tutorial include basic familiarity with remote sensing and Python, an Earthdata Login account, and a GitHub account. All participants need to bring their laptops on the day of the event.
Presented by: Franz J. Meyer (Alaska Satellite Facility), Heidi Kristenson (Alaska Satellite Facility), Joseph H. Kennedy (Alaska Satellite Facility), Gregory Short (Alaska Satellite Facility), Alexander Handwerger (Jet Propulsion Laboratory)
Location: MC 3.2
Description
Managed by the Jet Propulsion Laboratory (JPL), the Observational Products for End-Users from Remote Sensing Analysis (OPERA; https://www.jpl.nasa.gov/go/opera) project recently released two products derived from Sentinel-1 (S1) SAR that can accelerate your path to scientific discovery: Radiometric Terrain Corrected (RTC) and Coregistered Geocoded Single Look Complex (CSLC) products. Both are available through the NASA Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) for immediate use.
The RTC-S1 products provide terrain-corrected burst-based Sentinel-1 backscatter at 30-m pixel spacing. Delivered in GeoTIFF format, they are available for all S1 data acquired over land (excluding Antarctica) after October 2023. The OPERA CSLC-S1 products are generated over North America and U.S. Territories, going back to the start of the S1 mission. They are burst-based, fully geocoded SLCs, precisely aligned to a common grid to enable out-of-the-box InSAR processing.
This tutorial will first summarize the properties of these OPERA products, including their data formats and burst-based definition. Attendees will be introduced to a range of data discovery and analysis tools developed by ASF to make working with OPERA data easy. Attendees will practice discovering and accessing OPERA RTC and CSLC data using ASF’s interactive discovery interface, Vertex. We will also demonstrate programmatic access patterns using the open-source asf_search Python module.
In addition to these traditional tools, attendees will be introduced to selected new distribution and analysis mechanisms available for OPERA: For RTC-S1 data, ASF publishes image services that allow users to interact with OPERA RTCs in web maps or a desktop GIS environment. Attendees will also use ASF services such as mosaicking and subsetting, available for OPERA data through dedicated Python resources. Finally, we will demonstrate OPERA time-series analysis workflows using ASF’s cloud-hosted OpenScienceLab JupyterHub platform.
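Once RTC data are in hand, a common first analysis step is converting backscatter from linear power to decibels before visualization or thresholding. The sketch below assumes linear-power gamma-0 pixel values, which should be verified against the RTC-S1 product documentation:

```python
import math

def power_to_db(gamma0_power):
    """Convert backscatter values from linear power to decibels
    (dB = 10 * log10(power)). Assumes strictly positive inputs;
    no-data pixels must be masked out beforehand."""
    return [10.0 * math.log10(p) for p in gamma0_power]

db = power_to_db([1.0, 0.1, 0.01])   # -> roughly 0, -10, -20 dB
```

The dB scale compresses the large dynamic range of SAR backscatter, which is why most of the web-map and GIS visualizations mentioned above display it.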
Tutorial Learning Objectives
Prerequisites
To follow this tutorial, make sure you have signed up for an Earthdata Login at https://urs.earthdata.nasa.gov/. Attendees will need to bring a laptop able to connect to the internet in order to participate in the hands-on tutorials. Some basic command line and Python skills are helpful for a subset of the demonstrations. We will provide a list of resources referenced during the tutorial. No preparation prior to the course is required.
Presented by: Daniele Picone (Univ. Grenoble Alpes, CNRS, Grenoble INP, France) and Mauro Dalla Mura (Univ. Grenoble Alpes, CNRS, Grenoble INP and Institut Universitaire de France (IUF), France)
Location: MC 3.3
Description
The evolution of passive optical remote sensing imaging sensors has significantly expanded the ability to acquire high-resolution imagery for various applications, including environmental monitoring, agriculture, urban planning, and disaster management. Typical acquisitions are multispectral and hyperspectral images, which provide information on a scene in the visible and infrared domains with spectral channels numbering from a few to hundreds. However, the acquired images often suffer from distortions, noise, and other artifacts, requiring robust image restoration techniques to enhance the spatial and spectral quality of the data. Examples of these problems include denoising, deblurring, inpainting, destriping, demosaicing, and super-resolution (e.g., pansharpening).
Image restoration in optical remote sensing is a challenging task that continues to attract attention from the community, as shown by the large number of techniques proposed in the literature.
Early methods addressed image restoration problems empirically, developing ad-hoc strategies for the problem at hand. Model-based techniques cast image restoration as an inverse problem, in which the desired product is obtained through Bayesian inference and approached as a variational problem. More recently, data-driven approaches based on deep learning have shown remarkable effectiveness in learning complex observation-reference relationships from large datasets, but they often lack interpretability and the ability to generalize to different degradation problems. Hybrid approaches (e.g., plug-and-play and algorithm unrolling) have started to appear, providing results that are both interpretable and effective.
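To make the variational idea concrete, here is a minimal self-contained sketch (ours, not the tutorial's materials): a 1-D signal is denoised by gradient descent on a data-fidelity term plus a Tikhonov smoothness prior. Real restoration problems involve 2-D images, realistic degradation operators, and better solvers.

```python
import numpy as np

# Minimal variational restoration sketch: denoise a 1-D signal by
# gradient descent on
#   J(x) = ||x - y||^2 + lam * sum_i (x[i+1] - x[i])^2,
# i.e. a data-fidelity term plus a Tikhonov smoothness prior.
def tikhonov_denoise(y, lam=2.0, step=0.05, iters=1000):
    x = y.copy()
    for _ in range(iters):
        # Gradient of the smoothness term: discrete Laplacian of x
        # with free boundary conditions.
        lap = np.zeros_like(x)
        lap[1:-1] = 2.0 * x[1:-1] - x[:-2] - x[2:]
        lap[0] = x[0] - x[1]
        lap[-1] = x[-1] - x[-2]
        x = x - step * (2.0 * (x - y) + 2.0 * lam * lap)
    return x

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
noisy = clean + 0.3 * rng.standard_normal(200)
restored = tikhonov_denoise(noisy)
# restored should have lower mean squared error against clean than noisy does.
```

Swapping the quadratic prior for a learned or plug-and-play denoiser is exactly the kind of hybrid approach discussed above.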
This tutorial provides a comprehensive overview of image restoration problems in optical remote sensing with a detailed presentation of the main classes of techniques. Theoretical concepts are blended with hands-on practical examples that put the described ideas into practice. Participants will gain insight into the challenges specific to optical imagery in remote sensing and learn how to leverage both classical and advanced algorithms to address issues ranging from denoising to image sharpening.
Tutorial Learning Objectives
The main goals of this tutorial can be summarized as follows:
Prerequisites
Basic knowledge of remote sensing concepts and image processing fundamentals is recommended. Familiarity with Python is useful for the practical session. Working examples will be provided as Jupyter notebooks. The participants are invited to bring their laptop equipped with a recent Python installation. The Python libraries needed will be shared before the tutorial.
Presented by: Antonio Iodice (Università di Napoli Federico II, Italy) and Gerardo Di Martino (Università di Napoli Federico II, Italy)
Location: MC 3.2
Description
The estimation of wind speed and sea-state parameters is of fundamental importance in weather forecasting and environmental monitoring, as well as in support of ship traffic monitoring, where it serves as ancillary information. Microwave remote sensing instruments play a key role in estimating physical parameters from large-scale ocean observations. This relies on the availability of direct scattering models able to describe the interaction between the incident electromagnetic field and the sea surface, thus providing meaningful relationships between the measured return and the physical parameters of interest.
The modeling of the electromagnetic return from the sea surface requires, as a first fundamental step, an appropriate description of the surface itself. Indeed, the sea surface's complex multi-scale behavior can be well characterized as a superposition of waves with different wavelengths, each generated by a different physical mechanism (basically, wind forcing and combinations of gravity and water surface tension). It is convenient and customary to model this surface as a random process, described through statistical parameters such as the root-mean-square (rms) height, rms slopes, and power spectral density (PSD). Several PSD models have been developed over the years, aimed at capturing the main multiscale features of the sea surface.
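As a toy illustration of these descriptors (ours, not the tutorial's code), the sketch below synthesizes a 1-D random rough surface with a Gaussian correlation and reads off its rms height and rms slope; all parameter values are arbitrary, and real sea-surface simulation uses the measured PSD models discussed above.

```python
import numpy as np

# Toy 1-D random rough surface with (approximately) Gaussian autocorrelation,
# characterized by its rms height and rms slope. Illustrative sketch only.
def rough_surface(n=4096, dx=0.01, rms_height=0.05, corr_len=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.arange(n) * dx
    # Filter white noise in the spectral domain with a Gaussian window
    # to impose the correlation length.
    noise = rng.standard_normal(n)
    kx = np.fft.fftfreq(n, dx) * 2.0 * np.pi
    gauss = np.exp(-((kx * corr_len) ** 2) / 4.0)
    h = np.fft.ifft(np.fft.fft(noise) * gauss).real
    h *= rms_height / h.std()  # rescale to the target rms height
    return x, h

x, h = rough_surface()
rms_slope = np.gradient(h, x).std()  # empirical rms slope of the realization
```

Feeding such realizations (in 2-D) to a scattering model is the basic workflow behind numerical validation of the analytical solutions presented next.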
Once an appropriate description of the surface is available, one can move to the second step, i.e., the evaluation of the scattered field, which for the randomly rough sea surface can be obtained through approximate analytical solutions under the Kirchhoff approach (KA) (e.g., the geometrical optics and physical optics approximations) and the Small Perturbation Method (SPM). These models, however, have a limited validity range in terms of bistatic configurations and surface roughness, so more advanced models have been developed and are frequently used, such as the Two-Scale Model (TSM) and the Small Slope Approximation (SSA). Standard TSM and SSA, however, require the numerical evaluation of possibly multi-dimensional integrals. Fully analytical formulations of TSM and SSA have recently been developed by the speakers of this tutorial.
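A small worked example of what SPM implies in practice: to first order, monostatic backscatter is dominated by the Bragg-resonant component of the sea spectrum, whose wavelength follows directly from the radar wavelength and incidence angle. The sketch below (an illustration of ours, not the speakers' code) evaluates that resonance condition.

```python
import math

# Bragg resonance condition for monostatic backscatter under first-order SPM:
# the resonant sea-wave wavelength is lambda_radar / (2 * sin(theta_inc)).
def bragg_wavelength(radar_wavelength_m, incidence_deg):
    return radar_wavelength_m / (2.0 * math.sin(math.radians(incidence_deg)))

# C-band (~5.6 cm) at 30 deg incidence resonates with ~5.6 cm sea waves.
lam_bragg = bragg_wavelength(0.056, 30.0)
```

This is why C-band sensors such as Sentinel-1 are sensitive to the centimeter-scale capillary-gravity waves generated by local wind forcing.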
This tutorial provides a concise but complete description of the above-mentioned topics. In particular, the main sea surface statistical descriptors and PSD models are discussed. The main details of the models for the evaluation of electromagnetic scattering from rough surfaces in general, and the sea surface in particular, are provided, with specific attention to the application scenario and validity limits of each model. The general bistatic configuration is considered, paving the way for applications to a wide set of microwave sensors, such as Synthetic Aperture Radar (SAR), altimeters, scatterometers, and Global Navigation Satellite System reflectometry (GNSS-R); the basic characteristics of each of these sensors will also be briefly discussed. The hands-on part will focus on the inversion of physical parameters: relevant Matlab code and sample data will be provided to the audience.
Tutorial Learning Objectives
After attending this tutorial, participants should have an understanding of:
Prerequisites
Suitable for PhD students, research engineers, and scientists. Basic knowledge of electromagnetics and of random variables and processes would be useful. In the "hands-on" part, Matlab code will be provided to participants, so it would be helpful for participants to bring their own laptops. There are no special requirements.
Presented by: James L. Garrison (Purdue University), Adriano Camps (Universitat Politècnica de Catalunya (UPC)), and Estel Cardellach (Spanish National Research Council - ICE-CSIC, IEEC)
Location: MC 3.4
Description
Although originally designed for navigation, signals from Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo, and COMPASS, exhibit strong reflections from the Earth's land and ocean surfaces. Rough surface scattering modifies the properties of the reflected signals, and several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture.
Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to make measurements of sea state, sea level, soil moisture, ice and snow as well as inter-comparisons with microwave radiometry.
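To give a flavor of how such tower-based retrievals work, the classic ground-based GNSS-R altimetry relation ties the reflected signal's excess path delay to the antenna height above the reflecting surface. The sketch below (an illustration of ours, not campaign code) inverts it for height.

```python
import math

# Classic ground-based GNSS-R altimetry relation: a receiver at height h above
# a reflecting surface sees the reflected signal delayed by ~2*h*sin(e)
# in path length relative to the direct signal, where e is the satellite
# elevation angle. Inverting the measured delay retrieves the surface height.
def height_from_delay(extra_path_m, elevation_deg):
    return extra_path_m / (2.0 * math.sin(math.radians(elevation_deg)))

# A 10 m extra path at 30 deg elevation corresponds to a 10 m receiver height.
h_receiver = height_from_delay(10.0, 30.0)
```

Tracking this height over time relative to a fixed antenna is one simple route to sea-level monitoring with GNSS-R.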
GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments, whose power and mass can be made low enough for deployment on small satellites, balloons, and UAVs. The first research sets of satellite-based GNSS-R data were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014), and the 8-satellite CYGNSS constellation (2016). HydroGNSS, to be launched in 2024, will use dual-frequency and dual-polarized GNSS-R observations, with principal science goals addressing land surface hydrology (soil moisture, inundation, and the cryosphere). The availability of spaceborne GNSS-R data, and the development of new applications from these measurements, is expected to increase significantly following the launch of these new satellite missions and other smaller ones (ESA’s PRETTY and FSSCat; China’s FY-3E; Taiwan’s FS-7R).
Recently, GNSS-R methods have been applied to satellite transmissions at other frequencies, ranging from VHF (137 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp.
This half-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms.
Tutorial Learning Objectives
After attending this tutorial, participants should have an understanding of:
Prerequisites
Basic concepts of linear systems and electrical signals. Some understanding of random variables would be useful.
Presented by: Mihai Datcu (POLITEHNICA București)
Location: MC 3.3
Description
Climate change models describe phenomena at scales of thousands of kilometers and over many decades. However, adaptation measures must be applied at the scales of human activities, from 10 m to 1 km and from days to months. The scope of this tutorial is to promote the opportunities offered by the availability of Big EO Data, with a broad variety of sensing modalities, global coverage, and more than 40 years of observations, in synergy with the new resources of AI and quantum computing. The concept is in line with the “Destination Earth” initiative (DestinE), which promotes the use of Digital Twins (DTs) as actionable digital media in support of adaptation measures. DTs implement virtual, dynamic, continuously updated models of the world, enabling simulations while providing more specific, localized, and interactive information on climate change and how to deal with its impacts. They are tools for broad interaction with people, raising awareness and amplifying the use of existing climate data and knowledge services in the elaboration of local, specific adaptation measures. This is a step towards a citizen-driven approach with an increased societal focus. The tutorial introduces the concept of a federated, interactive system of DTs that provides, for the first time, an integrated view of how climate-change phenomena impact human activities and supports adaptation measures.
Tutorial Learning Objectives
Digital and sensing technologies, i.e. Big Data, are revolutionary developments massively impacting the Earth Observation (EO) domain, while Artificial Intelligence (AI) now provides the methods to valorize those Big Data. The presentation covers the major developments in hybrid, physics-aware AI paradigms, at the convergence of forward modelling, inverse problems, and machine learning, to discover causalities and make predictions that maximize the information extracted from EO and related non-EO data. The tutorial explains how to automate the entire chain from multi-sensor EO and non-EO data to the physical parameters required in applications, filling gaps and generating relevant, understandable layers of information. Today we are at the edge of a quantum revolution, impacting technologies in communication, computing, sensing, and metrology. Quantum computers and simulators are becoming ever more widely accessible and will certainly impact the EO domain. In this context, the tutorial lays the foundations of information processing from the perspective of quantum computing, algorithms, and sensing. The presentation will include an introduction to quantum information theory, quantum algorithms, and quantum computers, with first results analyzing the main perspectives for EO applications.
The tutorial will cover the following main topics:
Prerequisites
The tutorial is addressing MS, PhD students or scientists with background in EO and geosciences and elementary knowledge of ML/DNN methods.
Presented by: Ioannis Prapas (National Technical University of Athens & National Observatory of Athens), Spyros Kondylatos (National Technical University of Athens & National Observatory of Athens), Nikolaos-Ioannis Bountos (National Technical University of Athens & National Observatory of Athens), Maria Sdraka (National Technical University of Athens & National Observatory of Athens), and Ioannis Papoutsis (National Technical University of Athens & National Observatory of Athens)
Location: MC 3 Hall
Description
Deep Learning (DL) provides significant potential for advancing natural hazard management, yet its implementation poses certain challenges. First, the training of DL models requires the meticulous handling of big Earth Observation datasets. Second, the rare occurrence of natural hazards results in skewed distributions, limiting the availability of positive labeled examples and significantly hampering model training. Moreover, the demand for dependable, trustworthy, uncertainty-aware models for operational decision-making during such critical events further escalates the complexity of this endeavor. This tutorial aims to provide participants with practical tools and theoretical insights necessary to navigate and surmount these obstacles effectively. The primary objectives of this tutorial are:
All sessions will be hands-on and accompanied by Jupyter notebooks. The tutorial is organized as follows:
Tutorial Learning Objectives
- Offer hands-on experience with Python Jupyter notebooks focused on disaster management.
- Provide practical knowledge on handling and accessing spatiotemporal datacubes.
- Offer clear guidelines for addressing key challenges in Deep Learning for natural hazards.
- Build an understanding of the challenges in using DL for natural hazard management, including label and data scarcity, class imbalances, and noisy labels.
- Share best practices and actionable tips for spatio-temporal forecasting using Earth Observation data.
- Demonstrate real-life applications of DL, such as wildfire forecasting and flood mapping.
- Provide practical experience with advanced ML concepts, including self-supervised learning and Bayesian/uncertainty-aware models.
Prerequisites
We assume basic knowledge of Python and Deep Learning. Ideally, participants will have previous experience with a Deep Learning framework (e.g., PyTorch, TensorFlow). Attendees are required to bring a laptop and have a Google Colab account.
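On the class-imbalance challenge raised in the description: one standard mitigation, sketched below in framework-agnostic NumPy (an illustration of ours, not the tutorial's notebooks), is to weight each class inversely to its frequency so that rare hazard events contribute as much to the training loss as the abundant background class.

```python
import numpy as np

# Inverse-frequency ("balanced") class weights for a skewed label set.
# DL frameworks accept such weights in their loss functions
# (e.g. a weighted cross-entropy).
def inverse_frequency_weights(labels):
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    # Each class weight = total samples / (num classes * class count),
    # so rarer classes receive proportionally larger weights.
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# 95% background (0) vs 5% hazard (1): the hazard class gets weight 10.0,
# the background class ~0.53.
w = inverse_frequency_weights([0] * 95 + [1] * 5)
```

Combined with the uncertainty-aware models covered in the tutorial, such reweighting is one of the simplest levers against label scarcity.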