Tutorials

Tutorials Offered

Sun, 7 Jul, 09:00 - 12:00 Greece Time (UTC +2)
  • HD-1: Unlock the Power of Earth Surface Monitoring with TomoSAR Persistent Scatterer Processing
  • HD-2: Data-Efficient Deep Learning for Earth Observation
  • HD-4: Time Series Tutorial: Understanding Dynamics with Advanced Time-Series Processing Techniques
  • HD-12: Exploring Environmental Changes with EO: a hands-on journey from data analysis to scientific communication with ESA-NASA-JAXA EO Dashboard

Sun, 7 Jul, 10:30 - 19:00 Greece Time (UTC +2)
  • FD-1: SAR Polarimetry: A Tour from Physics to Applications
  • FD-2: Singular Spectrum Analysis: An Emerging Technique for Effective Feature Extraction and Denoising in Hyperspectral Image Remote Sensing
  • FD-3: Machine Learning in Remote Sensing - Theory and Applications for Earth Observation
  • FD-4: GRSS ESI/HDCRS Machine Learning Lifecycle in High Performance Computers and Cloud: A Focus on Geospatial Foundation Models

Sun, 7 Jul, 12:30 - 15:30 Greece Time (UTC +2)
  • HD-8: Earthly marvels revealed: Pangeo, AI, and Copernicus in action
  • HD-9: Mapping minerals with space-based imaging spectroscopy
  • HD-10: A Day at the OPERA: Discover how Analysis-Ready OPERA Data can Accelerate your Science
  • HD-11: Optical remote sensing image restoration

Sun, 7 Jul, 16:00 - 19:00 Greece Time (UTC +2)
  • HD-3: Electromagnetic scattering from the sea surface: basic theory and applications
  • HD-5: Remote Sensing with Reflected Global Navigation Satellite System (GNSS-R) and other Signals of Opportunity (SoOp)
  • HD-6: Physics Guided and Quantum Artificial Intelligence for Earth Observation: towards Digital Twin Earth for Climate Change Adaptation
  • HD-7: A Practical Session on Deep Learning Advances for Monitoring and Forecasting Natural Hazards

Sun, 7 Jul, 09:00 - 12:00 Greece Time (UTC +2)
Tutorial HD-1: Unlock the Power of Earth Surface Monitoring with TomoSAR Persistent Scatterer Processing

Presented by: Dinh HO TONG MINH (INRAE, France)

Location: MC 3.2

Description

Imagine being able to monitor the Earth's surface with incredible precision, revealing even the tiniest changes daily. Say goodbye to weather-dependent imaging: the European Space Agency's Copernicus Sentinel-1 SAR program does things differently. Using cutting-edge radar technology, it actively captures snapshots of our planet, day or night, and even through thick clouds. Its data feed Interferometric SAR, or InSAR, a revolutionary technique that changed the game in surface deformation monitoring and has become the go-to tool for understanding Earth's transformations. And now, you can harness its power with ease!

Discover the game-changing Persistent Scatterers and Distributed Scatterers (PSDS) InSAR and ComSAR algorithms, available as part of our open-source TomoSAR package (https://github.com/DinhHoTongMinh/TomoSAR). Don't be intimidated by the technical jargon; our tutorial is designed to make it accessible to everyone.

In this tutorial, we'll walk you through the incredible capabilities of the PSDS InSAR and ComSAR techniques using real-world Sentinel-1 images. No coding skills are required! We'll show you how to use user-friendly open-source software like ISCE, SNAP, TomoSAR, and StaMPS to achieve groundbreaking insights into Earth's surface movements.

Starting with a brief overview of the theory, our tutorial will guide you through applying Sentinel-1 SAR data and processing technology to identify and monitor ground deformation. In just half a day of training, you'll gain a solid understanding of radar interferometry and be able to produce time series of ground motion from a stack of SAR images.
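
To give a feel for what the interferogram step involves under the hood (skill 3 in the objectives below), here is a minimal NumPy sketch, assuming two already-coregistered single-look complex (SLC) images stored as complex arrays; the file names and the 5-pixel coherence window are illustrative assumptions, not part of the tutorial's actual ISCE/SNAP workflow:

    import numpy as np
    from scipy.ndimage import uniform_filter

    # Hypothetical inputs: two coregistered Sentinel-1 SLC tiles as complex arrays.
    reference = np.load("slc_reference.npy")   # shape (rows, cols), complex64
    secondary = np.load("slc_secondary.npy")

    # The interferogram is the reference times the complex conjugate of the
    # secondary: its phase encodes the path-length change between acquisitions.
    interferogram = reference * np.conj(secondary)
    phase = np.angle(interferogram)            # wrapped phase in (-pi, pi]

    # Coherence over a small window indicates how reliable the phase is.
    num = uniform_filter(interferogram.real, 5) + 1j * uniform_filter(interferogram.imag, 5)
    den = np.sqrt(uniform_filter(np.abs(reference) ** 2, 5)
                  * uniform_filter(np.abs(secondary) ** 2, 5))
    coherence = np.abs(num) / (den + 1e-12)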

Tutorial Learning Objectives

After just a half-day of training, participants will gain the following skills:

  1. Access SAR Data: You'll be able to effortlessly access SAR data, making it readily available for your analysis.
  2. Master InSAR Theory: Our expert guidance will help you grasp the intricacies of InSAR processing, breaking down the complex concepts into easily digestible knowledge.
  3. Interferogram Creation: You'll learn how to create interferograms, a key step in the process that reveals invaluable insights into the Earth's surface.
  4. Ground Motion Interpretation: With our guidance, you'll be able to interpret the ground motions unveiled by these interferograms, giving you the power to understand and analyze Earth surface changes.
  5. Time Series Extraction: We'll demystify the process of extracting ground motion time series from a stack of SAR images, empowering you to track and monitor surface movements over time.

Prerequisites

The software involved in this tutorial is open-source. For the InSAR processor, we will use ISCE/SNAP to process Sentinel-1 SAR images. We will then work with TomoSAR/StaMPS for time series processing. Google Earth will be used to visualize geospatial data. The tutorial is open to all who would love to unlock the power of radar for Earth surface monitoring.

Sun, 7 Jul, 09:00 - 12:00 Greece Time (UTC +2)
Tutorial HD-2: Data-Efficient Deep Learning for Earth Observation

Presented by: Michael Mommert (Stuttgart University of Applied Sciences, Germany), Joëlle Hanna (University of St. Gallen, Switzerland), Linus Scheibenreif (University of St. Gallen, Switzerland), Damian Borth (University of St. Gallen, Switzerland)

Location: MC 3 Hall

Description

Deep Learning methods have proven highly successful across a wide range of Earth Observation (EO)-related downstream tasks, such as image classification, image-based regression and semantic segmentation. Supervised learning of such tasks typically requires large amounts of labeled data, which oftentimes are expensive to acquire, especially for EO data. Recent advances in Deep Learning provide the means to drastically reduce the amount of labeled data needed to train models with a given performance and to improve the general performance of these models on a range of downstream tasks.

As part of this tutorial, we will introduce and showcase the use of three such approaches that strongly leverage the multi-modal nature of EO data: Data Fusion, Multi-task Learning and Self-supervised Learning. The fusion of multi-modal data may improve the performance of a model by providing additional information; the same applies to multi-task learning, which supports the model in generating richer latent representations of the data by means of learning different tasks. Self-supervised learning enables the learning of rich latent representations based on large amounts of unlabeled data, which are ubiquitous in EO, thereby improving the general performance of the model and reducing the amount of labeled data necessary to successfully learn a downstream task.

We will introduce the theoretical concepts behind these approaches and provide hands-on tutorials for the participants utilizing Jupyter Notebooks. Participants, who are required to have some basic knowledge in Deep Learning with PyTorch, will learn through realistic use cases how to apply these approaches in their own research for different data modalities (Sentinel-1, Sentinel-2, land-cover data, elevation data, seasonal data, weather data, etc.). Finally, the tutorial will provide the opportunity to discuss the participants’ use cases.
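
As a concrete illustration of the simplest of these ideas, early data fusion, here is a minimal PyTorch sketch that concatenates Sentinel-1 (2 channels) and Sentinel-2 (13 channels) patches along the channel dimension before a shared encoder; the channel counts, shapes, and class count are illustrative assumptions, not the tutorial's actual notebooks:

    import torch
    import torch.nn as nn

    class EarlyFusionNet(nn.Module):
        """Fuse multi-modal inputs by channel concatenation (early fusion)."""
        def __init__(self, s1_channels=2, s2_channels=13, num_classes=10):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(s1_channels + s2_channels, 64, 3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, num_classes)

        def forward(self, s1, s2):
            x = torch.cat([s1, s2], dim=1)   # fuse modalities along channels
            return self.head(self.encoder(x).flatten(1))

    # Illustrative batch: 4 patches of 64x64 pixels per modality.
    model = EarlyFusionNet()
    logits = model(torch.randn(4, 2, 64, 64), torch.randn(4, 13, 64, 64))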

Tutorial Learning Objectives

  • apply different data fusion techniques to multi-modal data (hands-on)
  • learn about multi-task learning and suitable downstream tasks
  • apply multi-task learning to multi-modal data
  • learn the principles of pretext task-based and contrastive self-supervised learning for multi-modal EO data
  • perform self-supervised pre-training on multi-modal data (hands-on, on a reduced dataset)
  • use pre-trained models for different EO-related downstream tasks (hands-on)

Prerequisites

Participants are required to have basic knowledge in Deep Learning and experience with the Python programming language and the PyTorch framework for Deep Learning. Participation in the hands-on tutorials will require a Google account to access Google Colab. Alternatively, participants can run the provided Jupyter Notebooks locally on their laptops.

Sun, 7 Jul, 09:00 - 12:00 Greece Time (UTC +2)
Tutorial HD-4: Time Series Tutorial: Understanding Dynamics with Advanced Time-Series Processing Techniques

Presented by: Charlotte Pelletier (Institute for Research in IT and Random Systems (IRISA), Vannes, France), Marc Rußwurm (Wageningen University, Netherlands), Dainius Masiliūnas (Wageningen University, Netherlands), Jan Verbesselt (Belgian Science Policy Office, Brussels)

Location: MC 3.4

Description

During the last decades, the number and characteristics of imaging sensors onboard satellites have constantly increased and evolved, allowing access (often free of charge) to a large amount of Earth Observation data. Recent satellites, such as Sentinel-2 or PlanetScope Doves, frequently revisit the same regions at weekly or even daily temporal resolution. The data cubes acquired by these modern sensors, referred to as satellite image time series (SITS), make it possible to precisely and continuously monitor landscape dynamics for various applications such as land monitoring, natural resource management, and climate change studies. In these applications, transforming sequences of satellite images into meaningful information relies on precise analysis and understanding of the temporal dynamics present in SITS.

While a growing remote sensing research community studies these inherently multi-temporal aspects of our Planet, the underlying processing techniques need to be presented to a broader community. Concretely, we aim to reduce this gap in this half-day tutorial by presenting and reflecting on the breadth of methodologies and applications that require time-series data. After a brief introduction to time series, the first section will focus on time-series segmentation, whereas the second section will be devoted to deep learning techniques that exploit the temporal structure of the data. In practice, the theoretical concepts will be complemented with one hands-on practical session using Google Colab notebooks in R or Python/PyTorch.
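
To make the deep-learning part concrete, here is a minimal PyTorch sketch of a 1D temporal convolutional classifier operating on per-pixel time series, in the spirit of the architectures covered; the band count, sequence length, and class count are illustrative assumptions:

    import torch
    import torch.nn as nn

    class TempCNN(nn.Module):
        """Minimal 1D-CNN over the time axis of a satellite image time series."""
        def __init__(self, n_bands=10, n_classes=8):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(n_bands, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),   # pool over the time dimension
            )
            self.head = nn.Linear(64, n_classes)

        def forward(self, x):              # x: (batch, n_bands, n_timesteps)
            return self.head(self.conv(x).flatten(1))

    model = TempCNN()
    logits = model(torch.randn(32, 10, 45))   # 32 pixels, 10 bands, 45 dates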

The tentative schedule below will be adapted to be consistent with the scheduled breaks.

Part 1: Introduction to time-series analysis
Part 2: Time-series segmentation and break detection
Morning coffee break
Part 3: Deep learning techniques for satellite image time series
Part 4: Practical session
Part 5: Closing remarks

Tutorial Learning Objectives

This tutorial covers time-series analysis with a broad scope, including both traditional methods and advanced deep learning techniques, with a specific focus on Earth Observation. It aims to provide a theoretical basis for understanding various time-series concepts. The practical session will also allow the participants to apply the presented techniques with hands-on code in Google Colab notebooks.

The learning objectives are multiple. We expect that a participant, after following this tutorial, will:

  • know the general basics of time-series analysis,
  • apply BFAST algorithms for detecting breaks in SITS,
  • understand the intrinsic mechanisms within temporal neural networks,
  • be able to use and train deep-learning architectures in PyTorch.

Prerequisites

For practical sessions, we expect the participants to bring their laptops and have a (free) Google account to access the Colab notebooks. Knowledge of R and Python programming will be helpful to follow the provided Colab notebooks; experience in deep learning is not required.

Sun, 7 Jul, 09:00 - 12:00 Greece Time (UTC +2)
Tutorial HD-12: Exploring Environmental Changes with EO: a hands-on journey from data analysis to scientific communication with ESA-NASA-JAXA EO Dashboard

Presented by: Manil Maskey (NASA), Anca Anghelea (ESA), Shinichi Sobue (JAXA), Naoko Sugita (JAXA)

Location: MC 3.3

Description

In this tutorial, participants will learn how to access and work with EO mission data from NASA, ESA and JAXA to extract socially significant insights on environmental changes, and how to use open-source tools to seamlessly weave their findings into compelling narratives featured on the EO Dashboard. They will learn from example stories developed by NASA, ESA and JAXA scientists and create their own.

Since 2020, NASA, ESA, and JAXA have jointly developed a collaborative open-access platform – the Earth Observing Dashboard – consolidating data, computing power, and analytics services to communicate environmental changes observed in tri-agency satellite mission data. The EO Dashboard, an open-source tool, enables users worldwide to explore, analyze, visualize and learn about EO-based indicators and scientific insights covering an evolving range of thematic domains: atmosphere, oceans, biomass, cryosphere, agriculture, economy, and COVID-19. It also serves as an educational tool, providing an intuitive web-based application for users to discover diverse datasets, browse societally relevant narratives supported by such data, and understand the scientific work behind them by means of reproducible Jupyter Notebooks.

The EO Dashboard is now opening up to community-contributed narratives on environmental change and climate, supported by tri-agency data. To this end, users are provided with sponsored access to cloud platforms with managed coding environments (Jupyter Hub), EO data, and storytelling tools to conduct scientific analyses, extract insights and package their findings in compelling stories that could be featured on the EO Dashboard.

Tutorial Learning Objectives

The tutorial offers a hands-on experience with the EO Dashboard's features, guided by the EO Dashboard team. Participants engage in practical exercises using the Jupyter Environment (EOxHub), learning methods such as leveraging NASA’s eoAPI and JAXA’s web services, accessing and analyzing Copernicus Sentinel datasets, performing analytics on multiple datasets, extracting insights from EO Data and preparing dashboard-ready stories.
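
To give a flavor of this kind of programmatic data access, here is a minimal sketch using the open-source pystac_client library to search a STAC catalog for Sentinel-2 scenes; the endpoint URL and collection name are assumptions for illustration, not necessarily the services used in the tutorial:

    from pystac_client import Client

    # Hypothetical public STAC endpoint; the tutorial itself uses the EO
    # Dashboard's own sponsored services (EOxHub, eoAPI, JAXA web services).
    catalog = Client.open("https://earth-search.aws.element84.com/v1")

    search = catalog.search(
        collections=["sentinel-2-l2a"],        # assumed collection id
        bbox=[23.5, 37.8, 24.1, 38.2],         # around Athens, Greece
        datetime="2024-06-01/2024-06-30",
        max_items=10,
    )
    for item in search.items():
        print(item.id, item.properties.get("eo:cloud_cover"))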

Prerequisites

The tutorial is self-contained: participants will be guided and provided with the necessary tools and support for the hands-on exercises. Participants are expected to have basic knowledge of Earth Observation, EO image analysis, statistics, and Python.

Sun, 7 Jul, 10:30 - 19:00 Greece Time (UTC +2)
Tutorial FD-1: SAR Polarimetry: A Tour from Physics to Applications

Presented by: Carlos López-Martínez (Universitat Politècnica de Catalunya (UPC), Spain) and Avik Bhattacharya (Indian Institute of Technology Bombay, India)

Location: Conference Hall 2

Description

Polarimetric Synthetic Aperture Radar (PolSAR) has emerged as a valuable enhancement to traditional single-channel Synthetic Aperture Radar (SAR) applications. The inherent polarization characteristics of PolSAR data significantly contribute to its superiority over conventional SAR images, providing diverse information for various applications. PolSAR data utilization extends beyond quad-pol or full polarimetry to coherent dual-pol and compact-pol modes, offering a notable advantage by incorporating polarimetric information without compromising crucial image parameters like resolution, swath, and signal-to-noise ratio (SNR). In agriculture, PolSAR images are indispensable for precisely monitoring crop conditions, types, and health. Analyzing polarization responses aids farmers and agricultural experts in gaining valuable insights into crop growth stages and health, and in detecting diseases or pest infestations.

Environmental monitoring and management benefit significantly from PolSAR data, crucial for mapping and assessing natural resources, accurate land cover classification, and distinguishing surfaces like forests, wetlands, and urban areas. PolSAR's applications extend into forestry, assisting in forest structure assessment, biomass estimation, and deforestation monitoring. Furthermore, PolSAR images play a crucial role in disaster management, evaluating areas impacted by floods, earthquakes, or landslides. The polarimetric information enables the differentiation of various terrain types and the identification of disaster-prone areas, facilitating timely and targeted response efforts. The versatility of PolSAR data continues to unveil new possibilities across multiple domains, enhancing our ability to observe and understand the Earth's dynamic processes.

Recognizing the paramount importance of PolSAR data, dedicated satellite sensors such as SAOCOM (CONAE), RCM (CSA&MDA), RISAT2 (ISRO), ALOS-2 (JAXA), BIOMASS & ROSE-L (ESA), and NISAR (NASA&ISRO) have been meticulously designed. The upcoming ESA BIOMASS mission is poised to mark a groundbreaking milestone as the first operational use of quad-pol-only data from space. Despite the myriad benefits offered by PolSAR over single-pol SAR, harnessing its information requires robust methodologies. Ongoing global research endeavors are focused on developing innovative retrieval algorithms for PolSAR, emphasizing the importance of a strong foundation in polarimetric theory to avoid misapplications that may yield erroneous results.

This tutorial aims to provide a comprehensive overview of the underlying physics and statistics, and is dedicated to engaging students and professionals, providing the vital knowledge needed to harness data from current and upcoming PolSAR missions for scientific research and societal applications.

Tutorial Learning Objectives

This tutorial aims at providing an overview of the potential of polarimetry and polarimetric SAR data in the different forms in which they are available to final users. It spans three main aspects of this kind of data: their physical, mathematical, and statistical properties. The description starts with fully polarimetric SAR data and also devotes special attention to dual-pol and compact-pol formats. The tutorial discusses some of the most relevant applications of these data, highlighting the uniqueness of the information they provide. We include freely available data sources and discuss the primary abilities and limitations of free and open-source software. The tutorial concludes with a discussion of the future of SAR Polarimetry for remote sensing and Earth observation and the role of these data in artificial intelligence and machine learning.
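
For orientation, one of the most basic full-pol products discussed in such a course is the Pauli RGB composite; here is a minimal NumPy sketch, assuming the complex scattering-matrix channels have already been loaded as coregistered arrays (the file names are hypothetical):

    import numpy as np

    # Hypothetical coregistered complex channels of the scattering matrix S.
    hh = np.load("S_hh.npy")
    vv = np.load("S_vv.npy")
    hv = np.load("S_hv.npy")

    # Pauli basis: |HH+VV| (surface), |HH-VV| (double bounce), 2|HV| (volume).
    pauli = np.stack([
        np.abs(hh - vv),        # red: double-bounce scattering
        2.0 * np.abs(hv),       # green: volume scattering
        np.abs(hh + vv),        # blue: surface scattering
    ], axis=-1) / np.sqrt(2.0)

    # Simple per-channel percentile stretch for display.
    rgb = np.clip(pauli / np.percentile(pauli, 99, axis=(0, 1)), 0.0, 1.0)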

Prerequisites

This lecture is intended for scientists, engineers and students engaged in the fields of Radar Remote Sensing and interested in Polarimetric SAR image analysis and applications. Some background in SAR processing techniques and microwave scattering would be an advantage, and familiarity with matrix algebra is required.

Sun, 7 Jul, 10:30 - 19:00 Greece Time (UTC +2)
Tutorial FD-2: Singular Spectrum Analysis: An Emerging Technique for Effective Feature Extraction and Denoising in Hyperspectral Image Remote Sensing

Presented by: Jinchang Ren (Robert Gordon University, Aberdeen, UK), Genyun Sun (China University of Petroleum (East China), Qingdao, China), Hang Fu (China University of Petroleum (East China), Qingdao, China), Ping Ma (Robert Gordon University, Aberdeen, UK) and Jaime Zabalza (University of Strathclyde, Glasgow, UK)

Location: Conference Hall 4

Description

As an emerging technique, hyperspectral imaging (HSI) has attracted increasing attention in remote sensing, offering uniquely high-resolution spectral data across two-dimensional images and unlocking powerful capabilities in various fields, including remote sensing and earth observation. Its broad spectral coverage, spanning from visible light to near-infrared wavelengths, enables the detection of subtle distinctions within scenes for inspection of land, ocean, cities and beyond. In addition, HSI is also at the forefront of emerging laboratory-based data analysis applications, including those related to food quality assessment, medical diagnostics, and the verification of counterfeit goods and documents.

However, the utilization of hyperspectral data presents its own set of challenges. Firstly, due to factors such as atmospheric interference and sensor limitations, HSI data often contend with noise, which can compromise the ability to distinguish objects within a scene effectively. Additionally, HSI data exhibit high dimensionality, necessitating substantial computational resources for processing and analysis. Moreover, obtaining a sufficient quantity of accurate ground truth data in practical scenarios can pose a formidable obstacle.

Singular Spectrum Analysis (SSA), a recent and versatile tool for analysing time-series data, has proven to be an effective approach for denoising and feature extraction of HSI data. This comprehensive tutorial aims to serve as an all-encompassing resource on SSA techniques and their myriad applications in the realm of HSI data analytics. We commence with the fundamental principles of SSA and progress to cutting-edge SSA variants, including 1D-SSA, 2D-SSA, 3D-SSA, adaptive solutions, transform-domain extensions and fast implementations, each tailored to address a diverse array of challenges, including dimensionality reduction, noise mitigation, feature extraction, object identification, data classification, and change detection within hyperspectral remote sensing.

Our overarching objective is to engage researchers and geoscientists actively involved in HSI remote sensing to learn the technique of SSA. We seek to empower them with the advanced SSA knowledge and techniques necessary to tackle complex real-world problems. In particular, this tutorial will emphasize practical applications, offering insights into working with various types of hyperspectral datasets and tackling different remote sensing tasks.
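
To give a sense of the core algorithm before the tutorial itself, here is a minimal NumPy sketch of basic 1D-SSA applied to a single pixel spectrum: embed the series in a Hankel trajectory matrix, truncate its SVD, and reconstruct by diagonal averaging. The window length and rank are illustrative choices, not recommended settings:

    import numpy as np

    def ssa_denoise(x, window=20, rank=3):
        """Basic 1D-SSA: keep the first `rank` singular components of x."""
        n = len(x)
        k = n - window + 1
        # Embedding: Hankel trajectory matrix, one lagged copy of x per column.
        traj = np.column_stack([x[i:i + window] for i in range(k)])
        u, s, vt = np.linalg.svd(traj, full_matrices=False)
        # Rank truncation keeps the dominant (smooth) components.
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        # Diagonal averaging (Hankelization) maps the matrix back to a series.
        out = np.zeros(n)
        counts = np.zeros(n)
        for j in range(k):
            out[j:j + window] += approx[:, j]
            counts[j:j + window] += 1
        return out / counts

    spectrum = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.randn(200)
    denoised = ssa_denoise(spectrum)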

Tutorial Learning Objectives

The learning objectives of this tutorial are to ensure that participants will leave with a deep understanding of SSA and its variants, and their practical applications in the context of hyperspectral remote sensing. Specifically, the objectives include:

  • Understand the fundamental principles of SSA and its relevance in analysing signals and hyperspectral data.
  • Gain an understanding of SSA variants in HSI.
  • Examine case studies showcasing successful applications of SSA variants and learn best practices for implementing these techniques in research and analysis.
  • Gain insights into emerging trends and challenges in the field of HSI and how SSA variants are evolving to address them.
  • Engage in hands-on exercises and demonstrations using real hyperspectral datasets to apply SSA variants to real-world remote sensing scenarios (optional, depending on the settings).

Prerequisites

Basic concepts of remote sensing, hyperspectral imaging and image processing, as well as some experience in hyperspectral remote sensing applications, would be beneficial though not a must.

Sun, 7 Jul, 10:30 - 19:00 Greece Time (UTC +2)
Tutorial FD-3: Machine Learning in Remote Sensing - Theory and Applications for Earth Observation

Presented by: Ronny Hänsch (German Aerospace Center (DLR)), Devis Tuia (Ecole Polytechnique Fédérale de Lausanne (EPFL Valais)), Claudio Persello (University of Twente)

Location: Conference Hall 1

Description

Following the success of Deep Learning in Remote Sensing over the last years, new topics have arisen that address remaining challenges of particular importance in Earth Observation (EO) applications, including interpretability and uncertainty quantification, domain shifts, label scarcity, and the integration of EO imagery with other modalities such as language. The aim of this tutorial is threefold: first, to provide insights into and a deep understanding of the algorithmic principles behind state-of-the-art machine learning approaches; second, to discuss modern approaches to tackle label scarcity, interpretability, and multi-modality; third, to illustrate the benefits and limitations of machine learning with practical examples, in particular in the context of the sustainable development goals (SDGs).

Tutorial Learning Objectives

  • to introduce sophisticated machine learning methods
  • to show applications of ML/DL methods in large-scale scenarios
  • to inform about common mistakes and how to avoid them
  • to provide recommendations for increased performance

Prerequisites

Suitable for PhD students, research engineers, and scientists. Basic knowledge of machine learning is required.

Sun, 7 Jul, 10:30 - 19:00 Greece Time (UTC +2)
Tutorial FD-4: GRSS ESI/HDCRS Machine Learning Lifecycle in High Performance Computers and Cloud: A Focus on Geospatial Foundation Models

Presented by: Gabriele Cavallaro (Forschungszentrum Jülich), Rocco Sedona (Forschungszentrum Jülich), Manil Maskey (NASA), Iksha Gurung (University of Alabama in Huntsville), Sujit Roy (NASA), Muthukumaran Ramasubramanian (NASA)

Location: Conference Hall 3

Description

Recent advancements in Remote Sensing (RS) technologies, marked by increased spectral, spatial, and temporal resolution, have led to a significant rise in data volumes and variety. This trend poses notable challenges in efficiently processing and analyzing data to support Earth Observation (EO) applications in operational scenarios. Concurrently, the evolution of Machine Learning (ML) and Deep Learning (DL) methodologies, particularly deep neural networks with extensive tunable parameters, has necessitated the development of parallel algorithms that ensure high scalability performance. As a result, data-intensive computing methodologies have become crucial in tackling the challenges in geoscience and RS domains.

In recent years, we have witnessed a swift progression in High-Performance Computing (HPC) and cloud computing, impacting both hardware architectures and software development. Big tech companies have been focusing on AI supercomputers and cloud solutions, indicating a shift in computing technologies beyond traditional scientific computing, which was mainly driven by large governments.

This era has seen a surge in transformers and self-supervised learning, leading to the development of Foundation Models (FMs). For example, Large Language Models (LLMs) have become foundational in natural language processing. These models have revolutionized how we interact with and analyze text-based data, demonstrating remarkable capabilities. FMs are also becoming popular among EO and RS researchers because of their potential, as they leverage self-supervised and multimodal learning to extract intricate patterns from diverse data sources, addressing the challenges of limited labeled EO data and the disparities between conventional computer vision problems and EO-based tasks. Harnessing HPC and cloud computing is necessary for these models, as they require extensive computational power for training, ensuring that the benefits of FMs are fully realized and accessible within the resource-intensive RS domain. FMs also allow for effective downstream use case adaptation using fine-tuning. FM fine-tuning reduces the amount of data and time required to achieve similar or higher levels of accuracy for a downstream use case compared to other ML and DL techniques.

Tutorial Learning Objectives

Hybrid approaches are necessary, in which pre-training or fine-tuning of Foundation Models (FMs) is performed on High-Performance Computing (HPC) systems, while inference on new data is conducted in a cloud computing environment. This strategy reduces the costs associated with training and with optimizing real-time inference. Furthermore, it facilitates the transition of a research FM from an HPC environment to an operational cloud service.

The initial segment of the tutorial will concentrate on theories addressing the most recent advancements in High-Performance Computing (HPC) systems and Cloud Computing services, along with the basics of Foundation Models (FMs). Attendees will acquire knowledge on how HPC and parallelization techniques facilitate the development and training of large-scale FMs, as well as their optimal fine-tuning for specific downstream applications. Additionally, they will explore how Machine Learning (ML) and FM models are deployed into Cloud infrastructure for widespread public use and consumption.

For the practical sections of the tutorial, participants will be provided with credentials to access the HPC systems at the Jülich Supercomputing Centre (Forschungszentrum Jülich, Germany) and AWS Cloud Computing resources. To save setup time during the tutorial, such as initializing environments from scratch and installing packages, the course organizers will prepare the necessary resources and tools in advance. This preparation allows participants to immediately start working on the exercises using pre-implemented algorithms and datasets. Additionally, attendees are welcome to bring their own applications and data for fine-tuning. The course will guide participants through the lifecycle of a Machine Learning (ML) project, focusing on fine-tuning a Foundation Model (FM) and optimizing it for an Earth Observation (EO) downstream use case. They will learn how to employ HPC distributed Deep Learning (DL) frameworks to accelerate training efficiently. The discussion will also cover the use of data standards and tools for loading geospatial data and enhancing training efficiency. Finally, participants will leverage cloud computing resources to develop a pipeline that deploys the model into a production environment and evaluates it using new and real-time data.
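
To make the distributed-training part tangible, here is a minimal PyTorch DistributedDataParallel (DDP) skeleton of the kind typically launched with torchrun on a GPU node; the model, data, and hyperparameters are placeholders, not the course's actual foundation-model code:

    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        # Placeholder model; a real run would load a pre-trained FM here.
        model = DDP(torch.nn.Linear(1024, 10).cuda(), device_ids=[local_rank])
        opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(100):                  # placeholder fine-tuning loop
            x = torch.randn(32, 1024, device="cuda")
            y = torch.randint(0, 10, (32,), device="cuda")
            loss = torch.nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()                      # DDP all-reduces the gradients
            opt.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()   # e.g.: torchrun --nproc_per_node=4 finetune.py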

Preliminary Agenda

Morning Session
  • Lecture 1: Introduction and Motivations
  • Lecture 2: Levels of Parallelism, High Performance Computing, and Foundation Models
  • Coffee Break
  • Lecture 3: Fine-tune Foundation Model in High Performance Computing

Afternoon Session
  • Lecture 4: Foundation Model in Cloud for Live Inferencing
  • Coffee Break
  • More time for hands-on, Q&A and wrap-up

Prerequisites

This tutorial is designed for individuals interested in learning about the integration of High-Performance Computing (HPC) and cloud computing for optimizing Foundation Models (FMs). While we recommend participants have a background in several key areas, such as machine learning, programming, and cloud computing, we welcome individuals who may only have experience in a few of these domains. The tutorial is structured to provide all the necessary information, ensuring that even those with limited experience in certain areas can fully engage and benefit from the comprehensive content offered.

  • Basic Understanding of Machine Learning and Deep Learning: Familiarity with the fundamental concepts of ML and DL, including neural networks, training algorithms, and loss functions.
  • Foundation Models Knowledge: An introductory level understanding of Foundation Models, their purposes, and applications.
  • Programming Skills: Proficiency in programming, preferably in Python, as it is commonly used for ML and DL projects.
  • Familiarity with High-Performance Computing (HPC): A basic grasp of HPC concepts, including parallel computing and distributed systems.
  • Experience with Cloud Computing Platforms: Basic knowledge of cloud services, particularly AWS, and how to utilize cloud resources for computing and storage.
  • Understanding of Data Standards for ML: Awareness of data standards, especially for geospatial data, if applicable to the participant's interests.
  • Preparation to Engage in Hands-on Exercises: Willingness to work directly with pre-implemented algorithms and datasets, and optionally, to bring personal applications and data for fine-tuning.
  • Interest in ML Project Lifecycle: An eagerness to learn about the entire lifecycle of an ML project, from development and training to deployment and real-time data evaluation.
  • Familiarity with Git/GitHub: A basic understanding of version control using Git, including how to clone repositories, commit changes, and navigate GitHub for code sharing and collaboration.
  • Each participant is required to provide their own laptop, equipped with either a Windows, Mac, or Linux operating system.

Sun, 7 Jul, 12:30 - 15:30 Greece Time (UTC +2)
Tutorial HD-8: Earthly marvels revealed: Pangeo, AI, and Copernicus in action

Presented by: Anne Fouilloux (Simula Research Laboratory, Norway), Tina Odaka (IFREMER, France), Jean-Marc Delouis (CNRS, France), Alejandro Coca-Castro (The Alan Turing Institute, UK), Pier Lorenzo Marasco (Provare LTD, UK), Armagan Karatosun (ECMWF, Germany), Mohanad Albughdadi (ECMWF, Germany), Vasileios Baousis (ECMWF, UK)

Location: MC 3.4

Description

In this tutorial, participants will learn how to 1) navigate the Pangeo ecosystem for scalable Earth Science workflows and 2) exploit Earth Observation (EO) data, in particular from Copernicus, with Artificial Intelligence (AI) using open and reproducible tools and methodologies from the Horizon Europe EO4EU project, the Pangeo community, and other open-source projects that leverage the Pangeo ecosystem. Participants will gain practical experience in applying AI techniques to Copernicus datasets through hands-on sessions, using Pangeo ML packages such as xbatcher and zen3geo, as well as other advanced packages for handling EO data built on the Pangeo stack for ML/AI, such as DeepSensor. Participants will also be introduced to some of the computer vision foundation models hosted on the EO4EU platform, learn how to prepare Earth observation data, prompt these models to perform segmentation and object detection tasks, and visualise the obtained results using visualisation and GIS tools.

By the end of this tutorial, participants will possess the skills and knowledge needed to harness the power of AI for transformative EO applications using the Pangeo ML ecosystem and the EO4EU platform. All the training material will be collaboratively developed and made available online under a CC-BY-4.0 licence. To facilitate user on-boarding, the Pangeo@EOSC platform will be made available to participants; however, all the information needed to set up and run the training material on different platforms will be provided too. The tutorial provides a comprehensive introduction along with hands-on examples to help participants understand how these technologies can be used for Earth science data analysis and interpretation.
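
For a first taste of the stack, here is a minimal sketch of the lazy Xarray/Dask pattern at the heart of Pangeo workflows; the Zarr store path and band names are hypothetical placeholders:

    import xarray as xr

    # Hypothetical cloud-hosted Zarr data cube; open_zarr returns lazy Dask arrays.
    ds = xr.open_zarr("s3://example-bucket/sentinel2-cube.zarr")

    # Lazy, chunked computation: nothing is read until .compute() is called.
    monthly_ndvi = (
        ((ds["B08"] - ds["B04"]) / (ds["B08"] + ds["B04"]))
        .resample(time="1MS")
        .mean()
    )
    result = monthly_ndvi.compute()   # Dask executes the task graph in parallel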

Tutorial Learning Objectives

By the end of this tutorial, learners will be able to:

  1. Understand the Pangeo ecosystem.
  2. Access, load, and analyse data using Xarray, visualise data with Hvplot, and scale ML workflows with Dask.
  3. Exploit and combine Pangeo tools, methodologies and services to create complex and efficient EO workflows.
  4. Learn about the EO4EU platform.
  5. Gain hands-on experience with computer vision foundation models.
  6. Use the EO4EU Knowledge Graph tools to discover and use EO data.

Prerequisites

Before starting this tutorial, learners should have:

  • Basic knowledge of Python or another programming language;
  • Basic knowledge of geospatial data structures;
  • Basic knowledge of Earth Observation concepts, such as the Copernicus offer and structure;
  • Prior exposure to AI concepts and tools (recommended).

Sun, 7 Jul, 12:30 - 15:30 Greece Time (UTC +2)
Tutorial HD-9: Mapping minerals with space-based imaging spectroscopy

Presented by: Brianna Lind (NASA LP DAAC), Dana K. Chadwick (NASA JPL), David R Thompson (NASA JPL), Philip Brodrick (NASA JPL), Christiana Ade (NASA JPL), Erik Bolch (NASA LP DAAC)

Location: MC 3 Hall

Description

The Earth Surface Mineral Dust Source Investigation (EMIT) instrument aboard the International Space Station (ISS) measures visible to short-wave infrared (VSWIR) wavelengths and can be used to map Earth’s surface mineralogy in detail. Here we explore the science behind the EMIT mineralogy products and apply them in a repeatable scientific workflow. We will introduce imaging spectroscopy concepts and sensor specific considerations for exploring variation in surface mineralogy. Participants will learn the basics of VSWIR imaging spectroscopy, how minerals are identified and band depths are calculated, and how band depths are translated into mineral abundances. Participants will also learn how to find, access, and apply EMIT mineralogical data using open source resources.

Tutorial Learning Objectives

In this tutorial, we will explain some of the nuances of the spectral library and methods used for mineral identification, show how to orthorectify the data, explain how to interpret band depth, aggregate the targets identified by the classification into the 10 EMIT minerals related to surface dust, and translate band depth into spectral abundance.

The EMIT Level 2B Estimated Mineral Identification and Band Depth and Uncertainty (EMITL2BMIN) Version 1 data product provides estimated mineral identification and band depths in a spatially raw, non-orthocorrected format. Mineral identification is performed on two spectral groups, which correspond to different regions of the spectra but often co-occur on the landscape. These estimates are generated using the Tetracorder system (code) and are based on EMITL2ARFL reflectance values. The EMIT_L2B_MINUNCERT file provides band depth uncertainty estimates calculated using surface Reflectance Uncertainty values from the EMITL2ARFL data product. The band depth uncertainties are presented as standard deviations. The fit score for each mineral identification is also provided as the coefficient of determination (r²) of the match between the continuum-normalized library reference and the continuum-normalized observed spectrum. Associated metadata indicate the name and reference information for each identified mineral; additional information about aggregating minerals into different categories is available in the emit-sds-l2b repository and will be available in subsequent data products.
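
To illustrate the band-depth concept itself, here is a minimal NumPy sketch that computes the continuum-removed band depth of a single absorption feature in one reflectance spectrum; the shoulder wavelengths are illustrative values, not the Tetracorder feature definitions:

    import numpy as np

    def band_depth(wavelengths, reflectance, left, right):
        """Depth of an absorption feature below a linear continuum."""
        i_l = np.argmin(np.abs(wavelengths - left))
        i_r = np.argmin(np.abs(wavelengths - right))
        # Linear continuum between the two shoulders of the feature.
        continuum = np.interp(
            wavelengths[i_l:i_r + 1],
            [wavelengths[i_l], wavelengths[i_r]],
            [reflectance[i_l], reflectance[i_r]],
        )
        removed = reflectance[i_l:i_r + 1] / continuum   # continuum removal
        return 1.0 - removed.min()                       # depth at the minimum

    # Illustrative use: a 2.2-micron clay feature with assumed shoulders.
    # depth = band_depth(wavelengths_um, reflectance, left=2.12, right=2.27)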

Prerequisites

The prerequisites for this tutorial include a basic familiarity with remote sensing and Python, an Earthdata Login account, and a GitHub account. All participants need to bring their laptop on the day of the event.

Sun, 7 Jul, 12:30 - 15:30 Greece Time (UTC +2)
Tutorial HD-10: A Day at the OPERA: Discover how Analysis-Ready OPERA Data can Accelerate your Science

Presented by: Franz J. Meyer (Alaska Satellite Facility), Heidi Kristenson (Alaska Satellite Facility), Joseph H. Kennedy (Alaska Satellite Facility), Gregory Short (Alaska Satellite Facility), Alexander Handwerger (Jet Propulsion Laboratory)

Location: MC 3.2

Description

Managed by the Jet Propulsion Laboratory (JPL), the Observational Products for End-Users from Remote Sensing Analysis (OPERA; https://www.jpl.nasa.gov/go/opera) project recently released two products derived from Sentinel-1 (S1) SAR that can accelerate your path to scientific discovery: Radiometric Terrain Corrected (RTC) and Coregistered Geocoded Single Look Complex (CSLC) products. Both are available through the NASA Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) for immediate use.

The RTC-S1 products provide terrain-corrected burst-based Sentinel-1 backscatter at 30-m pixel spacing. Delivered in GeoTIFF format, they are available for all S1 data acquired over land (excluding Antarctica) after October 2023. The OPERA CSLC-S1 products are generated over North America and U.S. Territories, going back to the start of the S1 mission. They are burst-based, fully geocoded SLCs, precisely aligned to a common grid to enable out-of-the-box InSAR processing.

This tutorial will first summarize the properties of these OPERA products, including their data formats and burst-based definition. Attendees will be introduced to a range of data discovery and analysis tools developed by ASF to make working with OPERA data easy. Attendees will practice discovering and accessing OPERA RTC and CSLC data using ASF’s interactive discovery interface, Vertex. We will also demonstrate programmatic access patterns using the open-source asf_search Python search module.
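
As a taste of that programmatic route, here is a minimal asf_search sketch; the dataset and processing-level identifiers are assumptions to be checked against the asf_search documentation, and the Earthdata credentials are placeholders:

    import asf_search as asf

    # Search for OPERA RTC-S1 products over a point of interest (WKT geometry).
    results = asf.search(
        dataset="OPERA-S1",                 # assumed dataset identifier
        processingLevel="RTC",              # assumed processing level
        intersectsWith="POINT(25.37 40.94)",
        start="2024-01-01",
        end="2024-02-01",
        maxResults=25,
    )
    print(f"{len(results)} products found")

    # Download requires an Earthdata Login session (placeholder credentials).
    session = asf.ASFSession().auth_with_creds("username", "password")
    results.download(path="./opera_rtc", session=session)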

In addition to these traditional tools, attendees will be introduced to selected new distribution and analysis mechanisms available for OPERA: For RTC-S1 data, ASF publishes image services that allow users to interact with OPERA RTCs in web maps or a desktop GIS environment. Attendees will also use ASF services such as mosaicking and subsetting, available for OPERA data through dedicated Python resources. Finally, we will demonstrate OPERA time-series analysis workflows using ASF’s cloud-hosted OpenScienceLab JupyterHub platform.

Tutorial Learning Objectives

  • Understand the data formats, coverage, properties, and applications of OPERA RTC-S1 and CSLC data.
  • Understand how to discover OPERA data using ASF Vertex and the asf_search Python Package.
  • Learn how to use ASF’s Earthdata GIS image services for OPERA RTC-S1 data.
  • Learn how to mosaic and subset OPERA data products using Python tools.
  • Work with OPERA data in the cloud using Jupyter Notebooks in OpenScienceLab.

Prerequisites

To follow this tutorial, make sure you have signed up for an Earthdata Login at https://urs.earthdata.nasa.gov/. Attendees will need to bring a laptop able to connect to the internet in order to participate in the hands-on tutorials. Some basic command line and Python skills are helpful for a subset of the demonstrations. We will provide a list of resources referenced during the tutorial. No preparation prior to the course is required.

Sun, 7 Jul, 12:30 - 15:30 Greece Time (UTC +2)
Tutorial HD-11: Optical remote sensing image restoration

Presented by: Daniele Picone (Univ. Grenoble Alpes, CNRS, Grenoble INP, France) and Mauro Dalla Mura (Univ. Grenoble Alpes, CNRS, Grenoble INP and Institut Universitaire de France (IUF), France)

Location: MC 3.3

Description

The evolution of passive optical remote sensing imaging sensors has significantly expanded the ability to acquire high-resolution imagery for various applications, including environmental monitoring, agriculture, urban planning, and disaster management. Common acquisitions are multispectral and hyperspectral images, providing information on a scene in the visible and infrared domain with spectral channels ranging from a few to hundreds. However, the acquired images often suffer from distortions, noise, and other artifacts, requiring robust image restoration techniques to enhance the spatial and spectral quality of the data. Some examples of these problems include denoising, deblurring, inpainting, destriping, demosaicing, and super-resolution (e.g., pansharpening).

Image restoration in optical remote sensing is a challenging task constantly gathering attention from the community as shown by the large number of techniques that have been proposed in the literature.

Early methods addressed image restoration problems empirically, by developing ad-hoc strategies to solve the problem at hand. Model-based techniques cast image restoration as an inverse problem, where the desired product is obtained through Bayesian inference and approached as a variational problem. Recently, data-driven approaches based on deep learning have shown remarkable effectiveness in learning complex observation-reference relationships from large datasets, but they often lack interpretability and the capability to generalize to different degradation problems. Hybrid approaches (e.g., plug-and-play and algorithm unrolling) have started to appear, providing both interpretable and effective results.
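
To anchor the inverse-problem view, here is a minimal NumPy sketch of Tikhonov-regularized inverse filtering for deblurring, i.e., a closed-form solution of argmin_x ||h*x - y||^2 + lam*||x||^2 under a circular-convolution assumption; the PSF and regularization weight are illustrative choices:

    import numpy as np

    def tikhonov_deblur(y, psf, lam=1e-3):
        """Closed-form Tikhonov-regularized deconvolution in the Fourier domain."""
        H = np.fft.fft2(psf, s=y.shape)      # transfer function of the blur
        Y = np.fft.fft2(y)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
        return np.real(np.fft.ifft2(X))

    # Illustrative use: blur a random test image with a 5x5 box PSF, then restore.
    rng = np.random.default_rng(0)
    x_true = rng.random((128, 128))
    psf = np.ones((5, 5)) / 25.0
    y = np.real(np.fft.ifft2(np.fft.fft2(psf, s=x_true.shape) * np.fft.fft2(x_true)))
    x_hat = tikhonov_deblur(y, psf)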

This tutorial provides a comprehensive overview of image restoration problems in optical remote sensing with a detailed presentation of the main classes of techniques. Theoretical concepts are blended with hands-on practical examples that put the ideas described in the tutorial into practice. Participants will gain insights into the challenges specific to optical imagery in remote sensing and learn how to leverage classical to advanced algorithms to address issues ranging from denoising to image sharpening.

Tutorial Learning Objectives

The main goals of this tutorial can be summarized as follows:

  • Introduce the fundamentals of optical remote sensing imagery and sensors focusing on image formation and sources of degradation in optical remote sensing acquisitions
  • Present the classes of image restoration and reconstruction problems mainly found in optical remote sensing (problem statement and their challenges) and the principles of image restoration and reconstruction
  • Give a comprehensive overview of the main image restoration approaches (e.g., principles and main families of techniques) in a common framework and present in details some representative methods
  • Guide the user to the best practices for addressing image restoration problems, including data loading, pre-processing, the set-up of the solvers and the quality assessment of the obtained results (quality metrics and practices in comparison and validation)
  • Provide a comparison between different techniques with practical examples from remote sensing acquisitions
  • Gain familiarity with the implementation of different approaches for image restoration through practical experiments, in order to make the participants immediately productive and able to integrate the tools into their workflows.

Prerequisites

Basic knowledge of remote sensing concepts and image processing fundamentals is recommended. Familiarity with Python is useful for the practical session. Working examples will be provided as Jupyter notebooks. The participants are invited to bring their laptop equipped with a recent Python installation. The Python libraries needed will be shared before the tutorial.

Sun, 7 Jul, 16:00 - 19:00 Greece Time (UTC +2)
Tutorial HD-3: Electromagnetic scattering from the sea surface: basic theory and applications

Presented by: Antonio Iodice (Università di Napoli Federico II, Italy) and Gerardo Di Martino (Università di Napoli Federico II, Italy)

Location: MC 3.2

Description

The estimation of wind speed and sea-state parameters is of fundamental importance in weather forecasting and environmental monitoring, as well as in support of ship traffic monitoring, where it provides ancillary information. Microwave remote sensing instruments play a key role in the estimation of physical parameters from large-scale ocean observations. This is based on the availability of direct scattering models, able to describe the interaction between the incident electromagnetic field and the sea surface, thus providing meaningful relationships between the measured return and the physical parameters of interest.

The modeling of the electromagnetic return from the sea surface requires, as a first fundamental step, appropriate descriptions of the surface itself. Indeed, the complexity of the sea surface's multi-scale behavior can be well characterized as a superposition of waves with different wavelengths, each generated according to different physical mechanisms (basically, wind forcing and combinations of gravity and water surface tension). It is convenient and customary to model this surface as a random process, which can be described through different statistical parameters, such as root-mean-square (rms) height, rms slopes, and power spectral density (PSD). Indeed, several PSD models have been developed over the years, aimed at capturing the main multiscale features of the sea surface.

Once an appropriate description of the surface is available, one can move to the second step, i.e., the evaluation of the scattered field, which for the randomly rough sea surface can be obtained through approximate analytical solutions, under the Kirchhoff approach (KA) (e.g., the geometrical optics and physical optics approximations) and the Small Perturbation Method (SPM). However, these models have a limited validity range in terms of bistatic configurations and surface roughness, so more advanced models have been developed and are frequently used, such as the two-scale model (TSM) and the Small Slope Approximation (SSA). Standard TSM and SSA, however, require the numerical evaluation of possibly multi-dimensional integrals. Fully analytical solutions for the TSM and SSA have recently been developed by the speakers of this tutorial.

This tutorial provides a concise, but complete, description of the abovementioned topics. In particular, the main sea surface statistical descriptors and PSD models are discussed. The main details of the models for the evaluation of electromagnetic scattering from rough surfaces, in general, and the sea surface, in particular, are provided, with specific care to the application scenario and validity limits of each model. The general bistatic configuration is considered, thus paving the way for applications to a wide set of microwave sensors, such as Synthetic Aperture Radar (SAR), altimeter, scatterometer, and Global Navigation Satellite Systems reflectometry (GNSS-R): the basic characteristics of each of these sensors will be also briefly discussed. The hands-on part will focus on the inversion of physical parameters: relevant Matlab code and sample data will be provided to the audience.
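
To connect the theory to the hands-on part, here is a NumPy sketch of one common closed form of the monostatic geometrical-optics (GO) model for an isotropic Gaussian rough surface; the Fresnel coefficient and mean-square-slope values are illustrative assumptions, and the tutorial's own Matlab codes implement the full models:

    import numpy as np

    def nrcs_geometrical_optics(theta_deg, mss, r0=0.61):
        """GO normalized radar cross section vs incidence angle.

        mss: mean-square slope of one slope component (isotropic surface);
        r0:  Fresnel reflection coefficient at normal incidence (assumed value).
        """
        theta = np.radians(theta_deg)
        return (abs(r0) ** 2 / (2.0 * mss * np.cos(theta) ** 4)
                * np.exp(-np.tan(theta) ** 2 / (2.0 * mss)))

    # Illustrative: NRCS (in dB) near nadir for two assumed sea states.
    theta = np.linspace(0.0, 25.0, 6)
    for mss in (0.01, 0.03):   # roughly, calmer vs windier sea
        print(mss, np.round(10 * np.log10(nrcs_geometrical_optics(theta, mss)), 1))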

Tutorial Learning Objectives

After attending this tutorial, participants should have an understanding of:

  • The statistical description of natural rough surfaces, including sea surfaces.
  • Relationship between sea surface statistics (spectrum, rms height, rms slopes) and wind speed and direction.
  • Fundamental physics of bistatic scattering of electromagnetic waves from rough surfaces.
  • Basic theory and validity ranges of main approximate scattering models (Kirchhoff approximation, small perturbation method, small slope approximation, two-scale model) and their application to scattering from the sea surface.
  • Relationship between scattered field and sea surface statistics (and, hence, wind speed and direction).
  • Main sensors employed for remote sensing of the sea (altimeter, scatterometer, synthetic aperture radar, global navigation satellite system reflectometry).
  • Use of sea scattering models for the retrieval of significant wave height, wind intensity, wind direction, sea spectrum via remote sensing techniques.

Prerequisites

Suitable for PhD students, research engineers, and scientists. Basic knowledge of electromagnetics and of random variables and processes would be useful. In the "hands-on" part, Matlab codes will be provided, so it would be helpful for participants to bring their own laptops. There are no special requirements.

Sun, 7 Jul, 16:00 - 19:00 Greece Time (UTC +2)
Tutorial HD-5: Remote Sensing with Reflected Global Navigation Satellite System (GNSS-R) and other Signals of Opportunity (SoOp)

Presented by: James L. Garrison (Purdue University), Adriano Camps (Universitat Politècnica de Catalunya (UPC)) and Estel Cardellach (Spanish National Research Council - ICE-CSIC, IEEC)

Location: MC 3.4

Description

Although originally designed for navigation, signals from Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo and COMPASS, exhibit strong reflections from the Earth's land and ocean surfaces. Rough-surface scattering modifies the properties of the reflected signals, and several methods have been developed for inverting these effects to retrieve geophysical data such as ocean surface roughness (winds) and soil moisture.

Extensive sets of airborne GNSS-R measurements have been collected over the past 20 years. Flight campaigns have included penetration of hurricanes with winds up to 60 m/s and flights over agricultural fields with calibrated soil moisture measurements. Fixed, tower-based GNSS-R experiments have been conducted to make measurements of sea state, sea level, soil moisture, ice and snow as well as inter-comparisons with microwave radiometry.

GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments. The power and mass of GNSS-R instruments can be made low enough to enable deployment on small satellites, balloons and UAVs. Early research sets of satellite-based GNSS-R data were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014) and the 8-satellite CYGNSS constellation (2016). HydroGNSS, to be launched in 2024, will use dual-frequency and dual-polarized GNSS-R observations with principal science goals addressing land surface hydrology (soil moisture, inundation and the cryosphere). The availability of spaceborne GNSS-R data, and the development of new applications from these measurements, is expected to increase significantly following the launch of these new satellite missions and other smaller ones (ESA's PRETTY and FSSCat; China's FY-3E; Taiwan's FS-7R).

Recently, methods of GNSS-R have been applied to satellite transmissions in other frequencies, ranging from VHF (137 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp.

This half-day tutorial will summarize the current state of the art in physical modeling, signal processing and application of GNSS-R and SoOp measurements from fixed, airborne and satellite-based platforms.
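
To make the delay-Doppler map (DDM) idea concrete ahead of the session, here is a toy NumPy sketch that forms a DDM by correlating a simulated baseband record against a code replica over a grid of Doppler shifts; the stand-in code, sampling rate, delay, and noise are all synthetic placeholders:

    import numpy as np

    def delay_doppler_map(signal, replica, fs, doppler_bins):
        """Toy DDM: Doppler-compensate, then correlate over delay via the FFT."""
        n = len(signal)
        t = np.arange(n) / fs
        ddm = np.zeros((len(doppler_bins), n))
        for i, fd in enumerate(doppler_bins):
            compensated = signal * np.exp(-2j * np.pi * fd * t)  # remove Doppler fd
            corr = np.fft.ifft(np.fft.fft(compensated) * np.conj(np.fft.fft(replica)))
            ddm[i] = np.abs(corr) ** 2   # non-coherent power, one look
        return ddm

    # Synthetic 1-ms record: a PRN-like code, delayed and Doppler-shifted.
    fs, n = 2.046e6, 2046
    code = np.sign(np.random.randn(n))          # stand-in for a real PRN code
    t = np.arange(n) / fs
    signal = np.roll(code, 300) * np.exp(2j * np.pi * 1500 * t) \
             + 0.5 * np.random.randn(n)
    ddm = delay_doppler_map(signal, code, fs, np.arange(-3000, 3001, 500))
    # The peak of `ddm` should sit near delay bin 300 and Doppler 1500 Hz.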

Tutorial Learning Objectives

After attending this tutorial, participants should have an understanding of:

  • The structure of GNSS signals, and how the properties of these signals enable remote sensing measurements, in addition to their designed purpose in navigation.
  • Generation and interpretation of a delay-Doppler map.
  • Fundamental physics of bistatic scattering of GNSS signals from rough surfaces and the relationship between properties of the scattered signal and geophysical variables (e.g. wind speed, sea surface height, soil moisture, ice thickness)
  • Conceptual design of reflectometry instruments.
  • Basic signal processing for inversion of GNSS-R observations.
  • Current GNSS-R satellite missions and the expected types of data to become available from them.

Prerequisites

Basic concepts of linear systems and electrical signals. Some understanding of random variables would be useful.

Sun, 7 Jul, 16:00 - 19:00 Greece Time (UTC +2)
Tutorial HD-6: Physics Guided and Quantum Artificial Intelligence for Earth Observation: towards Digital Twin Earth for Climate Change Adaptation

Presented by: Mihai Datcu (POLITEHNICA București)

Location: MC 3.3

Description

Climate change models describe phenomena at scales of thousands of kilometers and over many decades. However, adaptation measures must be applied at the scales of human activities, from 10 m to 1 km and from days to months. It is in the scope of this tutorial to promote the opportunities offered by the availability of Big EO Data, with a broad variety of sensing modalities, global coverage, and more than 40 years of observations, in synergy with the new resources of AI and quantum computing. The concept is in line with the “Destination Earth” initiative (DestinE), which promotes the use of Digital Twins (DTs) as actionable digital media in support of adaptation measures. The DTs implement virtual, dynamic, continuously updated models of the world, enabling simulations while providing more specific, localized and interactive information on climate change and how to deal with its impacts. These DTs are tools to interact widely with people, raise awareness, and amplify the use of existing climate data and knowledge services for the elaboration of local and specific adaptation measures. That is a step towards a citizen-driven approach with an increased societal focus. The tutorial introduces the concept of a federated interactive system of DTs that provides, for the first time, an integrated view of how climate-change phenomena impact human activities and supports adaptation measures.

Tutorial Learning Objectives

Digital and sensing technologies, i.e. Big Data, are revolutionary developments massively impacting the Earth Observation (EO) domains, and Artificial Intelligence (AI) now provides the methods to valorize these Big Data. The presentation covers the major developments of hybrid, physics-aware AI paradigms at the convergence of forward modelling, inverse problems, and machine learning, used to discover causalities and make predictions that maximize the information extracted from EO and related non-EO data. The tutorial explains how to automatize the entire chain from multi-sensor EO and non-EO data to the physical parameters required in applications, by filling the gaps and generating relevant, understandable layers of information. Today we are at the edge of a quantum revolution, impacting technologies in communication, computing, sensing, and metrology. Quantum computers and simulators are becoming ever more widely accessible and will thus certainly impact the EO domains. In this context, the tutorial lays the bases of information processing from the perspective of quantum computing, algorithms, and sensing. The presentation will cover an introduction to quantum information theory, quantum algorithms, and quantum computers, with first results analyzing the main perspectives for EO applications.

The tutorial will cover the following main topics:

  • High resolution EO imaging for climate change adaptation
  • Digital Twin Earth: architectures of federated interactive systems
  • Physics Guided Hybrid AI methods for EO
  • AI for Satellite Image Time Series and prediction
  • Teleconnections and causality
  • The potential of Quantum ML algorithms for EO
  • Applications and use cases for climate change adaptation

Prerequisites

The tutorial addresses MS and PhD students or scientists with a background in EO and geosciences and elementary knowledge of ML/DNN methods.

Sun, 7 Jul, 16:00 - 19:00 Greece Time (UTC +2)
Tutorial HD-7: A Practical Session on Deep Learning Advances for Monitoring and Forecasting Natural Hazards

Presented by: Ioannis Prapas (National Technical University of Athens & National Observatory of Athens), Spyros Kondylatos (National Technical University of Athens & National Observatory of Athens), Nikolaos-Ioannis Bountos (National Technical University of Athens & National Observatory of Athens), Maria Sdraka (National Technical University of Athens & National Observatory of Athens), and Ioannis Papoutsis (National Technical University of Athens & National Observatory of Athens)

Location: MC 3 Hall

Description

Deep Learning (DL) provides significant potential for advancing natural hazard management, yet its implementation poses certain challenges. First, the training of DL models requires the meticulous handling of big Earth Observation datasets. Second, the rare occurrence of natural hazards results in skewed distributions, limiting the availability of positive labeled examples and significantly hampering model training. Moreover, the demand for dependable, trustworthy, uncertainty-aware models for operational decision-making during such critical events further escalates the complexity of this endeavor. This tutorial aims to provide participants with practical tools and theoretical insights necessary to navigate and surmount these obstacles effectively. The primary objectives of this tutorial are:

  • Provide participants with actionable insights and hands-on experience in managing data pertinent to natural hazard issues, focusing on the access and manipulation of spatio-temporal data cubes.
  • Deliver a comprehensive theoretical framework outlining the fundamental challenges and potential solutions in leveraging Deep Learning for natural hazards applications.
  • Facilitate hands-on sessions that simulate real-world scenarios, such as rapid flood mapping and response, enabling participants to apply their learning in practical contexts.

All the sessions will be hands-on and accompanied by Jupyter notebooks. The tutorial is organized as follows:

  • Spatiotemporal Datacubes for Earth System Modeling:
    • Guidelines on accessing and handling of spatio-temporal datacubes
    • Creation of DL datasets from spatiotemporal datacubes (see the sketch after this list)
    • Application of DL pipelines for forecasting problems
    • Common pitfalls and insights on spatio-temporal forecasting
  • DL use-cases for disaster management:
    • Rapid flood mapping using Synthetic Aperture Radar timeseries
    • Wildfire forecasting using Bayesian/uncertainty-aware methods
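
As a sketch of what "creation of DL datasets from spatiotemporal datacubes" can look like in practice, here is a minimal, hedged example that slices (past window, next step) training samples out of a synthetic xarray datacube; the variable names and sizes are illustrative assumptions, not the tutorial's actual data:

    import numpy as np
    import pandas as pd
    import xarray as xr

    # Hypothetical daily datacube of hazard drivers plus a target variable.
    cube = xr.Dataset(
        {v: (("time", "y", "x"), np.random.rand(365, 32, 32).astype("float32"))
         for v in ("ndvi", "temperature", "burned")},
        coords={"time": pd.date_range("2023-01-01", periods=365, freq="D")},
    )

    def make_samples(ds, history=30, horizon=1):
        """Build (past window -> future target) pairs along the time axis."""
        inputs, targets = [], []
        for t in range(history, ds.sizes["time"] - horizon + 1):
            past = ds[["ndvi", "temperature"]].isel(time=slice(t - history, t))
            inputs.append(past.to_array().values)        # (var, history, y, x)
            targets.append(ds["burned"].isel(time=t + horizon - 1).values)
        return np.stack(inputs), np.stack(targets)

    X, y = make_samples(cube)
    print(X.shape, y.shape)   # (335, 2, 30, 32, 32) (335, 32, 32)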

Tutorial Learning Objectives

  • Offer hands-on experience with Python Jupyter notebooks focused on disaster management.
  • Provide practical knowledge of handling and accessing spatiotemporal datacubes.
  • Offer clear guidelines for addressing key challenges in Deep Learning for natural hazards management, including label and data scarcity, class imbalances, and noisy labels.
  • Share best practices and actionable tips for spatio-temporal forecasting using Earth Observation data.
  • Demonstrate real-life applications of DL, such as wildfire forecasting and flood mapping.
  • Provide practical experience with advanced ML concepts, including self-supervised learning and Bayesian/uncertainty-aware models.

Prerequisites

We assume basic knowledge of Python and Deep Learning. Ideally, participants will have previous experience with a Deep Learning framework (e.g., PyTorch, TensorFlow). Attendees are required to bring a laptop and have a Google Colab account.