Events

Webinar Series in Spatial Data Science 2022

Técnico - Campus Alameda

Closing session – In Memoriam: An overview of the many contributions of José Manuel Bioucas-Dias to remote sensing (12:30, room V0.15)

CERENA will organise the “Webinar Series in Spatial Data Science… think spatially about your data science problems” from 11th May to 8th June 2022. The webinars will take place via Zoom, on Wednesdays, at 12:30 p.m. (WEST).

The closing session will be held in person, on June 29, in memory of José Manuel Bioucas-Dias.

Spatial data science methods are now an essential part of the core IT business of big tech companies such as Google, Uber, and Amazon, and are integrated into a wide range of business areas through SDS apps (some examples: Meight Driver, Passworks, Hole19, GIRA).

This webinar cycle covers different methods and applications of Spatial Data Science.

Attendance is free. Registration is required. The Zoom link will be sent by email to registered participants.

Program

  • May 11, 2022
    From Earth Observation to Artificial Intelligence: a journey towards urban climate adaptation.
    Ana Oliveira, Vasco Leal, Manuel Galamba, Rita Cunha, António Lopes, Ezequiel Correia, Amilcar Soares and Samuel Niza.
  • May 18, 2022
    Subsoil images reconstruction by using Generative Adversarial Networks.
    Helga Jordão, Leonardo Azevedo and Amilcar Soares.
  • May 25, 2022
    Interacting with Remote Sensing Imagery Through Natural Language.
    Bruno Martins and João Daniel Silva.
  • June 1, 2022
    Passive Wi-Fi monitoring in the wild: a long-term study across multiple location typologies.
    Miguel Ribeiro, Nuno Jardim Nunes, Valentina Nisi and Johannes Schöning.
  • June 8, 2022
    Back to basics: denoising Poissonian images using optimal linear estimation.
    Mário Figueiredo.

Abstracts

  • From Earth Observation to Artificial Intelligence: a journey towards urban climate adaptation.
    As climate change prospects point towards the pressing need for local-scale adaptation measures, heat exposure becomes one of the key aspects in determining the health of the urban environment. In addition, many western metropolises are characterized by an ageing population, which may lead to increased community-level sensitivity to heat extremes – as is the case in many European Functional Urban Areas (FUAs), including Greater Lisbon (hereinafter Lisbon). Lisbon already has a track record of being regularly exposed to severe heatwaves (HW), and regional climate change prospects point to their aggravation in the coming decades (in frequency, duration, and severity), as in other Southern European cities. Accordingly, there is a pressing need to pinpoint the urban locations where people are relatively more exposed to excess heat, which can lead to dehydration, cerebrovascular accidents or thrombogenesis.
    In this study, air temperature measurements from citizen-owned meteorological stations are retrieved from open data platforms, quality-controlled, and co-located with Earth Observation (EO) data and products to downscale the official air temperature forecasts (from deterministic numerical weather prediction, NWP) from the native regional scale (2.5 km) to a finer spatial resolution (200 m). As the NWP model resolves the regional physical processes, the high-resolution Machine Learning (ML) output adjusts its bias to the specificities of each urban location, accurately predicting the local contribution of the urban heat island (UHI) effect and quantifying the heat anomaly at the neighbourhood scale. The cooling effect of the urban green infrastructure is also detected, providing measurable scenarios to support future urban greening initiatives. In addition, these results make it possible to identify short-term critical areas during heatwave events, supporting local public health stakeholders in their decision-making – i.e., regarding where and when to act.
  • Subsoil images reconstruction by using Generative Adversarial Networks.
    The inevitable energy transition (Paris Agreement on Climate Change) will necessarily imply an increased demand for mineral raw materials. New discoveries of metal resources will, in the near future, tend to be made in deeper and more complex geological environments.
    In these environments, the characterization of the spatial domain of different geological bodies has been one of the most important challenges in the assessment of mineral resources and their uncertainty. The current practice of geological spatial domain characterization has severe limitations: the models usually applied for this purpose are deterministic, driven by expert geological judgement and control, hence with no uncertainty or risk attached to the predictions; and the characterization of these models is extremely time-consuming, which severely limits the fast integration of new data and the updating of the models.
    We propose herein a machine-learning model that does not attempt to assess the unknown reality, and therefore the uncertainty associated with this process, but rather to mimic the process of expert interpretation of the spatial geological domains. For this purpose, we implement a deep learning method, a Generative Adversarial Network, to automatically delimit the geological domains conditioned on the available data (drill holes). The results show that this approach effectively mitigates the main limitations of the traditional practice of ore-type modeling. The proposed method is applied to a real case study, a copper and zinc sulfide deposit in the south of Portugal.
  • Interacting with Remote Sensing Imagery Through Natural Language.
    Image captioning and Visual Question Answering (VQA) are exciting problems that combine natural language processing and computer vision techniques, and they are currently attracting significant interest. Previous efforts have looked into these tasks in the context of remote sensing imagery, which can provide a framework for extracting generic information from Earth observation data. One can, for instance, imagine a natural language interface to a system such as Google Earth, allowing users to ask questions about the presence and quantity of particular objects within an aerial scene, or to ask for a description of the contents of the scene. State-of-the-art approaches to these problems are based on deep neural networks, with systems often following architectures that feature a convolutional encoder for the image contents, and a recurrent encoder (for encoding the question in VQA) or decoder (for captioning). This talk will describe models based on an alternative architecture, replacing or complementing the convolutional and recurrent components with self-attention operations (i.e., models based on the Transformer architecture). The talk will present and discuss particularities of the VQA and captioning tasks in the domain of remote sensing imagery, present models based on self-attention for addressing these tasks, and discuss experimental results together with limitations of the datasets currently used for assessing system performance.
  • Passive Wi-Fi monitoring in the wild: a long-term study across multiple location typologies.
    In this talk, we present a systematic analysis of large-scale human mobility patterns obtained from a passive Wi-Fi tracking system deployed across different location typologies. We have deployed the system to cover urban areas served by public transportation as well as very isolated and rural areas. Over 4 years, we collected 572 million data points from a total of 82 routers covering an area of 2.8 km². We describe a systematic analysis of the data and discuss how our low-cost approach, by inferring presence characteristics against several sources of ground truth, can help communities and policymakers make decisions to improve people’s mobility at high temporal and spatial resolution. We also present an automatic classification technique that can identify location types based on the collected data.
  • Back to basics: denoising Poissonian images using optimal linear estimation.
    Denoising Poissonian images is a longstanding problem in image processing, with many applications in medical imaging, remote sensing, astronomical imaging, and other areas. The key difficulty in Poisson denoising is that the noise is not independent of the signal. Many methods have been proposed for this task in the past decades, using different tools, from wavelets to patch-based methods and, more recently, deep neural networks. In this talk, I will describe how a basic tool in statistics, optimal linear estimation, can be used to boost any Poisson denoiser, achieving performance on par with the deep-learning-based state of the art, at a small fraction of the computational cost and without any training.
    This was joint work with my late colleague José Bioucas-Dias and our student Milad Niknejad.
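To give a flavour of the optimal linear estimation idea behind the last talk, here is a minimal toy sketch (not the speakers' actual method, which boosts a full denoiser). For a pixel intensity x with prior mean μ and variance Var(x), a Poisson observation y satisfies E[y] = μ and Var(y) = Var(x) + μ, so the best affine estimate of x from y is x̂ = μ + Var(x)/(Var(x) + μ) · (y − μ). The gamma prior and sample size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clean intensities drawn from a known prior (mean 20, variance 100).
x = rng.gamma(shape=4.0, scale=5.0, size=10_000)
# Signal-dependent Poisson observations: Var(y | x) = x.
y = rng.poisson(x).astype(float)

# Prior moments (known here; in practice they can be estimated from y,
# since E[y] = E[x] and Var(y) = Var(x) + E[x] for Poisson noise).
mu = x.mean()
var_x = x.var()

# Optimal affine (linear MMSE, Wiener-type) estimator:
#   x_hat = mu + Var(x) / (Var(x) + E[x]) * (y - mu)
gain = var_x / (var_x + mu)
x_hat = mu + gain * (y - mu)

mse_raw = np.mean((y - x) ** 2)      # ≈ E[x], the raw Poisson noise level
mse_lin = np.mean((x_hat - x) ** 2)  # lower: the estimator shrinks toward mu
print(mse_raw, mse_lin)
```

Because the gain lies strictly between 0 and 1, the estimator shrinks noisy observations toward the prior mean, trading a small bias for a large variance reduction.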

Closing session (flyer)

  • June 29, 2022, 12:30 – Venue: Civil Engineering building, room V0.15.
    In Memoriam: An overview of the many contributions of José Manuel Bioucas-Dias to remote sensing.
    Mário Figueiredo