The following abstracts have been accepted for the Spatial AI Challenge 2025-26, hosted on the I-GUIDE Platform. These submissions exemplify innovation in AI-ready spatial data, machine learning models, and applications, while promoting FAIR data principles and open science practices.
We congratulate all the authors on their successful submissions and look forward to their contributions to advancing spatial AI and open science. Participants are reminded of the upcoming deadlines listed below. For more information, please consult the FAQ.
- Initial Submission Deadline: March 15, 2026
- Final Submission Deadline: April 15, 2026
- Announcement of Winners: May 15, 2026
A Multimodal, Physically Informed Approach to Fine-Scale Wildfire Fuel Mapping in Alaska
Team Member:
- Chenyan Lu, Arizona State University
Summary: Wildfires are increasing in frequency and intensity due to climate and land-use changes, posing major threats to ecosystems and communities. Interior Alaska’s 78 million hectares of boreal forest are especially vulnerable. Accurate, high-resolution wildfire fuel mapping is essential for effective risk assessment and fire spread prediction. However, most existing fuel maps rely on coarse 30-meter Landsat data, limiting their utility for capturing fine-scale spatial variation and Wildland-Urban Interface (WUI) dynamics. These maps and their development rulesets do not transfer well to higher-resolution imagery. To address these limitations, this project proposes a high-precision fuel mapping pipeline for Alaska using multi-source imagery, with built-in quality control and uncertainty assessment. The approach supports near-real-time mapping and broader applicability without heavy reliance on site-specific knowledge, enhancing the practicality of fuel maps for localized planning and infrastructure-level decision-making.
An Integrated GeoAI Framework for Instance-Level Disaster Impact Assessment Using Multimodal Remote Sensing and Demographic Exposure Modeling
Team Members:
- Jikun Liu, Texas A&M
- Xihan Yao, University of Texas at Austin
- Akhil Anil Rajput, Texas A&M
- Asritha Bugada, North Carolina State University
Summary: This project introduces a building-centric spatial AI pipeline for assessing disaster impacts on infrastructure and populations. Using satellite imagery, structural delineation, probabilistic damage modeling, and demographic downscaling, the system estimates damage and affected populations at the building level. Initial results show stable footprint extraction and reasonably calibrated damage scores, though improvements are ongoing. Demographic proxies align with Census data when aggregated, but rules require refinement. A wildfire case study demonstrated end-to-end processing, identifying key areas for algorithmic improvement. The pipeline adheres to FAIR data principles and emphasizes reproducibility through open models and datasets. Future work will explore alternative modeling approaches, refine demographic scaling using ground truth, and expand uncertainty quantification. The ultimate goal is to provide a transparent, scalable, and reproducible framework that supports emergency response, climate resilience, and equitable planning.
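For readers curious about the demographic downscaling step, a minimal sketch is shown below, assuming a simple dasymetric allocation of tract population to building footprints by area combined with per-building damage probabilities; the column names, weights, and values are illustrative and not the team’s actual rules.

```python
# Minimal sketch (not the team's code) of dasymetric demographic downscaling:
# tract population is allocated to building footprints in proportion to footprint
# area, then combined with per-building damage probabilities to estimate the
# affected population. All values and column names are illustrative.
import pandas as pd

buildings = pd.DataFrame({
    "building_id": [1, 2, 3, 4],
    "tract_id":    ["A", "A", "A", "B"],
    "area_m2":     [120.0, 300.0, 80.0, 200.0],   # footprint area as an occupancy proxy
    "p_damage":    [0.9, 0.2, 0.6, 0.1],          # probabilistic damage score
})
tract_pop = pd.Series({"A": 250, "B": 40})        # census population per tract

# Allocate tract population proportionally to footprint area within each tract.
weights = buildings["area_m2"] / buildings.groupby("tract_id")["area_m2"].transform("sum")
buildings["est_pop"] = weights * buildings["tract_id"].map(tract_pop)

# Expected affected population per building; tract sums reproduce the census totals.
buildings["affected"] = buildings["est_pop"] * buildings["p_damage"]
print(buildings[["building_id", "est_pop", "affected"]])
```

Because the allocation weights sum to one within each tract, the building-level estimates aggregate back to the census totals, mirroring the aggregation check mentioned in the summary.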
Automated 25-m Local Climate Zone Mapping for CONUS with a Hierarchical Spatial-AI Framework
Team Member:
- Didarul Islam, University of Idaho
Summary: This project introduces a scalable Spatial-AI workflow for producing 25 m resolution Local Climate Zone (LCZ) maps using open multisensor data and a two-stage hierarchical model. Training samples are automatically generated using rule-based methods that integrate GEDI height metrics, building footprints, nighttime lights, population, and optical/SAR indices, minimizing manual labeling and improving consistency across regions. A global Random Forest model learns diverse LCZ patterns, while local tile-based models refine predictions with spatial context. Outputs are mosaicked into seamless layers suitable for urban heat mitigation, air quality analysis, and environmental planning. A 25 m LCZ map of the conterminous U.S. demonstrates national-scale application. The method is reproducible, cloud-deployable, and improves accuracy, generalizability, and spatial coherence. It provides a practical, data-driven foundation for climate-ready urban analytics and decision-making in cities worldwide.
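The two-stage hierarchical design (a global model plus local tile-based refinement) can be sketched as follows with synthetic data and scikit-learn Random Forests; stacking the global class probabilities into the local models, and all hyperparameters, are assumptions rather than the author’s configuration.

```python
# Minimal sketch (assumptions, not the author's pipeline) of a two-stage
# hierarchical classifier: a global Random Forest learns broad LCZ patterns, and
# per-tile local models refine predictions using the global class probabilities
# as additional context features. Data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d, n_classes, n_tiles = 3000, 8, 5, 3
X = rng.normal(size=(n, d))                      # stand-ins for GEDI/SAR/optical features
y = rng.integers(0, n_classes, size=n)           # stand-ins for LCZ labels
tile = rng.integers(0, n_tiles, size=n)          # tile membership

# Stage 1: global model trained on all rule-based samples.
global_rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Stage 2: one local model per tile, stacked on the global probabilities.
local_models = {}
for t in range(n_tiles):
    m = tile == t
    X_aug = np.hstack([X[m], global_rf.predict_proba(X[m])])
    local_models[t] = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y[m])

def predict(X_new, tile_new):
    """Route each pixel through its tile's refined model."""
    proba = global_rf.predict_proba(X_new)
    out = np.empty(len(X_new), dtype=int)
    for t, model in local_models.items():
        m = tile_new == t
        if m.any():
            out[m] = model.predict(np.hstack([X_new[m], proba[m]]))
    return out

print(predict(X[:5], tile[:5]))
```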
Earth4D: Multi-Resolution Hash Encoding of 4D (x, y, z, t) Spacetime Coordinates
Team Members:
- Lance Legel, ecodash.ai
- Qin Huang, Arizona State University
- Brandon Voelker, University of Houston
- Daniel Neamati, Stanford University
Summary: Earth4D is a novel 4D spatiotemporal encoder designed for planetary-scale deep learning using Earth observation data. It extends NVIDIA’s multi-resolution hash encoding to four dimensions (x, y, z, t), with embedding tables for spatial coordinates (ECEF) and three orthogonal spatiotemporal combinations (xyt, yzt, xzt). Earth4D is implemented as a modular PyTorch module, supporting integration with deep learning models at 0.1-meter spatial and 1-hour temporal precision across a 200-year range. Despite having 200 million parameters, it trains efficiently using just 4 GB of GPU memory. Applications include predicting Live Fuel Moisture Content and global 64-dimensional embeddings, achieving high accuracy and low error rates. Earth4D’s learned positional encodings enhance performance across tasks like climate modeling, ecology, urban planning, and disaster response. Open-source and scalable, Earth4D provides a robust foundation for spatiotemporal AI in geospatial and Earth sciences.
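A much-simplified PyTorch sketch of the 4D hash-encoding idea follows: each coordinate triplet (xyz, xyt, yzt, xzt) is hashed into per-level embedding tables and the features are concatenated. The table sizes, hash constants, and omission of interpolation are illustrative simplifications, not Earth4D’s implementation.

```python
# Minimal sketch (heavily simplified, not Earth4D itself) of multi-resolution hash
# encoding extended to 4D: each coordinate triplet (xyz, xyt, yzt, xzt) is hashed
# into per-level embedding tables and the results are concatenated. Production
# implementations add (tri)linear interpolation between grid corners.
import torch
import torch.nn as nn

class Hash4DEncoder(nn.Module):
    def __init__(self, levels=4, table_size=2**14, feat_dim=2, base_res=16):
        super().__init__()
        self.levels, self.table_size, self.base_res = levels, table_size, base_res
        self.triplets = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]  # xyz, xyt, yzt, xzt
        self.tables = nn.ModuleList([
            nn.Embedding(table_size, feat_dim)
            for _ in range(levels * len(self.triplets))
        ])
        self.primes = torch.tensor([1, 2654435761, 805459861], dtype=torch.long)

    def forward(self, coords):                       # coords: (N, 4), normalized to [0, 1)
        feats, k = [], 0
        for lvl in range(self.levels):
            res = self.base_res * (2 ** lvl)
            grid = torch.floor(coords * res).long()  # nearest grid corner (no interpolation)
            for dims in self.triplets:
                idx = (grid[:, list(dims)] * self.primes).sum(-1) % self.table_size
                feats.append(self.tables[k](idx))
                k += 1
        return torch.cat(feats, dim=-1)              # (N, levels * 4 * feat_dim)

enc = Hash4DEncoder()
xyzt = torch.rand(8, 4)                              # normalized (x, y, z, t) samples
print(enc(xyzt).shape)                               # torch.Size([8, 32])
```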
GeoAgent4Disaster: An Autonomous Multi-Agent Framework for Multimodal Disaster Assessment
Team Members:
- Wenjing Gong, Texas A&M
- Xinyue Ye, University of Alabama
- Lei Zou, Texas A&M
- Yifan Yang, Texas A&M
- Kelly Zhang, Texas A&M
Summary: Accurate, rapid disaster damage assessment is essential for emergency response, yet current models are slow, data-intensive, and limited in scope. They often require extensive annotation, retraining, and rely on single-event or single-modal data, hindering scalability and real-time applicability. This project introduces GeoAgent4Disaster, a multi-agent GeoAI framework for hyperlocal, interpretable, real-time disaster assessment. Leveraging recent advances in vision-language foundation models, it enables zero- or few-shot interpretation using satellite imagery, street-view photos, social media, and text. Autonomous agents coordinate perception, spatial reasoning, and data integration to generate actionable situation reports during the critical “golden 36 hours.” The project investigates how multimodal agents can deliver cross-perspective damage understanding, how they integrate spatial logic, and how GeoAI can produce reliable, on-the-fly insights to support equitable, efficient disaster response.
GeoSocial Downscaling of Urban Mobility: A Spatial AI Framework for High-Resolution Service Accessibility
Team Members:
- Rafael Albuquerque
- Jessica Miranda
- Siqin Wang, University of Southern California
- Vinicius Brei, Federal University of Rio Grande do Sul, Brazil
Summary: The GeoSocial Downscaling Model (GSDM), a Spatial AI framework, reconstructs fine-scale (~500 m) accessibility patterns from coarse human mobility data. Tested in São Paulo, GSDM uses a physics-guided U-Net architecture to downscale aggregated mobility flows while enforcing macro-micro consistency via physical constraints. It integrates national mobility metrics with socioeconomic and built-environment covariates, without relying on social media or behavioral traces. Validation includes aggregate consistency, distributional fidelity, and spatial error analysis. Outputs include high-resolution accessibility surfaces, containerized workflows, pre-trained models, and a Spatial AI Model Card with documentation and ethical guidance. All workflows are executed on the I-GUIDE Platform using Jetstream2 GPUs. GSDM provides actionable insights for resilience planning, environmental justice, and equitable service provision, addressing a critical gap in micro-scale urban accessibility modeling and directly advancing I-GUIDE’s mission of reproducible, spatially informed AI for social good.
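One plausible way to express the macro-micro consistency constraint is a loss term that sum-pools the fine-scale prediction back to the coarse grid and penalizes deviation from the observed totals; the sketch below is an assumed formulation, not GSDM’s code.

```python
# Minimal sketch (assumed formulation, not GSDM's code) of a macro-micro
# consistency penalty: fine-scale predictions, summed back to the coarse grid,
# should reproduce the observed coarse mobility totals. This term would be
# weighted against the U-Net's usual reconstruction loss during training.
import torch
import torch.nn.functional as F

def consistency_loss(fine_pred, coarse_obs, factor):
    """fine_pred: (B, 1, H*factor, W*factor); coarse_obs: (B, 1, H, W)."""
    # Sum-pool the fine cells belonging to each coarse cell (avg_pool * factor^2 = sum).
    aggregated = F.avg_pool2d(fine_pred, kernel_size=factor) * factor ** 2
    return F.mse_loss(aggregated, coarse_obs)

coarse = torch.rand(2, 1, 10, 10)                      # coarse mobility totals
fine = torch.rand(2, 1, 80, 80, requires_grad=True)    # ~500 m downscaled output
loss = consistency_loss(fine, coarse, factor=8)
loss.backward()
print(loss.item())
```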
Mapping and Predicting Geographical Locations for Successful Market Entry
Team Members:
- Devika Jain, Harvard University
- Jaiany Rocha Trindade, Federal University of Rio Grande do Sul, Brazil
- Vinicius Brei, Federal University of Rio Grande do Sul, Brazil
Summary: This project develops a Spatial AI-driven Bayesian hierarchical model to identify geographically favorable locations for business success, with an initial pilot in Brazil. Traditional market-entry frameworks focus on market size but often overlook spatial and demographic conditions critical to firm survival, especially in developing or uneven regions. The model integrates municipal-scale open data on demographics, infrastructure, economic indicators, and competition, alongside OpenStreetMap features and pretrained satellite embeddings to ensure computational efficiency. Using Conditional Autoregressive (CAR) priors, the model captures spatial dependence and latent geographic risks, producing probabilistic rankings of areas most conducive to firm entry. Designed for generalization, this scalable, interpretable framework offers a blueprint for inclusive economic development across diverse contexts, supporting evidence-based strategies for resilience, opportunity mapping, and spatial equity. The approach aligns with the Spatial AI Challenge’s goals of open, transferable, and impactful geospatial analytics.
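For readers unfamiliar with CAR priors, a generic form of such a hierarchical model is sketched below; the outcome, covariates, and adjacency weights are illustrative notation rather than the authors’ exact specification.

```latex
% Sketch (assumed notation) of a Bayesian hierarchical entry model with a CAR
% spatial random effect: y_i is a firm entry/survival outcome in municipality i,
% x_i are covariates, and W = (w_ij) is the municipal adjacency matrix.
\begin{aligned}
  y_i \mid p_i &\sim \mathrm{Bernoulli}(p_i), \\
  \operatorname{logit}(p_i) &= \beta_0 + \mathbf{x}_i^{\top}\boldsymbol{\beta} + \phi_i, \\
  \phi_i \mid \boldsymbol{\phi}_{-i} &\sim
    \mathcal{N}\!\left(\alpha \frac{\sum_j w_{ij}\,\phi_j}{\sum_j w_{ij}},\;
    \frac{\tau^{2}}{\sum_j w_{ij}}\right).
\end{aligned}
```

The conditional form makes the spatial dependence explicit: each municipality’s latent effect is shrunk toward the weighted average of its neighbors, which is what allows the model to borrow strength across space and surface latent geographic risk.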
Physics-Informed Neural Networks for Estimating Irrigation Volume and Recharge from Earth Observations
Team Members:
- Esmaeel Adrah, Kent State University
- Daniel Dominguez, Colorado State University
Summary: This project uses spatial AI and satellite Earth Observation to estimate the total volume of water applied for irrigation, addressing a critical gap in sustainable water resource management. Irrigation is the largest agricultural water use, yet total withdrawal volumes and their contributions to groundwater recharge remain poorly quantified, especially in developing regions where irrigation supports food and water security. Existing remote sensing methods often miss deep percolation losses. To overcome this, the project integrates satellite embeddings, spatial AI models, and grid-based geospatial hydrological frameworks. Initial results show strong performance in estimating irrigation volumes, especially over non-irrigated areas. Upcoming validation at two Colorado pilot sites will refine deep percolation estimates. The model enables large-scale irrigation monitoring, enhancing understanding of human-environment interactions and supporting evidence-based water sustainability planning across socio-economic and environmental contexts.
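A physics-informed constraint of the kind described could, for example, penalize violations of a simple grid-cell water balance; the sketch below uses assumed variable names and illustrative values, not the project’s actual loss.

```python
# Minimal sketch (assumed formulation, not the project's model) of a
# physics-informed penalty: the network's irrigation estimate is constrained by a
# grid-cell water balance, irrigation + precipitation = ET + recharge + dStorage,
# in addition to any supervised loss on known withdrawal volumes.
import torch

def water_balance_penalty(irrigation, precip, et, recharge, d_storage):
    """All tensors in mm per time step over the same grid cells."""
    residual = (irrigation + precip) - (et + recharge + d_storage)
    return (residual ** 2).mean()

# Illustrative values for a few cells (mm per month).
irr = torch.tensor([80.0, 0.0, 45.0], requires_grad=True)   # network output
penalty = water_balance_penalty(irr,
                                precip=torch.tensor([20.0, 60.0, 30.0]),
                                et=torch.tensor([90.0, 55.0, 70.0]),
                                recharge=torch.tensor([8.0, 2.0, 4.0]),
                                d_storage=torch.tensor([2.0, 3.0, 1.0]))
penalty.backward()
print(penalty.item(), irr.grad)
```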
Probabilistic Flood Inundation using Physics-Aware Spatial AI
Team Members:
- Jibin Joseph, Purdue University
- Venkatesh Merwade, Purdue University
Summary: Flood disasters are increasing in both frequency and impact, yet reliable inundation products remain sparse outside instrumented reaches. This challenge leverages spatial AI to learn the relationships between hydrologic forcing, topography, and remotely sensed water extent, and to generate transparent, probabilistic inundation maps with quantified uncertainty. We propose a spatial AI framework for rapidly producing high-resolution flood inundation maps (10 m) that can be easily updated and reused across river basins. In this challenge, discharge records, height-above-nearest-drainage (HAND) rasters, digital elevation models (DEMs), and Sentinel-based surface-water observations are fused into an AI-ready dataset for diverse flood events across selected river reaches in the United States. A physics-aware convolutional model then ingests static geospatial layers and a target flow to output pixel-wise inundation probability and corresponding inundation maps.
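A minimal sketch of such a model’s input/output interface, assuming two static layers and a scalar discharge forcing, is given below; the architecture and channel choices are illustrative, not the authors’ network.

```python
# Minimal sketch (architecture assumed, not the authors' model) of a convolutional
# network that ingests static layers (e.g., DEM and HAND tiles) plus the target
# discharge broadcast as an extra channel, and outputs a pixel-wise inundation
# probability map.
import torch
import torch.nn as nn

class FloodCNN(nn.Module):
    def __init__(self, n_static=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_static + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 1),
        )

    def forward(self, static_layers, discharge):
        # static_layers: (B, n_static, H, W); discharge: (B,) scalar forcing per sample
        q = discharge.view(-1, 1, 1, 1).expand(-1, 1, *static_layers.shape[-2:])
        return torch.sigmoid(self.net(torch.cat([static_layers, q], dim=1)))

model = FloodCNN()
static = torch.rand(4, 2, 64, 64)          # e.g., normalized DEM and HAND tiles
q = torch.rand(4)                          # normalized target discharge
prob = model(static, q)                    # (4, 1, 64, 64) inundation probabilities
print(prob.shape)
```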
SMARTS: A Synchronous Multi-Agent Reinforcement Learning-driven Transit Scheduler
Team Members:
- Luyu Liu, Auburn University
- Md. Muzzamil Khattak, Auburn University
Summary: Traditional scheduling methods struggle with volatile demand and dynamic network conditions. SMARTS, a Synchronous Multi-Agent Reinforcement Learning-based Transit Scheduler for real-time, adaptive transit operations, addresses this by simulating transit topologies, traffic, and population density, using a Graph Neural Network with Transformer encoder-decoder architecture. Each transit route is managed by an agent receiving individualized rewards, improving decision quality and coordination. Trained via Proximal Policy Optimization, SMARTS outperforms traditional methods, even without fine-tuning, by reducing average passenger wait times to six minutes, deploying fewer vehicles, and improving fleet utilization by up to 30%. The system generalizes across network types, offering a scalable, interpretable framework for autonomous, graph-based transit scheduling. This approach improves both efficiency and equity in urban mobility planning, paving the way for more responsive, data-driven public transit systems.
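As a purely illustrative assumption about how the individualized rewards might be structured, each route-level agent could be penalized on its own average passenger wait time and the vehicles it deploys, as sketched below; the weights and values are not SMARTS’ actual reward design.

```python
# Minimal sketch (assumed reward design, not SMARTS itself): each route's agent
# receives a reward based on its own average wait time and fleet usage, which a
# PPO learner then maximizes per agent rather than via one shared scalar.
import numpy as np

def route_reward(wait_times_min, vehicles_deployed, wait_weight=1.0, vehicle_cost=0.5):
    """Reward for one route-level agent at the end of a decision interval."""
    return -(wait_weight * np.mean(wait_times_min) + vehicle_cost * vehicles_deployed)

# Two routes observed over one interval: per-passenger waits (minutes) and fleet used.
rewards = {
    "route_A": route_reward(np.array([4.0, 7.5, 6.0]), vehicles_deployed=3),
    "route_B": route_reward(np.array([12.0, 9.0]), vehicles_deployed=2),
}
print(rewards)   # each agent is updated with its own reward signal
```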
Urban Retrofitting Detection with Vision Transformer: Where, When, and What
Team Members:
- Fangzheng Lyu, Virginia Tech
- Raj Bhattarai, Virginia Tech
Summary: Urban retrofitting, the modification of existing urban areas for greater sustainability, is essential for climate mitigation, yet it remains difficult to detect due to its subtle, microscale nature. This project introduces an AI-driven framework that uses high-performance computing and urban big data to quantify and analyze retrofitting. By integrating large-scale street view imagery and socioeconomic data, a Vision Transformer model identifies when, where, and what types of retrofitting occur. Transfer learning enables application across cities globally. The project also assesses retrofitting’s climate impact by tracking changes in urban heat island intensity and evaluates social equity by examining disparities in access and benefits across neighborhoods. Ultimately, this work provides scalable tools to support data-informed, equitable, and climate-resilient urban planning, offering valuable insights to policymakers and practitioners navigating sustainable urban transformation worldwide.
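A minimal sketch of the transfer-learning setup, assuming a torchvision Vision Transformer backbone and hypothetical retrofit categories, is shown below; it illustrates the head-replacement step only, not the project’s actual model or training data.

```python
# Minimal sketch (assumed setup, not the project's code) of adapting a Vision
# Transformer to classify retrofitting types in street-view imagery: a torchvision
# ViT backbone receives a new classification head for N hypothetical categories.
# weights=None keeps the example self-contained; in practice pretrained weights
# would be loaded and fine-tuned via transfer learning.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

NUM_CLASSES = 5                                    # hypothetical retrofit categories
model = vit_b_16(weights=None)                     # swap in pretrained weights in practice
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

images = torch.rand(2, 3, 224, 224)                # a batch of street-view crops
logits = model(images)
print(logits.shape)                                # torch.Size([2, 5])
```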