
# Chapter 12: Explainable AI for understanding ML-derived vegetation products

## Explaining Machine Learning Decisions for Improved Understanding

## Introduction

Machine learning has made remarkable progress in developing autonomous systems that can perceive, learn, predict, and act independently. However, one significant limitation of these systems is their inability to explain their decisions and actions to human users, which hinders their effectiveness. In this beginner-friendly chapter, we dive into the world of explainable artificial intelligence (XAI) to address this crucial challenge.

## LANDFIRE Use Case

Using the U.S. Geological Survey's LANDFIRE Existing Vegetation Type (EVT) product as a case study, we explore how XAI techniques can be applied to black-box models. These models, while powerful in their predictive capabilities, lack transparency, making it difficult for humans to understand the reasoning behind their decisions. By applying XAI, we examine the inner workings of these black-box models, shedding light on the paths their algorithms take and the factors influencing their predictions.
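
No code accompanies this chapter yet, but as a rough sketch of the kind of analysis involved, the example below applies permutation feature importance, a common model-agnostic XAI technique, to a stand-in random-forest classifier. The predictor names, synthetic data, and model choice are illustrative assumptions, not the LANDFIRE EVT workflow itself.

```python
# Illustrative sketch only: the chapter's own code is not yet public.
# Assumes a hypothetical tabular dataset of terrain/spectral predictors
# and a vegetation-type class label standing in for LANDFIRE EVT classes.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical predictor table and vegetation-type labels
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "elevation": rng.uniform(0, 3000, 500),
    "slope": rng.uniform(0, 45, 500),
    "ndvi": rng.uniform(-0.1, 0.9, 500),
})
y = rng.integers(0, 3, 500)  # three stand-in vegetation classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black-box" classifier standing in for the vegetation-mapping model
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: how much does shuffling each predictor hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean_imp in zip(X.columns, result.importances_mean):
    print(f"{name}: {mean_imp:.4f}")
```

Larger drops in accuracy when a predictor is shuffled suggest the model leans on it more heavily; in a real EVT workflow the same idea would extend to spectral bands, terrain derivatives, and other biophysical gradients.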

Moreover, this chapter goes beyond theory and equips scientists and analysts with practical tools to enhance their understanding and trust in the predictions of vegetation types. These tools streamline the development of the LANDFIRE EVT product, enabling more informed decision-making and fostering confidence in the outcomes.

## What it contains

Join us to demystify the inner workings of machine learning models and empower users to grasp the logic behind their decisions. By embracing explainable artificial intelligence, we pave the way for improved transparency, accountability, and trust in the realm of autonomous systems.

No code is publicly available for this chapter yet.