Potential bug in writing DREAM posterior to disk #313

Open
bdsegal opened this issue Jan 24, 2024 · 3 comments

bdsegal commented Jan 24, 2024

Hi all,

Thank you for writing this useful package.

While experimenting with spotpy, my colleagues and I may have encountered a bug in the implementation of the DREAM algorithm, described below. It would be great to get your eyes on this potential bug, as it might lead some users to mistakenly treat the saved results as posterior draws. Thank you in advance, and please let me know if I'm missing anything.

Potential bug: On line 353 of dream.py, within each MC iteration, parameter proposals are passed to self.postprocessing(). The postprocessing() function returns the likelihood, but it also saves parameters to disk as a side effect: on line 481 of _algorithm.py, postprocessing() writes the parameters that are passed to it to disk. Per the documentation on line 45 of dream.py, I believe this is the same file that a user can then import afterwards. The problem is that the proposed parameters are passed to postprocessing() regardless of whether the proposal is accepted or rejected. So while the parameter values for each MC iteration are correctly stored in the dream object (see lines 390-399 of dream.py), the results saved to disk contain all parameter proposals, not necessarily draws representing the posterior (i.e., the results stored in self.bestpar).
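
To make the control flow concrete, here is a rough schematic of what I believe is happening (toy code, not spotpy's implementation; propose, postprocessing, disk_records, and the toy log-likelihood are stand-ins):

import math
import random

# Rough schematic of the control flow described above -- NOT spotpy's actual
# code; the helpers are trivial stand-ins so the structure is concrete.
random.seed(0)

disk_records, bestpar = [], []          # disk_records ~ the database file; bestpar ~ self.bestpar

def propose(state):                     # stand-in for the DREAM proposal step
    return state + random.gauss(0.0, 1.0)

def postprocessing(rep, params):        # stand-in: returns a likelihood and records params
    disk_records.append(params)         # side effect: every proposal is written out
    return -0.5 * params ** 2           # toy log-likelihood

current, current_like = 0.0, 0.0
for rep in range(1000):
    proposal = propose(current)
    like = postprocessing(rep, proposal)            # called before the accept/reject decision
    if math.log(random.random()) < like - current_like:
        current, current_like = proposal, like      # accept
    bestpar.append(current)                         # accepted draw kept only in memory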

Question: Is the above assessment correct, and if so, is that the intended behavior? I'm worried that some users may expect the values written to disk to be the posterior as opposed to all parameter proposals.

Thanks,
Brian

cc @para2x, @dlebauer, @infotroph

thouska (Owner) commented Jan 26, 2024

Hi @bdsegal,
thank you for your detailed report and for using spotpy.

The design of saving all results to disk is indeed intended. This behaviour does not affect any of the internal sampling. It also does not affect the posterior distribution, which, by the original design of the DREAM algorithm, is only generated after convergence of the algorithm. At least that's what I think after double-checking the code, based on your report.

How many runs are performed after convergence can be defined by the user, and it is indeed important to use only this set number of runs as the results afterwards if you want to analyse the posterior distribution. I hope this is understandable when following the DREAM tutorial? If not, I am happy about any suggestions for improvement.
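
For example, something along these lines (just a sketch, assuming the CSV database setup from the tutorial and the runs_after_convergence setting of the dream sampler; the database name and the number of runs are placeholders):

import numpy as np
from spotpy import analyser

# Sketch only: "DREAM_example" and runs_after_convergence are placeholders for
# the dbname and the value used when calling sampler.sample().
runs_after_convergence = 1000
results = analyser.load_csv_results("DREAM_example")
# keep only the runs performed after convergence for the posterior analysis
posterior = results[-runs_after_convergence:]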

bdsegal (Author) commented Jan 26, 2024

Thank you @thouska for your answer and for pointing us to the tutorial. The tutorial is very helpful.

If I'm understanding correctly, it looks like the tutorial uses the post-convergence proposals to analyze the posterior, as opposed to the post-convergence results of the accept/reject decisions (i.e. the values that are added to self.bestpar). Is that right? We would like to use the latter, and it looks like we can directly access those values in self.bestpar, though please let me know if you would recommend another approach.
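
For reference, this is roughly what we had in mind (a minimal sketch, assuming the dream sampler instance is still in scope after sample() returns; bestpar and its layout are internals rather than a documented API, so it may need adjusting):

import numpy as np

# Sketch only: `sampler` is the spotpy.algorithms.dream instance; `bestpar` is
# assumed to hold the accepted parameter values per chain and iteration
# (dream.py lines 390-399). Inspect the shape before slicing.
accepted_draws = np.asarray(sampler.bestpar)
print(accepted_draws.shape)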

My impression is that using the results from the accept/reject decisions as opposed to the raw proposals is in line with Vrugt (2016), which you reference in the dream tutorial, though please let me know if I'm misunderstanding.

I'm also not sure it is necessary to discard all pre-convergence draws, since R-hat measures how well the chains are mixing up to the current iteration (potentially after removing the burn-in period, as is the default behavior in rstan's monitor function). That said, I acknowledge that Vrugt recommends discarding the pre-convergence period as the warmup. It's great that the current output of spotpy provides flexibility in how the results can be used.
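
For context, this is the Gelman-Rubin R-hat computation I have in mind, written out for a single parameter in plain numpy (not spotpy code):

import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """R-hat for one parameter, given an array of shape (n_chains, n_draws)."""
    n = chains.shape[1]
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()      # within-chain variance W
    between = n * chain_means.var(ddof=1)           # between-chain variance B
    var_hat = (n - 1) / n * within + between / n    # pooled variance estimate
    return float(np.sqrt(var_hat / within))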

bdsegal (Author) commented Sep 24, 2024

Hi @thouska. To follow up on this issue, I'm attaching a short example to demonstrate why it's important to use the results of the accept/reject decision, as opposed to all proposals, even after convergence. This example uses a simple Metropolis algorithm, but the principle also applies to more complicated algorithms like DREAM. The takeaway is that even after convergence, the proposals aren't necessarily samples from the target distribution; only the results of the accept/reject decisions are samples from the target distribution.

I'm attaching a script and plots showing the distribution of post-burn-in samples (results of the accept/reject decision) vs all post-burn-in proposals for three target distributions, all using a standard normal distribution for proposals. If the proposal and target distribution are nearly the same, then the post-burn-in proposals would be a good representation of the target distribution, but otherwise they are not. For this reason, I would recommend that by default spotpy use the results of the accept/reject decision for downstream processing as opposed to all proposals.

Thanks, and please let me know if you have any questions.

[Attached plots (post-burn-in samples vs. all proposals for each target distribution): narrow_normal_plots, wide_normal_plots, lognormal_plots]

"""This script demonstrates the problem with using all proposals in a metropolis algorithm
as opposed to the results of the accept-reject decisions, the latter of which are samples
from the target distribution after the algorithm has reached a stationary state.
"""
from typing import Tuple

import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
from pathlib import Path
from tqdm import tqdm

# All examples below use 100,000 iterations, the first half of which are discarded as burn-in,
# and a standard normal distribution for making proposals.
NUM_ITERATIONS = 100_000
BURN_IN_PORTION = 0.5
PROPOSAL_DIST = stats.norm(loc=0, scale=1)

# Output directory for plots; please update as needed
OUTPUT_DIR = Path.home()


def metropolis_algorithm(
    target_dist: stats.rv_continuous,
    initial_state: float,
    proposal_dist: stats.rv_continuous = PROPOSAL_DIST,
    num_iterations: int = NUM_ITERATIONS,
    burn_in_portion: float = BURN_IN_PORTION,
    seed: int = 1234,
) -> Tuple[np.ndarray, np.ndarray]:
    """This function implements a Metropolis algorithm and returns results after burn-in

    Args:
        target_dist: Target distribution
        initial_state: State used to initialize the algorithm
        proposal_dist: Proposal distribution (must be symmetric)
        num_iterations: Number of iterations to run
        burn_in_portion: Proportion of iterations used for burn-in
        seed: Seed for generating proposals

    Returns: Tuple of (post_burn_in_samples, post_burn_in_proposals) where:
        post_burn_in_samples: Results of accept/reject decision after burn-in
        post_burn_in_proposals: All proposals after burn-in
    """
    np.random.seed(seed=seed)

    current_state = initial_state
    samples = np.zeros(num_iterations + 1)
    proposals = np.zeros(num_iterations + 1)
    samples[0] = current_state
    proposals[0] = current_state

    for i in tqdm(range(num_iterations), desc="Running Metropolis algorithm"):
        # Generate a proposal
        proposed_state = current_state + proposal_dist.rvs()
        proposals[i + 1] = proposed_state

        # Calculate the acceptance ratio
        current_loglik = target_dist.logpdf(current_state)
        proposed_loglik = target_dist.logpdf(proposed_state)

        acceptance_ratio = min(1, np.exp(proposed_loglik - current_loglik))

        # Accept or reject the proposal
        if np.random.random() < acceptance_ratio:
            current_state = proposed_state

        samples[i + 1] = current_state

    burn_in = int(num_iterations * burn_in_portion)

    return samples[burn_in:], proposals[burn_in:]


def plot_distributions_and_save_figure(
    post_burn_in_samples: np.ndarray,
    post_burn_in_proposals: np.ndarray,
    target_dist: stats.rv_continuous,
    title: str,
    file_name: str,
    output_dir: Path = OUTPUT_DIR,
) -> None:
    """This function plots the densities of the samples and proposals against the
    true target distribution and saves the results.

    Args:
        post_burn_in_samples: Results of accept/reject decision after burn-in
        post_burn_in_proposals: All proposals after burn-in
        target_dist: Target distribution
        title: Plot title
        file_name: File name for figure
        output_dir: Output directory for saving figure
    """
    x_min = min(post_burn_in_samples)
    x_max = max(post_burn_in_samples)
    x = np.linspace(x_min, x_max, 500)
    true_density = target_dist.pdf(x)

    _, axes = plt.subplots(ncols=2, figsize=(8.5, 5), sharey=True, sharex=True)
    axes[0].hist(post_burn_in_samples, bins=50, density=True)
    axes[0].plot(x, true_density, color="red")
    axes[0].set_title("Samples (result of accept/reject decision)")
    axes[0].set_ylabel("Density")
    axes[1].hist(post_burn_in_proposals, bins=50, density=True)
    axes[1].plot(x, true_density, color="red")
    axes[1].set_title("All proposals (including rejected proposals)")

    plt.suptitle(title)
    plt.tight_layout()
    plt.savefig(output_dir / file_name, dpi=600)
    plt.close()


if __name__ == "__main__":

    # Example 1: Target distribution is normal, but narrower than the proposal distribution
    target_dist = stats.norm(loc=0, scale=0.5)
    initial_state = 0
    post_burn_in_samples, post_burn_in_proposals = metropolis_algorithm(target_dist, initial_state)
    plot_distributions_and_save_figure(
        post_burn_in_samples=post_burn_in_samples,
        post_burn_in_proposals=post_burn_in_proposals,
        target_dist=target_dist,
        title=(
            "Distribution of post-burn-in samples vs proposals with true target distribution shown in red\n"
            "target = N(0, 0.25), proposal = N(0, 1)"
        ),
        file_name="narrow_normal_plots.png",
    )

    # Example 2: Target distribution is normal, but wider than the proposal distribution
    target_dist = stats.norm(loc=0, scale=2)
    initial_state = 0
    post_burn_in_samples, post_burn_in_proposals = metropolis_algorithm(target_dist, initial_state)
    plot_distributions_and_save_figure(
        post_burn_in_samples=post_burn_in_samples,
        post_burn_in_proposals=post_burn_in_proposals,
        target_dist=target_dist,
        title=(
            "Distribution of post-burn-in samples vs proposals with true target distribution shown in red\n"
            "target = N(0, 4), proposal = N(0, 1)"
        ),
        file_name="wide_normal_plots.png",
    )

    # Example 3: Target distribution is lognormal
    # NOTE on stats.lognorm parametrization: For X ~ N(μ, σ^2),
    # Y = exp(X) is lognormally distributed with s = σ, scale = exp(μ). See
    # https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.lognorm.html.
    target_dist = stats.lognorm(scale=1, s=1)
    initial_state = 1
    post_burn_in_samples, post_burn_in_proposals = metropolis_algorithm(target_dist, initial_state)
    plot_distributions_and_save_figure(
        post_burn_in_samples=post_burn_in_samples,
        post_burn_in_proposals=post_burn_in_proposals,
        target_dist=target_dist,
        title=(
            "Distribution of post-burn-in samples vs proposals with true target distribution shown in red\n"
            "target = log-normal(0, 1), proposal = N(0, 1)"
        ),
        file_name="lognormal_plots.png",
    )
