From dac9c047502c016e844d997b8bf31077e3f5fd53 Mon Sep 17 00:00:00 2001
From: Raphael Shirley
Date: Wed, 27 Mar 2024 22:32:43 +0100
Subject: [PATCH] Estimator runs through goldenspike notebook

Still issues with params. Performance seems comparable to BPZ. Added
example notebook.
---
 examples/LePhare_demo.ipynb          | 981 ++++++++++++++++++++++++++-
 src/rail/estimation/algos/lephare.py |  21 +-
 src/rail/estimation/algos/lsst.para  |   2 +-
 3 files changed, 993 insertions(+), 11 deletions(-)

diff --git a/examples/LePhare_demo.ipynb b/examples/LePhare_demo.ipynb
index 95e0c7c..dc72fb4 100644
--- a/examples/LePhare_demo.ipynb
+++ b/examples/LePhare_demo.ipynb
@@ -2,17 +2,992 @@
 "cells": [
  {
   "cell_type": "markdown",
+   "id": "c8fb0b8d",
   "metadata": {},
   "source": [
-    "# This is a placeholder notebook"
+    "# Goldenspike+lephare: an example of an end-to-end analysis using RAIL with the prototype rail_lephare wrapper\n",
+    "\n",
+    "**Authors:** Sam Schmidt, Eric Charles, Alex Malz, John Franklin Crenshaw, others...\n",
+    "\n",
+    "Modified from the [original](https://github.com/LSSTDESC/rail/blob/main/examples/goldenspike_examples/goldenspike.ipynb) by Raphael Shirley to include lephare.\n",
+    "\n",
+    "**Last run successfully:** March 27, 2024"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "minute-lender",
   "metadata": {},
   "source": [
    "This notebook is built on the main rail Goldenspike example in order to demonstrate adding in the rail_lephare wrapper.\n",
    "\n",
    "This notebook demonstrates how to use the various RAIL modules to draw synthetic samples of fluxes by color, apply physical effects to them, train photo-z estimators on the samples, test and validate the performance of those estimators, and use the RAIL summarization modules to obtain n(z) estimates based on the p(z) estimates.\n",
    "\n",
    "**Creation**\n",
    "\n",
    "Note that in the parlance of the Creation Module, \"degradation\" is any post-processing that occurs to the \"true\" sample generated by the create Engine. This can include adding photometric errors, applying quality cuts, introducing systematic biases, etc.\n",
    "\n",
    "In this notebook, we will draw both test and training samples from a RAIL Engine object. Then we will demonstrate how to use RAIL degraders to apply effects to those samples.\n",
    "\n",
    "**Training and Estimation**\n",
    "\n",
    "The RAIL Informer modules \"train\" or \"inform\" models used to estimate p(z) given band fluxes (and potentially other information).\n",
    "\n",
    "The RAIL Estimation modules then apply those models to extract the p(z) estimates.\n",
    "\n",
    "**p(z) Validation**\n",
    "\n",
    "The RAIL Validator module applies various metrics.\n",
    "\n",
    "**p(z) to n(z) Summarization**\n",
    "\n",
    "The RAIL Summarization modules convert per-galaxy p(z) posteriors to ensemble n(z) estimates."
" + ] + }, + { + "cell_type": "markdown", + "id": "banner-migration", + "metadata": {}, + "source": [ + "## Imports" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "supreme-dietary", + "metadata": {}, + "outputs": [], + "source": [ + "# Prerquisites: os, numpy, pathlib, pzflow, tables_io\n", + "import os\n", + "import numpy as np\n", + "from pathlib import Path\n", + "from pzflow.examples import get_galaxy_data\n", + "import tables_io\n", + "import datetime" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "material-funeral", + "metadata": {}, + "outputs": [], + "source": [ + "# Various rail modules\n", + "import rail\n", + "from rail.creation.degradation.lsst_error_model import LSSTErrorModel\n", + "from rail.creation.degradation.spectroscopic_degraders import (\n", + " InvRedshiftIncompleteness,\n", + " LineConfusion,\n", + ")\n", + "from rail.creation.degradation.quantityCut import QuantityCut\n", + "from rail.creation.engines.flowEngine import FlowModeler, FlowCreator, FlowPosterior\n", + "from rail.core.data import TableHandle\n", + "from rail.core.stage import RailStage\n", + "from rail.core.util_stages import ColumnMapper, TableConverter\n", + "\n", + "from rail.estimation.algos.bpz_lite import BPZliteInformer, BPZliteEstimator\n", + "from rail.estimation.algos.k_nearneigh import KNearNeighInformer, KNearNeighEstimator\n", + "from rail.estimation.algos.flexzboost import FlexZBoostInformer, FlexZBoostEstimator\n", + "\n", + "from rail.estimation.algos.naive_stack import NaiveStackSummarizer\n", + "from rail.estimation.algos.point_est_hist import PointEstHistSummarizer\n", + "\n", + "from rail.evaluation.evaluator import Evaluator\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f2fa696d-60d9-46dc-b4c5-e6db9552cfcf", + "metadata": {}, + "outputs": [], + "source": [ + "#This folder must contain the full filters and SED templates. These are not currently in the elphare_dev package\n", + "#They can be downloaded from the official repo https://gitlab.lam.fr/Galaxies/LEPHARE\n", + "LEPHAREDIR = \"/Users/rshirley/Documents/github/LEPHARE\" #lincc/lephare\"\n", + "os.environ['LEPHAREDIR']=LEPHAREDIR #os.path.abspath(\"..\")\n", + "os.environ['LEPHAREWORK']='WORK'\n", + "import lephare as lp\n", + "from rail.estimation.algos.lephare import LephareInformer, LephareEstimator\n", + "#This is required when defining the classes locally\n", + "__file__ = os.path.join(os.getcwd(), \"goldenspike_lephare.ipynb\")" + ] + }, + { + "cell_type": "markdown", + "id": "scheduled-chamber", + "metadata": {}, + "source": [ + "RAIL now uses ceci as a back-end, which takes care of a lot of file I/O decisions to be consistent with other choices in DESC.\n", + "\n", + "The data_store commands in the cell below effectively override a ceci default to prevent overwriting previous results, generally good but not necessary for this demo.\n", + "\n", + "The `DataStore` uses `DataHandle` objects to keep track of the connections between the various stages. When one stage returns a `DataHandle` and then you pass that `DataHandle` to another stage, the underlying code can establish the connections needed to build a reproducilble pipeline. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "transparent-worship", + "metadata": {}, + "outputs": [], + "source": [ + "DS = RailStage.data_store\n", + "DS.__class__.allow_overwrite = True" + ] + }, + { + "cell_type": "markdown", + "id": "brief-institution", + "metadata": {}, + "source": [ + "Here we need a few configuration parameters to deal with differences in data schema between existing PZ codes." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "finnish-southeast", + "metadata": {}, + "outputs": [], + "source": [ + "bands = [\"u\", \"g\", \"r\", \"i\", \"z\", \"y\"]\n", + "band_dict = {band: f\"mag_{band}_lsst\" for band in bands}\n", + "rename_dict = {f\"mag_{band}_lsst_err\": f\"mag_err_{band}_lsst\" for band in bands}\n" + ] + }, + { + "cell_type": "markdown", + "id": "66494399", + "metadata": {}, + "source": [ + "## Train the Flow Engine\n", + "\n", + "First we need to train the normalizing flow that will serve as the engine for the notebook.\n", + "\n", + "In the cell below, we load the example galaxy catalog from PZFlow and save it so that it can be used to train the flow. We also set the path where we will save the flow." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aaa5f61a", + "metadata": {}, + "outputs": [], + "source": [ + "DATA_DIR = Path().resolve() / \"data\"\n", + "DATA_DIR.mkdir(exist_ok=True)\n", + "\n", + "catalog_file = DATA_DIR / \"base_catalog.pq\"\n", + "catalog = get_galaxy_data().rename(band_dict, axis=1)\n", + "tables_io.write(catalog, str(catalog_file.with_suffix(\"\")), catalog_file.suffix[1:])\n", + "\n", + "catalog_file = str(catalog_file)\n", + "flow_file = str(DATA_DIR / \"trained_flow.pkl\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "0cd8b319", + "metadata": {}, + "source": [ + "Now we set the parameters for the FlowModeler, i.e. the pipeline stage that trains the flow:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "424f2893", + "metadata": {}, + "outputs": [], + "source": [ + "flow_modeler_params = {\n", + " \"name\": \"flow_modeler\",\n", + " \"input\": catalog_file,\n", + " \"model\": flow_file,\n", + " \"seed\": 0,\n", + " \"phys_cols\": {\"redshift\": [0, 3]},\n", + " \"phot_cols\": {\n", + " \"mag_u_lsst\": [17, 35],\n", + " \"mag_g_lsst\": [16, 32],\n", + " \"mag_r_lsst\": [15, 30],\n", + " \"mag_i_lsst\": [15, 30],\n", + " \"mag_z_lsst\": [14, 29],\n", + " \"mag_y_lsst\": [14, 28],\n", + " },\n", + " \"calc_colors\": {\"ref_column_name\": \"mag_i_lsst\"},\n", + "}\n" + ] + }, + { + "cell_type": "markdown", + "id": "2368b6b2", + "metadata": {}, + "source": [ + "Now we will create the flow and train it" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9a069fda", + "metadata": {}, + "outputs": [], + "source": [ + "flow_modeler = FlowModeler.make_stage(**flow_modeler_params)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "55d1b8d9", + "metadata": {}, + "outputs": [], + "source": [ + "flow_modeler.fit_model()\n" + ] + }, + { + "cell_type": "markdown", + "id": "nonprofit-interference", + "metadata": {}, + "source": [ + "## Make mock data\n", + "\n", + "Now we will use the trained flow to create training and test data for the photo-z estimators.\n", + "\n", + "For both the training and test data we will:\n", + "\n", + "1. Use the Flow to produce some synthetic data\n", + "2. Use the LSSTErrorModel to add photometric errors\n", + "3. 
    "4. Use the ColumnMapper to rename the error columns so that they match the names in DC2.\n",
    "5. Use the TableConverter to convert the data to a numpy dictionary, which will be stored in an hdf5 file with the same schema as the DC2 data\n",
    "\n",
    "### Training sample\n",
    "\n",
    "For the training data we are going to apply a couple of extra degradation effects beyond those applied to the test data, because the training data will have some spectroscopic incompleteness. This will allow us to see how the trained models perform with imperfect training data.\n",
    "\n",
    "More details about the degraders are available in the `rail/examples/creation_examples/degradation_demo.ipynb` notebook.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "political-member",
   "metadata": {},
   "outputs": [],
   "source": [
    "flow_creator_train = FlowCreator.make_stage(\n",
    "    name=\"flow_creator_train\",\n",
    "    model=flow_modeler.get_handle(\"model\"),\n",
    "    n_samples=50,\n",
    "    seed=1235,\n",
    ")\n",
    "\n",
    "lsst_error_model_train = LSSTErrorModel.make_stage(\n",
    "    name=\"lsst_error_model_train\",\n",
    "    renameDict=band_dict,\n",
    "    ndFlag=np.nan,\n",
    "    seed=29,\n",
    ")\n",
    "\n",
    "inv_redshift = InvRedshiftIncompleteness.make_stage(\n",
    "    name=\"inv_redshift\",\n",
    "    pivot_redshift=1.0,\n",
    ")\n",
    "\n",
    "line_confusion = LineConfusion.make_stage(\n",
    "    name=\"line_confusion\",\n",
    "    true_wavelen=5007.0,\n",
    "    wrong_wavelen=3727.0,\n",
    "    frac_wrong=0.05,\n",
    ")\n",
    "\n",
    "quantity_cut = QuantityCut.make_stage(\n",
    "    name=\"quantity_cut\",\n",
    "    cuts={\"mag_i_lsst\": 25.0},\n",
    ")\n",
    "\n",
    "col_remapper_train = ColumnMapper.make_stage(\n",
    "    name=\"col_remapper_train\",\n",
    "    columns=rename_dict,\n",
    ")\n",
    "\n",
    "table_conv_train = TableConverter.make_stage(\n",
    "    name=\"table_conv_train\",\n",
    "    output_format=\"numpyDict\",\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "simple-bundle",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_data_orig = flow_creator_train.sample(150, 1235)\n",
    "train_data_errs = lsst_error_model_train(train_data_orig, seed=66)\n",
    "train_data_inc = inv_redshift(train_data_errs)\n",
    "train_data_conf = line_confusion(train_data_inc)\n",
    "train_data_cut = quantity_cut(train_data_conf)\n",
    "train_data_pq = col_remapper_train(train_data_cut)\n",
    "train_data = table_conv_train(train_data_pq)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "above-portable",
   "metadata": {},
   "source": [
    "Let's examine the quantities we've generated. We'll use the handy `tables_io` package to convert temporarily to a pandas DataFrame for a quick look at the columns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "functional-dollar",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_table = tables_io.convertObj(train_data.data, tables_io.types.PD_DATAFRAME)\n",
    "train_table.head()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "clinical-pavilion",
   "metadata": {},
   "source": [
    "You can see that we've generated redshifts, ugrizy magnitudes, and magnitude errors with names that match those in the cosmoDC2_v1.1.4_image data."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "square-breeding",
   "metadata": {},
   "source": [
    "### Testing sample\n",
    "\n",
    "For the test sample we will:\n",
    "\n",
    "1. Use the Flow to produce some synthetic data\n",
    "2. Use the LSSTErrorModel to smear the data\n",
    "3. Use the FlowPosterior to estimate the redshift posteriors for the degraded sample\n",
    "4. Use ColumnMapper to rename some of the columns to match DC2\n",
    "5. Use the TableConverter to convert the data to a numpy dictionary, which will be stored in an hdf5 file with the same schema as the DC2 data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "breathing-deficit",
   "metadata": {},
   "outputs": [],
   "source": [
    "flow_creator_test = FlowCreator.make_stage(\n",
    "    name=\"flow_creator_test\",\n",
    "    model=flow_modeler.get_handle(\"model\"),\n",
    "    n_samples=50,\n",
    ")\n",
    "\n",
    "lsst_error_model_test = LSSTErrorModel.make_stage(\n",
    "    name=\"lsst_error_model_test\",\n",
    "    renameDict=band_dict,\n",
    "    ndFlag=np.nan,\n",
    ")\n",
    "\n",
    "flow_post_test = FlowPosterior.make_stage(\n",
    "    name=\"flow_post_test\",\n",
    "    model=flow_modeler.get_handle(\"model\"),\n",
    "    column=\"redshift\",\n",
    "    grid=np.linspace(0.0, 5.0, 21),\n",
    ")\n",
    "\n",
    "col_remapper_test = ColumnMapper.make_stage(\n",
    "    name=\"col_remapper_test\",\n",
    "    columns=rename_dict,\n",
    "    hdf5_groupname=\"\",\n",
    ")\n",
    "\n",
    "table_conv_test = TableConverter.make_stage(\n",
    "    name=\"table_conv_test\",\n",
    "    output_format=\"numpyDict\",\n",
    ")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "educational-windows",
   "metadata": {},
   "outputs": [],
   "source": [
    "test_data_orig = flow_creator_test.sample(150, 1234)\n",
    "test_data_errs = lsst_error_model_test(test_data_orig, seed=58)\n",
    "test_data_post = flow_post_test.get_posterior(test_data_errs, err_samples=None)\n",
    "test_data_pq = col_remapper_test(test_data_errs)\n",
    "test_data = table_conv_test(test_data_pq)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "inclusive-effect",
   "metadata": {},
   "outputs": [],
   "source": [
    "test_table = tables_io.convertObj(test_data.data, tables_io.types.PD_DATAFRAME)\n",
    "test_table.head()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "formal-camping",
   "metadata": {},
   "source": [
    "## \"Inform\" some estimators\n",
    "\n",
    "More details about the process of \"informing\" or \"training\" the models used by the estimators are available in the `rail/examples/estimation_examples/RAIL_estimation_demo.ipynb` notebook.\n",
    "\n",
    "We use \"inform\" rather than \"train\" to generically refer to the preprocessing of any prior information.\n",
    "For a machine learning estimator, that prior information is a training set, but it can also be an SED template library for a template-fitting or hybrid estimator."
   ]
  },
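  {
   "cell_type": "markdown",
   "id": "sample-size-note",
   "metadata": {},
   "source": [
    "Before informing the estimators, a quick sanity check on the mock data: both samples were drawn with 150 objects, but the incompleteness, line-confusion, and magnitude-cut degraders act only on the training side, so the training sample should come out noticeably smaller. (This cell is illustrative only and uses just the handles defined above.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "sample-size-check",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compare the degraded training sample to the test sample.\n",
    "# Both started from 150 draws; only the training side was degraded and cut.\n",
    "n_train = len(train_data.data[\"redshift\"])\n",
    "n_test = len(test_data.data[\"redshift\"])\n",
    "print(f\"training galaxies: {n_train}, test galaxies: {n_test}\")"
   ]
  },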
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "accredited-circle", + "metadata": {}, + "outputs": [], + "source": [ + "inform_bpz = BPZliteInformer.make_stage(\n", + " name=\"inform_bpz\",\n", + " nondetect_val=np.nan,\n", + " model=\"bpz.pkl\",\n", + " hdf5_groupname=\"\",\n", + ")\n", + "\n", + "inform_knn = KNearNeighInformer.make_stage(\n", + " name=\"inform_knn\",\n", + " nondetect_val=np.nan,\n", + " model=\"knnpz.pkl\",\n", + " hdf5_groupname=\"\",\n", + ")\n", + "\n", + "inform_fzboost = FlexZBoostInformer.make_stage(\n", + " name=\"inform_FZBoost\",\n", + " nondetect_val=np.nan,\n", + " model=\"fzboost.pkl\",\n", + " hdf5_groupname=\"\",\n", + ")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6b86573b-37c4-453c-a400-0dc610bdf24f", + "metadata": {}, + "outputs": [], + "source": [ + "inform_lephare = LephareInformer.make_stage(\n", + " name=\"inform_Lephare\",\n", + " nondetect_val=np.nan,\n", + " model=\"lephare.pkl\",\n", + " hdf5_groupname=\"\",\n", + " #QSO_dict={'typ':'QSO',...},\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e9fcd395-b5ea-4f9b-97b8-acf25e55662e", + "metadata": {}, + "outputs": [], + "source": [ + "train_data_errs.data.keys()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "great-verification", + "metadata": {}, + "outputs": [], + "source": [ + "inform_bpz.inform(train_data)\n", + "inform_knn.inform(train_data)\n", + "inform_fzboost.inform(train_data)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a5be863a-3b07-47fa-a226-777dde26dc3d", + "metadata": {}, + "outputs": [], + "source": [ + "len(train_data.data['redshift'])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "42c13744-c2f8-4702-829e-850e13f2c7a6", + "metadata": {}, + "outputs": [], + "source": [ + "inform_lephare.inform(train_data)" + ] + }, + { + "cell_type": "markdown", + "id": "colonial-trailer", + "metadata": {}, + "source": [ + "## Estimate photo-z posteriors\n", + "\n", + "More detail on the specific estimators used here is available in the `rail/examples/estimation_examples/RAIL_estimation_demo.ipynb` notebook, but here is a very brief summary of the three estimators used in this notebook:\n", + "\n", + "`BPZliteEstimator` is a template-based photo-z code that outputs the posterior estimated given likelihoods calculated using a template set combined with a Bayesian prior. See Benitez (2000) for more details.
    "\n",
    "`KNearNeighEstimator` is a simple photo-z code that finds the K nearest neighbor training galaxies in color/magnitude space and creates a weighted (by distance) mixture model PDF based on the redshifts of those K neighbors.\n",
    "\n",
    "`FlexZBoostEstimator` is a mature photo-z algorithm that estimates a PDF for each galaxy via a conditional density estimate using the training data. See [Izbicki & Lee (2017)](https://doi.org/10.1214/17-EJS1302) for more details.\n",
    "\n",
    "`LephareEstimator` wraps the LePhare template-fitting code: it fits a library of SED templates to the observed fluxes and returns the resulting redshift posterior."
   ]
  },
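  {
   "cell_type": "markdown",
   "id": "knn-toy-note",
   "metadata": {},
   "source": [
    "To build intuition for the KNN approach, the toy sketch below (plain numpy, *not* the actual `KNearNeighEstimator` implementation; all numbers are made up) turns K neighbor redshifts and color-space distances into a distance-weighted Gaussian mixture p(z):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "knn-toy-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy sketch of the idea behind KNearNeighEstimator (not its real code):\n",
    "# weight each neighbor redshift by inverse distance, then smooth with a\n",
    "# fixed-width Gaussian kernel to get a per-galaxy p(z).\n",
    "z_neighbors = np.array([0.41, 0.44, 0.39, 0.62, 0.43])  # made-up neighbor redshifts\n",
    "distances = np.array([0.10, 0.12, 0.15, 0.40, 0.18])  # made-up color-space distances\n",
    "weights = 1.0 / distances\n",
    "weights /= weights.sum()\n",
    "\n",
    "zgrid_toy = np.linspace(0, 3, 301)\n",
    "sigma = 0.05  # arbitrary kernel width\n",
    "pdf_toy = sum(\n",
    "    w * np.exp(-0.5 * ((zgrid_toy - z) / sigma) ** 2)\n",
    "    for w, z in zip(weights, z_neighbors)\n",
    ")\n",
    "pdf_toy /= np.trapz(pdf_toy, zgrid_toy)  # normalize to unit integral"
   ]
  },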
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "electric-minute",
   "metadata": {},
   "outputs": [],
   "source": [
    "estimate_bpz = BPZliteEstimator.make_stage(\n",
    "    name=\"estimate_bpz\",\n",
    "    hdf5_groupname=\"\",\n",
    "    nondetect_val=np.nan,\n",
    "    model=inform_bpz.get_handle(\"model\"),\n",
    ")\n",
    "\n",
    "estimate_knn = KNearNeighEstimator.make_stage(\n",
    "    name=\"estimate_knn\",\n",
    "    hdf5_groupname=\"\",\n",
    "    nondetect_val=np.nan,\n",
    "    model=inform_knn.get_handle(\"model\"),\n",
    ")\n",
    "\n",
    "estimate_fzboost = FlexZBoostEstimator.make_stage(\n",
    "    name=\"test_FZBoost\",\n",
    "    nondetect_val=np.nan,\n",
    "    model=inform_fzboost.get_handle(\"model\"),\n",
    "    hdf5_groupname=\"\",\n",
    "    aliases=dict(input=\"test_data\", output=\"fzboost_estim\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "average-rental",
   "metadata": {},
   "outputs": [],
   "source": [
    "knn_estimated = estimate_knn.estimate(test_data)\n",
    "fzboost_estimated = estimate_fzboost.estimate(test_data)\n",
    "bpz_estimated = estimate_bpz.estimate(test_data)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c500f024-2d4b-4cbe-ab07-fac0d69cad54",
   "metadata": {},
   "outputs": [],
   "source": [
    "estimate_lephare = LephareEstimator.make_stage(\n",
    "    name=\"test_Lephare\",\n",
    "    nondetect_val=np.nan,\n",
    "    model=inform_lephare.get_handle(\"model\"),\n",
    "    hdf5_groupname=\"\",\n",
    "    aliases=dict(input=\"test_data\", output=\"lephare_estim\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dccd6f85-40bc-4385-9739-18b8e9db9b41",
   "metadata": {},
   "outputs": [],
   "source": [
    "lephare_estimated = estimate_lephare.estimate(test_data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "right-mystery",
   "metadata": {},
   "source": [
    "## Evaluate the estimates\n",
    "\n",
    "Now we evaluate metrics on the estimates, separately for each estimator.\n",
    "\n",
    "Each call to `Evaluator.evaluate` will create a table of performance metrics.\n",
    "We will store all of these tables in a dictionary, keyed by the name of the estimator."
   ]
  },
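  {
   "cell_type": "markdown",
   "id": "metric-toy-note",
   "metadata": {},
   "source": [
    "As a reference for reading those tables, here is one common photo-z point metric computed by hand on made-up values: the standardized residual e_z = (z_phot - z_true) / (1 + z_true), whose mean and standard deviation give a bias and a scatter. (The `Evaluator` below reports its own set of metrics; this toy cell is only for intuition.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "metric-toy-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy example of a standard photo-z point metric (values are made up).\n",
    "z_true_toy = np.array([0.30, 0.55, 1.10, 0.80])\n",
    "z_phot_toy = np.array([0.28, 0.60, 1.00, 0.85])\n",
    "e_z = (z_phot_toy - z_true_toy) / (1 + z_true_toy)\n",
    "print(\"bias:\", e_z.mean(), \" scatter:\", e_z.std())"
   ]
  },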
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "portuguese-graduate", + "metadata": {}, + "outputs": [], + "source": [ + "eval_dict = dict(\n", + " bpz=bpz_estimated, fzboost=fzboost_estimated, \n", + " knn=knn_estimated, lephare=lephare_estimated)\n", + "truth = test_data_orig\n", + "\n", + "result_dict = {}\n", + "for key, val in eval_dict.items():\n", + " the_eval = Evaluator.make_stage(name=f\"{key}_eval\", truth=truth)\n", + " result_dict[key] = the_eval.evaluate(val, truth)\n" + ] + }, + { + "cell_type": "markdown", + "id": "danish-miller", + "metadata": {}, + "source": [ + "The Pandas DataFrame output format conveniently makes human-readable printouts of the metrics. \n", + "This next cell will convert everything to Pandas." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "constant-peripheral", + "metadata": {}, + "outputs": [], + "source": [ + "results_tables = {\n", + " key: tables_io.convertObj(val.data, tables_io.types.PD_DATAFRAME)\n", + " for key, val in result_dict.items()\n", + "}\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "expanded-fellowship", + "metadata": {}, + "outputs": [], + "source": [ + "results_tables[\"knn\"]\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "normal-alexandria", + "metadata": {}, + "outputs": [], + "source": [ + "results_tables[\"fzboost\"]\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "satisfied-intelligence", + "metadata": {}, + "outputs": [], + "source": [ + "results_tables[\"bpz\"]\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "baf9d2ee-8dba-455c-9daf-bc9c6c0cd976", + "metadata": {}, + "outputs": [], + "source": [ + "results_tables[\"lephare\"]" + ] + }, + { + "cell_type": "markdown", + "id": "grave-speaking", + "metadata": {}, + "source": [ + "## Summarize the per-galaxy redshift constraints to make population-level distributions\n", + "\n", + "{introduce the summarizers}\n", + "\n", + "First we make the stages, then execute them, then plot the output." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "white-replacement", + "metadata": {}, + "outputs": [], + "source": [ + "point_estimate_test = PointEstHistSummarizer.make_stage(name=\"point_estimate_test\")\n", + "naive_stack_test = NaiveStackSummarizer.make_stage(name=\"naive_stack_test\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "korean-guess", + "metadata": {}, + "outputs": [], + "source": [ + "point_estimate_ens = point_estimate_test.summarize(eval_dict[\"lephare\"])\n", + "naive_stack_ens = naive_stack_test.summarize(eval_dict[\"lephare\"])\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "focused-puppy", + "metadata": {}, + "outputs": [], + "source": [ + "_ = naive_stack_ens.data.plot_native(xlim=(0, 3))\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "worthy-croatia", + "metadata": {}, + "outputs": [], + "source": [ + "_ = point_estimate_ens.data.plot_native(xlim=(0, 3))\n" + ] + }, + { + "cell_type": "markdown", + "id": "medical-preview", + "metadata": {}, + "source": [ + "## Convert this to a `ceci` Pipeline\n", + "\n", + "Now that we have all these stages defined and configured, and that we have established the connections between them by passing `DataHandle` objects between them, we can build a `ceci` Pipeline.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "neural-central", + "metadata": {}, + "outputs": [], + "source": [ + "import ceci\n", + "\n", + "pipe = ceci.Pipeline.interactive()\n", + "stages = [\n", + " # train the flow\n", + " flow_modeler,\n", + " # create the training catalog\n", + " flow_creator_train,\n", + " lsst_error_model_train,\n", + " inv_redshift,\n", + " line_confusion,\n", + " quantity_cut,\n", + " col_remapper_train,\n", + " table_conv_train,\n", + " # create the test catalog\n", + " flow_creator_test,\n", + " lsst_error_model_test,\n", + " col_remapper_test,\n", + " table_conv_test,\n", + " # inform the estimators\n", + " inform_bpz,\n", + " inform_knn,\n", + " inform_fzboost,\n", + " inform_lephare,\n", + " # estimate posteriors\n", + " estimate_bpz,\n", + " estimate_knn,\n", + " estimate_fzboost,\n", + " estimate_lephare,\n", + " # estimate n(z), aka \"summarize\"\n", + " point_estimate_test,\n", + " naive_stack_test,\n", + "]\n", + "for stage in stages:\n", + " pipe.add_stage(stage)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "packed-chosen", + "metadata": {}, + "outputs": [], + "source": [ + "pipe.initialize(\n", + " dict(input=catalog_file), dict(output_dir=\".\", log_dir=\".\", resume=False), None\n", + ")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "academic-romance", + "metadata": {}, + "outputs": [], + "source": [ + "pipe.save(\"tmp_goldenspike.yml\")\n" + ] + }, + { + "cell_type": "markdown", + "id": "younger-testament", + "metadata": {}, + "source": [ + "### Read back the pipeline and run it" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "israeli-workshop", + "metadata": {}, + "outputs": [], + "source": [ + "pr = ceci.Pipeline.read(\"tmp_goldenspike.yml\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "danish-homeless", + "metadata": {}, + "outputs": [], + "source": [ + "pr.run()\n" + ] + }, + { + "cell_type": "markdown", + "id": "informational-performer", + "metadata": {}, + "source": [ + "## Clean up:\n", + "\n", + "Finally, you'll notice that we've written a large number of temporary files in the 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "racial-stocks",
   "metadata": {},
   "outputs": [],
   "source": [
    "# TODO fix and add clean up scripts\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "lincc",
   "language": "python",
   "name": "lincc"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
diff --git a/src/rail/estimation/algos/lephare.py b/src/rail/estimation/algos/lephare.py
index 4d0400c..de7d8bc 100644
--- a/src/rail/estimation/algos/lephare.py
+++ b/src/rail/estimation/algos/lephare.py
@@ -30,7 +30,9 @@ def __init__(self, args, comm=None):
         """Init function, init config stuff (COPIED from rail_bpz)"""
         CatInformer.__init__(self, args, comm=comm)
         # Default local parameters
-        self.config_file = "lsst.para"
+        self.config_file = "{}/{}".format(
+            os.path.dirname(os.path.abspath(__file__)), "lsst.para"
+        )
         self.lephare_config = lp.read_config(self.config_file)

     def _set_config(self, lephare_config):
@@ -147,14 +149,17 @@ class LephareEstimator(CatEstimator):

     def __init__(self, args, comm=None):
         CatEstimator.__init__(self, args, comm=comm)
+
         # Default local parameters
-        self.config_file = "/Users/rshirley/Documents/github/lincc/rail_lephare/src/rail/estimation/algos/lsst.para"
+        self.config_file = "{}/{}".format(
+            os.path.dirname(os.path.abspath(__file__)), "lsst.para"
+        )
         self.lephare_config = lp.read_config(self.config_file)
         self.photz = lp.PhotoZ(self.lephare_config)

-    def open_model(self, **kwargs):
-        CatEstimator.open_model(self, **kwargs)
-        self.modeldict = self.model
+    # def open_model(self, **kwargs):
+    #     CatEstimator.open_model(self, **kwargs)
+    #     self.modeldict = self.model

     def _estimate_pdf(self, onesource):
         """Return the pdf of a single source.

         Do we want to resample on RAIL z grid?
""" # Check this is the best way to access pdf - pdf = onesource.pdfmap[len(onesource.pdfmap) - 1] + pdf = onesource.pdfmap[11] # 11 = Bayesian galaxy redshift # return the PDF as an array alongside lephare native zgrid return np.array(pdf.vPDF), np.array(pdf.xaxis) @@ -200,6 +206,7 @@ def _process_chunk(self, start, end, data, first): a0, a1 = self.photz.run_autoadapt(srclist) offsets = ",".join(np.array(a0).astype(str)) offsets = "# Offsets from auto-adapt: " + offsets + "\n" + print(offsets) photozlist = [] for i in range(ng): @@ -221,5 +228,5 @@ def _process_chunk(self, start, end, data, first): zmean[i] = (zgrid * pdfs[i]).sum() / pdfs[i].sum() qp_dstn = qp.Ensemble(qp.interp, data=dict(xvals=zgrid, yvals=np.array(pdfs))) - + qp_dstn.set_ancil(dict(zmode=zmode, zmean=zmean)) self._do_chunk_output(qp_dstn, start, end, first) diff --git a/src/rail/estimation/algos/lsst.para b/src/rail/estimation/algos/lsst.para index dc20567..85ff634 100644 --- a/src/rail/estimation/algos/lsst.para +++ b/src/rail/estimation/algos/lsst.para @@ -29,7 +29,7 @@ AGE_RANGE 0.,15.e9 # Age Min-Max in yr # FILTER_REP $LEPHAREDIR/filt # Repository in which the filters are stored -FILTER_LIST lsst/filter_u.dat,lsst/filter_g.dat,lsst/filter_r.dat,lsst/filter_i.dat,lsst/filter_z.dat,lsst/filter_y.dat +FILTER_LIST lsst/total_u.pb,lsst/total_g.pb,lsst/total_r.pb,lsst/total_i.pb,lsst/total_z.pb,lsst/total_y3.pb TRANS_TYPE 1 # TRANSMISSION TYPE # 0[-def]: Energy, 1: Nb of photons FILTER_CALIB 0,0,0,0,0,0 # 0[-def]: fnu=ctt