Following the City’s 2021 migration of procurement data into a Salesforce database, we assessed the social equity of the City’s choice of business partners, the first such assessment in over 20 years.
- Changing the Google Sheet's column names or tab names breaks Tableau's ability to pull the data. If you change the scripts or the Google Sheet, check the Tableau workbook and make sure it still loads correctly.
- Changing the order of the Google Sheet's tabs changes where these scripts write their output. Never reorder the tabs; if you want a new tab, append it to the end of the Sheet (see the sketch after this list for why the order matters).
- If you make changes to the notebooks, make sure those changes are reflected in the corresponding `.py` file in the `src` folder. We keep the notebooks and scripts in sync so that future maintainers and developers can debug the scripts through Jupyter Notebook; easier debugging is the only reason the notebooks exist, so feel free to skip them and debug the Python scripts directly.
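To see why the tab order matters, here is a minimal sketch of how a script might write to the Sheet, assuming it uses the `gspread` library and looks worksheets up by position; the credentials file, spreadsheet key, and tab layout are hypothetical:

```python
import gspread

# Authenticate with a service account (credentials path is a placeholder).
gc = gspread.service_account(filename="service_account.json")

# Open the spreadsheet by key (placeholder key).
sheet = gc.open_by_key("YOUR_SPREADSHEET_KEY")

# Worksheets are looked up by their position in the tab bar. If the tabs are
# reordered, index 0 silently points at a different tab and the script
# writes over the wrong data.
summary_tab = sheet.get_worksheet(0)  # assumed: first tab holds summary figures
detail_tab = sheet.get_worksheet(1)   # assumed: second tab holds detail rows

# Appending a new tab at the end leaves the existing indices untouched,
# which is why new tabs should always go last.
sheet.add_worksheet(title="new-analysis", rows=100, cols=20)

# Tableau, in turn, pulls data by tab name and column header, so renaming
# either one breaks the dashboard even though this script keeps running.
summary_tab.update(range_name="A1", values=[["last_refreshed", "2021-12-31"]])
```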
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data <- A directory for local data.
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
│
├── conda-requirements.txt <- The requirements file for conda installs.
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── setup.py <- makes project pip installable (pip install -e .) so src can be imported
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── data <- Scripts to download or generate data
│ ├── features <- Scripts to turn raw data into features for modeling
│ ├── models <- Scripts to train models and then use trained models to make predictions
│ └── visualization <- Scripts to create exploratory and results oriented visualizations
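The `setup.py` entry above is what makes the `src` code importable: running `pip install -e .` from the repository root performs an editable install that links this working copy onto the Python path. A minimal check of that setup (the note about subpackages assumes each folder under `src` ships its own `__init__.py`, as in the cookiecutter template):

```python
# Run once from the repository root:
#   pip install -e .
#
# The editable install links this working copy onto the Python path, so any
# edits under src/ take effect without reinstalling the package.
import src

print(src.__file__)  # resolves back to the working copy, not site-packages

# The subpackages mirror the folders in the tree above (data, features,
# models, visualization), assuming each one has its own __init__.py.
```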
- Sign in with credentials. More details on getting started here.
- Launch a new terminal and clone the repository:
  `git clone https://github.com/CityOfLosAngeles/REPO-NAME.git`
- Change into the directory:
  `cd REPO-NAME`
- Make a new branch and start on a new task:
  `git checkout -b new-branch`
- Start with Steps 1-2 above
- Build the Docker container:
  `docker-compose.exe build`
- Start the Docker container:
  `docker-compose.exe up`
- Open Jupyter Lab by navigating to `localhost:8888/lab/` in the browser.
- Create a conda environment:
  `conda create --name my_project_name`
- Activate the environment:
  `source activate my_project_name`
- Install the conda requirements:
  `conda install --file conda-requirements.txt -c conda-forge`
- Install the pip requirements:
  `pip install -r requirements.txt`
Project based on the cookiecutter data science project template. #cookiecutterdatascience