Commit 6ada5a0

Update README.md

1 parent 49c001c commit 6ada5a0

1 file changed: README.md (+74 −192 lines)
@@ -1,193 +1,75 @@
# LEAP Template Feedstock
This repository serves as a template, documentation, and testing ground for LEAP-Pangeo Data Library Feedstocks.

## Setup
Every dataset that is part of the LEAP-Pangeo Data Library is represented by a repository. You can easily create one by following the instructions below.

### Use this template
- Click the button on the top left to use this repository as a template for your new feedstock.

<img width="749" alt="image" src="https://github.com/leap-stc/proto_feedstock/assets/14314623/c786b2c7-adf1-4d4c-9811-0c7a1aa9228c">

>[!IMPORTANT]
> - Make the repo public.
> - Make sure to create the repo under the `leap-stc` GitHub organization, not your personal account!
> - Name your feedstock according to your data: `<your_data>_feedstock`.
> - Optional but encouraged: give the `leap-stc/data-and-compute` team admin access (so we can make changes without bothering you). To do so, go to `Settings > Collaborators and Teams > Add Teams` and add `leap-stc/data-and-compute` with the admin role.
>
> If you make a mistake here, it is not a huge problem. All of these settings can be changed after you have created the repo.

- Now you can check out the repository locally.

> [!NOTE]
> The instructions below are specific to testing recipes locally while downloading and producing data on GCS cloud buckets. If you are running the recipes fully locally, you have to minimally modify some of the steps as noted below.

### Are you linking or ingesting data?
If the data you want to work with is already available in ARCO format in a publicly accessible cloud bucket, you can simply link it and add it to the LEAP catalog.

If you want to transform your dataset from e.g. a bunch of netCDF files into a Zarr store, you have to build a Pangeo-Forge recipe.

<details>
<summary>

#### Linking existing ARCO datasets

</summary>

To link an existing dataset, all you need to do is modify `feedstock/meta.yaml` and `feedstock/catalog.yaml`. Enter the information about the dataset in `feedstock/meta.yaml` and then add corresponding entries (the `id` parameter has to match) in `feedstock/catalog.yaml`, where the url can point to any publicly available cloud storage.

<details>
<summary> Example from the [`arco-era5_feedstock`](https://github.com/leap-stc/arco-era5_feedstock): </summary>

`meta.yaml`

```yaml
title: "ARCO ERA5"
description: >
  Analysis-Ready, Cloud Optimized ERA5 data ingested by Google Research
recipes:
  - id: "0_25_deg_pressure_surface_levels"
  - id: "0_25_deg_model_levels"
provenance:
  providers:
    - name: "Google Research"
      description: >
        Hersbach, H., Bell, B., Berrisford, P., Hirahara, S., Horányi, A.,
        Muñoz‐Sabater, J., Nicolas, J., Peubey, C., Radu, R., Schepers, D.,
        Simmons, A., Soci, C., Abdalla, S., Abellan, X., Balsamo, G.,
        Bechtold, P., Biavati, G., Bidlot, J., Bonavita, M., De Chiara, G.,
        Dahlgren, P., Dee, D., Diamantakis, M., Dragani, R., Flemming, J.,
        Forbes, R., Fuentes, M., Geer, A., Haimberger, L., Healy, S.,
        Hogan, R.J., Hólm, E., Janisková, M., Keeley, S., Laloyaux, P.,
        Lopez, P., Lupu, C., Radnoti, G., de Rosnay, P., Rozum, I., Vamborg, F.,
        Villaume, S., Thépaut, J-N. (2017): Complete ERA5: Fifth generation of
        ECMWF atmospheric reanalyses of the global climate. Copernicus Climate
        Change Service (C3S) Data Store (CDS).

        Hersbach et al. (2017) was downloaded from the Copernicus Climate Change
        Service (C3S) Climate Data Store. We thank C3S for allowing us to
        redistribute the data.

        The results contain modified Copernicus Climate Change Service
        information 2022. Neither the European Commission nor ECMWF is
        responsible for any use that may be made of the Copernicus information
        or data it contains.
      roles:
        - producer
        - licensor
  license: "Apache Version 2.0"
maintainers:
  - name: "Julius Busecke"
    orcid: "0000-0001-8571-865X"
    github: jbusecke
```

`catalog.yaml`

```yaml
# All the information important to cataloging.
"ncviewjs:meta_yaml_url": "https://github.com/leap-stc/arco-era5_feedstock/blob/main/feedstock/meta.yaml" # !!! Make sure to change this to YOUR feedstock!!!
tags:
  - atmosphere
  - reanalysis
  - zarr
stores:
  - id: "0_25_deg_pressure_surface_levels"
    name: "This dataset contains most pressure-level fields and all surface-level fields regridded to a uniform 0.25° resolution. It is a superset of the data used to train GraphCast and NeuralGCM"
    url: "gs://gcp-public-data-arco-era5/ar/full_37-1h-0p25deg-chunk-1.zarr-v3"
# Urban Heat Island (UHI) Detection Using LiDAR and Thermal Data

This project aims to identify Urban Heat Island (UHI) effects through predictive modeling, leveraging the NYC4Urban multimodal dataset. The study uses LiDAR-derived features and thermal layers to predict temperature distributions, focusing on their mean and standard deviation values.

# Project Overview

Urban Heat Islands are regions with significantly higher temperatures than their surroundings, contributing to challenges such as:

- Increased energy consumption
- Elevated pollution levels
- Heat-related illnesses

By identifying UHI regions, urban planners can implement mitigation strategies such as:

- Cooling techniques like green spaces
- Public health preparedness
- Heat-resilient infrastructure
# Dataset

The project uses the NYC4Urban dataset, comprising:

- LiDAR Layers (9-20): Include return count, elevation, and reflectance statistics.
- Thermal Layers (23-25): Temperature measurements from Landsat-8. Layers 26 and 27 were excluded due to data quality issues.
# Preprocessing

- Missing data in LiDAR layers was handled using `cv2.inpaint` for spatial continuity.
- The resolution mismatch between the LiDAR (0.3 m) and thermal (100 m) layers required reformulating the prediction task.
# Approach

## Model Architecture

A CNN-based regression model predicts the mean and standard deviation of temperature from the LiDAR layers. Key components include:

- Convolutional Layers: Extract spatial features with 16–64 filters.
- Batch Normalization: Standardizes inputs and accelerates convergence.
- Global Average Pooling: Reduces dimensionality before the dense layers.
- Dense Layers: Two fully connected layers predict the final temperature statistics.
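The pooling-and-head stage of the architecture above can be sketched in plain NumPy; the shapes and random weights are illustrative assumptions, not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend feature maps from the last conv block: (batch, H, W, 64 filters).
features = rng.standard_normal((16, 8, 8, 64))

# Global average pooling collapses each 8x8 map to one number per filter,
# so the head sees 64 values per sample regardless of spatial extent.
pooled = features.mean(axis=(1, 2))  # -> (16, 64)

# Two-output regression head: one unit for mean temperature, one for std.
W = rng.standard_normal((64, 2)) * 0.01
b = np.zeros(2)
preds = pooled @ W + b  # -> (16, 2)

print(pooled.shape, preds.shape)  # (16, 64) (16, 2)
```

Pooling before the dense layers is what keeps the parameter count small and makes the head independent of the input patch size.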
## Training Details

- Loss Function: Mean Squared Error (MSE)
- Optimizer: Adam
- Batch Size: 16
- Validation Split: 20%
- Early Stopping: Stops training after 10 epochs with no improvement in validation loss.
- Checkpoints: Save the best-performing model.
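The early-stopping and checkpointing behaviour can be sketched framework-agnostically; the loss sequence below is made up purely for illustration:

```python
# Sketch of early stopping with patience=10: remember the best validation
# loss seen so far, "save a checkpoint" whenever it improves, and stop once
# 10 consecutive epochs bring no improvement.
def train_with_early_stopping(val_losses, patience=10):
    best_loss = float("inf")
    best_epoch = None
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss = loss  # new best: a real loop would write the model to disk here
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # early stop
    return best_epoch, best_loss

losses = [5.0, 4.0, 3.5] + [3.6] * 12  # improvement stalls after epoch 2
print(train_with_early_stopping(losses))  # (2, 3.5)
```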
# Results

## Correlation Analysis

LiDAR layers with the highest correlation to temperature:

- Elevation (Mean)
- Elevation (Max)
- Elevation (Min)

These layers capture information strongly related to surface temperature patterns.
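A per-layer correlation screen like this can be reproduced with `np.corrcoef`; the arrays and layer names below are synthetic stand-ins, not the project's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: per-patch mean temperature and three candidate
# LiDAR summaries, two built to correlate with temperature by construction.
temp_mean = rng.standard_normal(200)
layers = {
    "elevation_mean": temp_mean * 0.9 + rng.standard_normal(200) * 0.1,
    "elevation_max": temp_mean * 0.7 + rng.standard_normal(200) * 0.3,
    "reflectance_mean": rng.standard_normal(200),  # unrelated noise
}

# Pearson correlation of each candidate layer against temperature.
corrs = {name: np.corrcoef(vals, temp_mean)[0, 1] for name, vals in layers.items()}
ranked = sorted(corrs, key=lambda k: abs(corrs[k]), reverse=True)
print(ranked[0])  # elevation_mean ranks first by construction
```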
# Model Performance

The model successfully predicted both target statistics. Example patch:

| Statistic | Original | Predicted |
| --- | --- | --- |
| Mean temperature | 29.67 °C | 28.63 °C |
| Standard deviation | 1.21 °C | 1.31 °C |
# Visualizations

Example outputs:

- LiDAR Elevation Mean: Input visualization of the LiDAR data.
- Thermal Maps: Corresponding thermal layers with derived mean and standard deviation values.
- Correlation Plots: Relationship between LiDAR features and thermal statistics.
- Model Prediction Plots: Comparison of predicted vs. original mean and standard deviation values.
# Challenges

- Missing Data: Resolved using interpolation.
- Resolution Mismatch: Refocused the prediction task from pixelwise temperature mapping to statistical pattern prediction.
- Compute Limitations: Restricted the training dataset size to avoid crashes.
# Future Steps

- Augmented Training Dataset: Incorporate more samples to improve generalization.
- Increased Model Complexity: Experiment with deeper architectures or ensemble models.
- Hyperparameter Tuning: Optimize learning rate, batch size, and other parameters.
- Real-World Applications: Integrate the model with urban planning frameworks for actionable insights.

  - id: "0_25_deg_model_levels"
    name: "This dataset contains 3D fields at 0.25° resolution with ERA5's native vertical coordinates (hybrid pressure/sigma coordinates)."
    url: "gs://gcp-public-data-arco-era5/ar/model-level-1h-0p25deg.zarr-v1"
```

</details>

</details>

<details>
<summary>

#### Build a Pangeo-Forge Recipe

</summary>

##### Build and test your recipe locally on the LEAP-Pangeo JupyterHub

- Edit `feedstock/recipe.py` to build your Pangeo-Forge recipe. If you are new to Pangeo-Forge, [the docs](https://pangeo-forge.readthedocs.io/en/latest/composition/index.html#overview) are a great starting point.
- Make sure to also edit the other files in the `feedstock/` directory. More info on feedstock structure can be found [here](https://pangeo-forge.readthedocs.io/en/latest/deployment/feedstocks.html#meta-yaml).
- 🚨 You should not have to modify any files outside the `feedstock` folder (and this README)! If you run into a situation where you think changes are needed, please open an issue and tag @leap-stc/data-and-compute.

#### Test your recipe locally
Before we run your recipe on LEAP's Dataflow runner, you should test it locally.

You can do that on the LEAP-Pangeo JupyterHub or on your own computer.

1. Set up an environment with mamba or conda:
```shell
mamba create -n runner0102 python=3.11 -y
conda activate runner0102
pip install pangeo-forge-runner==0.10.2 --no-cache-dir
```

2. You can now use [pangeo-forge-runner](https://github.com/pangeo-forge/pangeo-forge-runner) from the root directory of a checked-out copy of this repository:

```shell
pangeo-forge-runner bake \
  --repo=./ \
  --Bake.recipe_id=<recipe_id> \
  -f configs/config_local_hub.py
```
>[!NOTE]
> Make sure to replace `<recipe_id>` with the one defined in your `feedstock/meta.yaml` file.
>
> If you created multiple recipes, you have to run a call like the one above for each of them.

> To run this fully locally (e.g. on your laptop), replace `config_local_hub.py` with `config_local.py`.
>
> ⚠️ This will save the cache and output to a subfolder of the location you are executing this from. Make sure to delete them once you are done testing.

3. Check the output! If something looks off, edit your recipe.

>[!TIP]
> The above command will by default 'prune' the recipe, meaning it will only use two of the input files you provided, to avoid creating overly large output.
> Keep that in mind when you check the output for correctness.

Once you are happy with the output, it is time to commit your work to git, push to GitHub, and get this recipe set up for ingestion using [Google Dataflow](https://cloud.google.com/dataflow?hl=en).

#### Activate the linting CI and clean up your repo
[Pre-commit](https://pre-commit.com) linting is already pre-configured in this repository. To run the checks locally, simply do:
```shell
pip install pre-commit
pre-commit install
pre-commit run --all-files
```
Then create a new branch and commit those fixes (plus any that pre-commit was not able to auto-fix). From now on, pre-commit will run checks on every commit.

Alternatively (or additionally), you can use the [pre-commit CI GitHub App](https://results.pre-commit.ci/) to run these checks as part of every PR.
To proceed with this step you will need assistance from a member of the [LEAP Data and Computation Team](https://leap-stc.github.io/support.html#data-and-computation-team). Please open an issue on this repository, tag `@leap-stc/data-and-compute`, and ask for this repository to be added to the pre-commit.ci app.

#### Deploy your recipe to LEAP's Google Dataflow

>[!WARNING]
> To proceed with this step you will need certain repository secrets set up. For security reasons this should be done by a member of the [LEAP Data and Computation Team](https://leap-stc.github.io/support.html#data-and-computation-team). Please open an issue on this repository and tag `@leap-stc/data-and-compute` to get assistance.

- To deploy a recipe to Google Dataflow, trigger the "Deploy Recipes to Google Dataflow" workflow with a single `recipe_id` as input and choose the appropriate branch.
>[!WARNING]
> We recently ran into problems with PRs based on forked repositories ([example](https://github.com/leap-stc/eNATL_feedstock/pull/8)), which cannot be run via the Actions "Run Workflow" button/trigger. Until we find a workable solution, we recommend not forking the feedstock repo and working only with branches on the main feedstock.

- Once your recipe is run from a GitHub workflow, we assume that it is deployed to Google Dataflow and we activate the final [copy stage](https://github.com/leap-stc/LEAP_template_feedstock/blob/55ee23ce0bc90f764d18bc34c58adccb5b38fc89/feedstock/recipe.py#L63). This happens automatically, but you have to make sure to edit the `feedstock/catalog.yaml` `url` entry for each `recipe_id`. This location will be the 'final' location of the data, and it is what gets passed to the catalog in the next step!

>[!NOTE]
> By default the `prune` option is set to true. To build the final dataset you need to change that value [here](https://github.com/leap-stc/LEAP_template_feedstock/blob/55ee23ce0bc90f764d18bc34c58adccb5b38fc89/configs/config_dataflow.py#L7). **Particularly for large datasets, make sure that you have finalized the entries in `feedstock/catalog.yaml`**, since the full build of the dataset can be slow and expensive - you want to avoid doing it twice 😁

</details>

### Add your dataset to the LEAP-Pangeo Catalog
Now that your awesome dataset is available as an ARCO Zarr store, you should make sure that everyone else at LEAP can check it out easily.
Open a PR to our [catalog input file](https://github.com/leap-stc/data-management/blob/main/catalog/input.yaml) and add a link to this repo's `catalog.yaml` there. See [here](https://github.com/leap-stc/data-management/pull/132) for an example PR for the [`arco-era5_feedstock`](https://github.com/leap-stc/arco-era5_feedstock).

### Clean up

- [ ] Replace the instructions in this README.
