Bnb/dev #186

Merged: bnb32 merged 18 commits into main from bnb/dev on Feb 16, 2024
Conversation

bnb32 (Collaborator) commented Feb 14, 2024

While we're adding loss functions...

new_ds[var].standard_name = standard_name
if hasattr(old_var, 'long_name'):
    new_ds[var].long_name = old_var.long_name
for var in keep_vars:
Member

I like this semantic naming much better than working with unnamed list comps. Nice!

@@ -1421,358 +1418,3 @@ def get_next(self, temporal_weights=None, spatial_weights=None):
temporal_weights=temporal_weights, spatial_weights=spatial_weights)
observation = self.data[self.current_obs_index]
return observation

@classmethod
def get_file_times(cls, file_paths, **kwargs):
Member

Where did all these methods go?? I don't see them moved in this PR?

Collaborator Author

these were redundant since they were being overridden in nc_data_handling.py and only being used there

x1_div = tf.stack(
    [self._compute_md(x1, fidx=i) for i in range(2 * hub_heights)])
x2_div = tf.stack(
    [self._compute_md(x2, fidx=i) for i in range(2 * hub_heights)])
Member

Shouldn't this be range(0, 2*hh, 2)? Right now you're iterating from 0 to 2*hh, and when i=1 I think self._compute_md() will use i=(1, 2) as u10m/v10m, but i=(1, 2) is really v10m/u100m.

Actually, I don't think that's quite right because uidx = 2 * (fidx // 2), but still, won't you be double-calculating if you do i in range(2*hh)?

Collaborator Author

Yep! Good catch. Looks like I was essentially weighting this by a factor of 2 by accident.
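
A hedged, self-contained sketch of the reviewer's point (names mirror the snippet above, but this is not the repository's code): stepping by 2 visits each (u, v) feature pair once, while range(2 * hub_heights) visits every pair twice.

import tensorflow as tf

hub_heights = 2                        # e.g. 10m and 100m wind channels (assumed layout)

def compute_md(x, fidx):
    uidx = 2 * (fidx // 2)             # pair start index, as in the reply above
    return x[..., uidx] + x[..., uidx + 1]

x = tf.random.uniform((3, 4, 4, 5, 2 * hub_heights))
once = tf.stack([compute_md(x, fidx=i) for i in range(0, 2 * hub_heights, 2)])
twice = tf.stack([compute_md(x, fidx=i) for i in range(2 * hub_heights)])
print(once.shape, twice.shape)         # (2, ...) vs (4, ...): each pair is counted twice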

    (n_observations, spatial_1, spatial_2, temporal, features)
x2 : tf.tensor
    high resolution data
    (n_observations, spatial_1, spatial_2, temporal, features)
Member

somewhere you should assert len(shape) == 5
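
A minimal sketch of the suggested check (the helper name and error message are illustrative, not the repository's code); tf.debugging.assert_rank(x, 5) would be a one-line alternative.

import tensorflow as tf

def check_rank_5(x, name='x'):
    """Raise a clear error if x is not a 5D tensor shaped
    (n_observations, spatial_1, spatial_2, temporal, features)."""
    if len(x.shape) != 5:
        raise ValueError(f'{name} must be 5D (n_observations, spatial_1, '
                         f'spatial_2, temporal, features), got shape '
                         f'{tuple(x.shape)}')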

https://en.wikipedia.org/wiki/Material_derivative
"""

MAE_LOSS = MeanAbsoluteError()
Member

For some of these loss functions we're still assuming MAE is the go-to metric, but it is not always. Consider making the class attribute simply LOSS_METRIC or ERROR_METRIC. Personally I'm 100% fine with manipulating the class attributes at runtime, but the name is presumptuous.
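
A hedged sketch of the rename being suggested (only the class name MaterialDerivativeLoss appears in this PR; the attribute layout here is illustrative, not the actual implementation):

import tensorflow as tf

class MaterialDerivativeLoss(tf.keras.losses.Loss):
    """Sketch only: a neutral attribute name so the error metric is not
    hard-coded to MAE."""

    LOSS_METRIC = tf.keras.losses.MeanAbsoluteError()

# The metric can then be swapped at runtime without any renaming:
MaterialDerivativeLoss.LOSS_METRIC = tf.keras.losses.MeanSquaredError()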

elif axis == 3:
    return tf.concat([x[..., 1:2] - x[..., 0:1],
                      (x[..., 2:] - x[..., :-2]) / 2,
                      x[..., -1:] - x[..., -2:-1]], axis=3)
Member

ignorant question: why are you doing a [2:] offset to calculate the derivative instead of just doing a 1-pixel difference?

Collaborator Author

I'm doing something closer to a central difference instead of a backward or forward difference approximation. It's just a little more accurate/stable.

n.b. apparently this is exactly what np.gradient does.

Member

Gotcha, that makes sense. Maybe consider just adding this note for posterity?

Collaborator Author

Yeah good idea
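
A hedged sketch of the posterity note being discussed: the concatenation is a central difference on interior points with one-sided differences at the two boundaries, which is what np.gradient does with its default edge_order=1 (the helper name below is illustrative, not the repository's method).

import numpy as np
import tensorflow as tf

def central_difference_last_axis(x):
    """One-sided differences at the edges, central differences inside."""
    return tf.concat([x[..., 1:2] - x[..., 0:1],
                      (x[..., 2:] - x[..., :-2]) / 2,
                      x[..., -1:] - x[..., -2:-1]], axis=-1)

x = tf.constant(np.random.rand(2, 4, 5), dtype=tf.float32)
np.testing.assert_allclose(central_difference_last_axis(x).numpy(),
                           np.gradient(x.numpy(), axis=-1), rtol=1e-5)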

@@ -117,6 +117,103 @@ def __call__(self, x1, x2, sigma=1.0):
return mmd


class MaterialDerivativeLoss(tf.keras.losses.Loss):
Member

you should add a test for this
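
A minimal sketch of the requested test (the import path and input sizes are assumptions, not taken from this PR): identical inputs should give zero loss, and distinct random inputs should give a positive, finite loss.

import numpy as np
# assumed import path for the new loss class
from sup3r.utilities.loss_metrics import MaterialDerivativeLoss

def test_material_derivative_loss():
    loss_obj = MaterialDerivativeLoss()
    # 5D inputs: (n_observations, spatial_1, spatial_2, temporal, features),
    # with the feature axis holding one (u, v) pair
    x = np.random.uniform(0, 1, (4, 10, 10, 6, 2)).astype(np.float32)
    y = np.random.uniform(0, 1, (4, 10, 10, 6, 2)).astype(np.float32)
    assert float(loss_obj(x, x)) == 0
    assert 0 < float(loss_obj(x, y)) < np.inf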

bnb32 merged commit d799491 into main on Feb 16, 2024
8 checks passed
bnb32 deleted the bnb/dev branch on February 16, 2024 at 16:02
github-actions bot pushed a commit that referenced this pull request on Feb 16, 2024