
Lazy netcdf saves #5191

Merged: 92 commits merged into SciTools:main from lazy_save_2, Apr 21, 2023

Conversation

@pp-mo (Member) commented Mar 10, 2023

Here at last, a follow-up to #5031.
Note that, in addition to adding lazy saving, this also adds support for the dask 'distributed' scheduler.

Unfortunately this is still very much preliminary work, since I haven't added any testing yet
-- and doubtless various existing non-integration tests will break and need fixing, which could take some effort.

It has important unfinished business:

  • no tests! (done)
  • it now appears that we can no longer support checking for inadvertent fill_value collisions, which is a serious backwards incompatibility. Though possibly we can continue to do this for a threaded scheduler only. See below. (done)
  • no support for a (local) process-based scheduler. Though we can't do that (i.e. save with one) at present, so maybe no problem. (done: accept + document that we do not support it)
  • we might also want to consider possible different process "start_method" settings: these are fork/spawn/forkserver, as supported by multiprocessing, and dask also has a control for this -- see around "multiprocessing.context", here. (done: probably only matters for local-process scheduling, which we are not doing)
  • whatsnew entry
  • consider a benchmark to demonstrate performance
  • consider allowing a lock to be passed in by the user, as in this comment on the original PR by fnattino

Closes #4190
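
For illustration, here is a minimal sketch of the intended usage. This assumes a compute keyword on iris.save and a returned dask delayed object, as described in this PR -- treat the exact names as indicative rather than as the final API.

    import iris
    from dask.distributed import Client

    cube = iris.load_cube("input.nc")

    # Default behaviour, unchanged: data is streamed to disk immediately.
    iris.save(cube, "eager.nc")

    # Lazy save: the file and variables are created now, but writing the
    # variable data is deferred until the returned delayed object is computed.
    delayed_write = iris.save(cube, "lazy.nc", compute=False)

    # The deferred writes can also run on the dask 'distributed' scheduler,
    # which this PR additionally supports.
    with Client():
        delayed_write.compute()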

@pp-mo changed the title from Lazy save 2 to Lazy netcdf saves on Mar 10, 2023
@pp-mo (Member, Author) commented Mar 10, 2023

Although there are no tests yet, I have created a simple routine to demonstrate the functionality.
In case it may be useful, I've put it here: https://gist.github.com/pp-mo/a5b1d693c4a27192e5e0e2c2c4fa5ca9

Possibly a bit poor, but there it is!

@pp-mo (Member, Author) commented Mar 10, 2023

Problem with fill-value checking

I hit a glitch here due to the different behaviour of dask execution with a distributed scheduler.
With a distributed scheduler, it is no longer possible to do direct streaming of variable data (i.e. da.store(compute=True)) while the caller holds the file open for writing, as is wanted here.
That's because it asks another process to open the file for writing, while the caller already has it open for writing.
(And we probably don't want to close + reopen it, since that would invalidate any existing netCDF4 data objects which the Saver code probably stores and expects to continue working with? I'm not actually sure of this, though.)

So for now, I've completely fudged the issue:

  1. simply ignore the fill-value check for lazy operations
  2. for now, action all saves as lazy, and simply 'complete' the non-lazy ones.

This presents us with a bit of a backwards-compatibility problem, in that I don't see how we can check the data for fill-value collisions in the "new world". And while that may clearly be acceptable when using the new lazy saving, I'm less comfortable with existing saves simply behaving differently under a distributed scheduler.

Another option is to continue doing the as-is inline streaming, but only if the scheduler is threaded.
But that would make the save code dependent on the scheduler, which I have so far managed to avoid.
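
To make the distinction concrete, here is a rough sketch of the direct-streaming pattern being described, using dask and netCDF4 directly (this is not the actual Saver code):

    import dask.array as da
    import netCDF4

    data = da.zeros((4, 100, 100), chunks=(1, 100, 100))

    ds = netCDF4.Dataset("stream.nc", "w")
    ds.createDimension("t", 4)
    ds.createDimension("y", 100)
    ds.createDimension("x", 100)
    var = ds.createVariable("v", "f8", ("t", "y", "x"))

    # Direct streaming: chunks are written while the caller still holds the
    # file open for write.  Fine for a threaded scheduler, where the workers
    # share the caller's process (and its open file handle), but a distributed
    # scheduler would need other worker processes to open the same file for
    # write at the same time.
    da.store(data, var, compute=True, lock=True)
    ds.close()

With da.store(..., compute=False), the targets instead need to be picklable objects that re-open the file per chunk when the delayed result is eventually computed, which is why the inline fill-value check no longer has an obvious home.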

@pp-mo (Member, Author) commented Mar 10, 2023

Rebased, and pinned libnetcdf<4.9, as found for #5187.
N.B. I somehow got the ref to that in the commit message wrong
-- I will edit the commit, but let's see if it fixes the tests first ...

@pp-mo (Member, Author) commented Mar 10, 2023

Changed message on previous commit, and added a fix.

There is one remaining test failure, but I'm suspicious that this might be a sporadic problem 😱
-- (Later update: it is not sporadic; read on.)

The log looks like this ...

2023-03-10T16:58:50.6415706Z _________________ TestNetCDFSave.test_netcdf_save_multi2single _________________
2023-03-10T16:58:50.6416608Z [gw1] linux -- Python 3.9.16 /home/runner/work/iris/iris/.nox/tests/bin/python
2023-03-10T16:58:50.6416835Z 
2023-03-10T16:58:50.6417058Z self = <iris.tests.test_netcdf.TestNetCDFSave testMethod=test_netcdf_save_multi2single>
2023-03-10T16:58:50.6417310Z 
2023-03-10T16:58:50.6417410Z     @tests.skip_data
2023-03-10T16:58:50.6417678Z     def test_netcdf_save_multi2single(self):
2023-03-10T16:58:50.6418118Z         # Test saving multiple cubes to a single CF-netCDF file.
2023-03-10T16:58:50.6418420Z         # Read PP input file.
2023-03-10T16:58:50.6418689Z         file_in = tests.get_data_path(
2023-03-10T16:58:50.6419014Z             ("PP", "cf_processing", "abcza_pa19591997_daily_29.b.pp")
2023-03-10T16:58:50.6419284Z         )
2023-03-10T16:58:50.6419789Z         cubes = iris.load(file_in)
2023-03-10T16:58:50.6420020Z     
2023-03-10T16:58:50.6420253Z         # Write Cube to netCDF file.
2023-03-10T16:58:50.6420570Z         with self.temp_filename(suffix=".nc") as file_out:
2023-03-10T16:58:50.6420888Z             # Check that it is the same on loading
2023-03-10T16:58:50.6421163Z >           iris.save(cubes, file_out)
2023-03-10T16:58:50.6421342Z 
2023-03-10T16:58:50.6421465Z lib/iris/tests/test_netcdf.py:728: 
2023-03-10T16:58:50.6421815Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
2023-03-10T16:58:50.6422089Z lib/iris/io/__init__.py:485: in save
2023-03-10T16:58:50.6422390Z     result = saver(source, target, **kwargs)
2023-03-10T16:58:50.6422804Z lib/iris/fileformats/netcdf/saver.py:2906: in save
2023-03-10T16:58:50.6423139Z     result.compute()
2023-03-10T16:58:50.6423532Z .nox/tests/lib/python3.9/site-packages/dask/base.py:314: in compute
2023-03-10T16:58:50.6423899Z     (result,) = compute(self, traverse=False, **kwargs)
2023-03-10T16:58:50.6424333Z .nox/tests/lib/python3.9/site-packages/dask/base.py:599: in compute
2023-03-10T16:58:50.6424664Z     results = schedule(dsk, keys, **kwargs)
2023-03-10T16:58:50.6425061Z .nox/tests/lib/python3.9/site-packages/dask/threaded.py:89: in get
2023-03-10T16:58:50.6425362Z     results = get_async(
2023-03-10T16:58:50.6425770Z .nox/tests/lib/python3.9/site-packages/dask/local.py:511: in get_async
2023-03-10T16:58:50.6426091Z     raise_exception(exc, tb)
2023-03-10T16:58:50.6426494Z .nox/tests/lib/python3.9/site-packages/dask/local.py:319: in reraise
2023-03-10T16:58:50.6426777Z     raise exc
2023-03-10T16:58:50.6427148Z .nox/tests/lib/python3.9/site-packages/dask/local.py:224: in execute_task
2023-03-10T16:58:50.6427473Z     result = _execute_task(task, data)
2023-03-10T16:58:50.6427876Z .nox/tests/lib/python3.9/site-packages/dask/core.py:119: in _execute_task
2023-03-10T16:58:50.6428508Z     return func(*(_execute_task(a, cache) for a in args))
2023-03-10T16:58:50.6428964Z .nox/tests/lib/python3.9/site-packages/dask/array/core.py:4394: in store_chunk
2023-03-10T16:58:50.6429357Z     return load_store_chunk(x, out, index, lock, return_stored, False)
2023-03-10T16:58:50.6429829Z .nox/tests/lib/python3.9/site-packages/dask/array/core.py:4376: in load_store_chunk
2023-03-10T16:58:50.6430144Z     out[index] = x
2023-03-10T16:58:50.6430456Z lib/iris/fileformats/netcdf/saver.py:521: in __setitem__
2023-03-10T16:58:50.6430824Z     dataset = _thread_safe_nc.DatasetWrapper(self.path, "r+")
2023-03-10T16:58:50.6431199Z lib/iris/fileformats/netcdf/_thread_safe_nc.py:55: in __init__
2023-03-10T16:58:50.6431532Z     instance = self.CONTAINED_CLASS(*args, **kwargs)
2023-03-10T16:58:50.6431896Z src/netCDF4/_netCDF4.pyx:2466: in netCDF4._netCDF4.Dataset.__init__
2023-03-10T16:58:50.6432210Z     ???
2023-03-10T16:58:50.6432485Z src/netCDF4/_netCDF4.pyx:1615: in netCDF4._netCDF4._get_format
2023-03-10T16:58:50.6432761Z     ???
2023-03-10T16:58:50.6433000Z _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
2023-03-10T16:58:50.6433173Z 
2023-03-10T16:58:50.6433245Z >   ???
2023-03-10T16:58:50.6433497Z E   RuntimeError: NetCDF: Not a valid ID
2023-03-10T16:58:50.6433667Z 
2023-03-10T16:58:50.6433804Z src/netCDF4/_netCDF4.pyx:2028: RuntimeError

@pp-mo (Member, Author) commented Mar 10, 2023

OK, so now all the existing tests are "fixed" for Python < 3.10, and those logs don't seem to show any "HDF5-DIAG" warning output.

But we are still getting an error in the doctests, and specifically in the Python 3.10 tests.
The problem in the doctests looks like this:

Document: generated/api/iris/io
-------------------------------
**********************************************************************
File "../../lib/iris/io/__init__.py", line ?, in default
Failed example:
    iris.save(my_cube_list, "myfile.nc", netcdf_format="NETCDF3_CLASSIC")
Exception raised:
    Traceback (most recent call last):
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/doctest.py", line 1336, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest default[0]>", line 1, in <module>
        iris.save(my_cube_list, "myfile.nc", netcdf_format="NETCDF3_CLASSIC")
      File "/home/runner/work/iris/iris/lib/iris/io/__init__.py", line 485, in save
        result = saver(source, target, **kwargs)
      File "/home/runner/work/iris/iris/lib/iris/fileformats/netcdf/saver.py", line 2906, in save
        result.compute()
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/base.py", line 314, in compute
        (result,) = compute(self, traverse=False, **kwargs)
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/base.py", line 599, in compute
        results = schedule(dsk, keys, **kwargs)
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/threaded.py", line 89, in get
        results = get_async(
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/local.py", line 511, in get_async
        raise_exception(exc, tb)
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/local.py", line 319, in reraise
        raise exc
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/local.py", line 224, in execute_task
        result = _execute_task(task, data)
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/core.py", line 119, in _execute_task
        return func(*(_execute_task(a, cache) for a in args))
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/optimization.py", line 990, in __call__
        return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/core.py", line 149, in get
        result = _execute_task(task, cache)
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/core.py", line 119, in _execute_task
        return func(*(_execute_task(a, cache) for a in args))
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/utils.py", line 73, in apply
        return func(*args, **kwargs)
      File "/home/runner/work/iris/iris/.nox/doctest/lib/python3.8/site-packages/dask/array/core.py", line 120, in getter
        c = a[b]
      File "/home/runner/work/iris/iris/lib/iris/fileformats/netcdf/_thread_safe_nc.py", line 320, in __getitem__
        dataset = netCDF4.Dataset(self.path)
      File "src/netCDF4/_netCDF4.pyx", line 2466, in netCDF4._netCDF4.Dataset.__init__
      File "src/netCDF4/_netCDF4.pyx", line 1615, in netCDF4._netCDF4._get_format
      File "src/netCDF4/_netCDF4.pyx", line 2028, in netCDF4._netCDF4._ensure_nc_success
    RuntimeError: NetCDF: Not a valid ID
**********************************************************************
1 items had failures:
   1 of  14 in default
14 tests in 1 items.
13 passed and 1 failed.
***Test Failed*** 1 failures.

The occurrence in the actual tests for Python 3.10 looks very similar -- and is not using NETCDF3.

_________________ TestNetCDFSave.test_netcdf_save_multi2single _________________
[gw1] linux -- Python 3.10.9 /home/runner/work/iris/iris/.nox/tests/bin/python

self = <iris.tests.test_netcdf.TestNetCDFSave testMethod=test_netcdf_save_multi2single>

    @tests.skip_data
    def test_netcdf_save_multi2single(self):
        # Test saving multiple cubes to a single CF-netCDF file.
        # Read PP input file.
        file_in = tests.get_data_path(
            ("PP", "cf_processing", "abcza_pa19591997_daily_29.b.pp")
        )
        cubes = iris.load(file_in)
    
        # Write Cube to netCDF file.
        with self.temp_filename(suffix=".nc") as file_out:
            # Check that it is the same on loading
>           iris.save(cubes, file_out)

lib/iris/tests/test_netcdf.py:728: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
lib/iris/io/__init__.py:485: in save
    result = saver(source, target, **kwargs)
lib/iris/fileformats/netcdf/saver.py:2906: in save
    result.compute()
.nox/tests/lib/python3.10/site-packages/dask/base.py:314: in compute
    (result,) = compute(self, traverse=False, **kwargs)
.nox/tests/lib/python3.10/site-packages/dask/base.py:599: in compute
    results = schedule(dsk, keys, **kwargs)
.nox/tests/lib/python3.10/site-packages/dask/threaded.py:89: in get
    results = get_async(
.nox/tests/lib/python3.10/site-packages/dask/local.py:511: in get_async
    raise_exception(exc, tb)
.nox/tests/lib/python3.10/site-packages/dask/local.py:319: in reraise
    raise exc
.nox/tests/lib/python3.10/site-packages/dask/local.py:224: in execute_task
    result = _execute_task(task, data)
.nox/tests/lib/python3.10/site-packages/dask/core.py:119: in _execute_task
    return func(*(_execute_task(a, cache) for a in args))
.nox/tests/lib/python3.10/site-packages/dask/array/core.py:4394: in store_chunk
    return load_store_chunk(x, out, index, lock, return_stored, False)
.nox/tests/lib/python3.10/site-packages/dask/array/core.py:4376: in load_store_chunk
    out[index] = x
lib/iris/fileformats/netcdf/saver.py:521: in __setitem__
    dataset = _thread_safe_nc.DatasetWrapper(self.path, "r+")
lib/iris/fileformats/netcdf/_thread_safe_nc.py:55: in __init__
    instance = self.CONTAINED_CLASS(*args, **kwargs)
src/netCDF4/_netCDF4.pyx:2466: in netCDF4._netCDF4.Dataset.__init__
    ???
src/netCDF4/_netCDF4.pyx:1615: in netCDF4._netCDF4._get_format
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

>   ???
E   RuntimeError: NetCDF: Not a valid ID

src/netCDF4/_netCDF4.pyx:2028: RuntimeError

Does this seem like familiar territory, @trexfeathers??
Here we have:

  • hdf5 = 1.12.2
  • libnetcdf = 4.8.1
  • netCDF4 = 1.6.2

…taProxy fix; use one lock per Saver; add extra up-scaled test
@pp-mo (Member, Author) commented Mar 13, 2023

Updated with results of investigation by @pp-mo @trexfeathers.
In short, we find that the same fix as described here likewise solves this problem.
Though we still can't figure out why!

Hopefully this may fix the outstanding test failures.
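
For readers following along, the shape of that fix is roughly as below. This is only an illustrative sketch -- the real code lives in the Saver and in lib/iris/fileformats/netcdf/_dask_locks.py, and the class and attribute names here are invented:

    import threading

    import netCDF4

    class ChunkWriteProxy:
        """Hypothetical stand-in for a netCDF variable, used as a da.store target.

        Each chunk write re-opens the file, protected by a single lock shared
        by everything writing to that file (i.e. one lock per Saver).
        """

        def __init__(self, path, varname, lock=None):
            self.path = path
            self.varname = varname
            # A plain threading.Lock suits the threaded scheduler; a distributed
            # scheduler needs a picklable lock such as distributed.Lock instead.
            self.lock = lock if lock is not None else threading.Lock()

        def __setitem__(self, keys, data):
            with self.lock:
                dataset = netCDF4.Dataset(self.path, "r+")
                try:
                    dataset.variables[self.varname][keys] = data
                finally:
                    dataset.close()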

@codecov (bot) commented Mar 13, 2023

Codecov Report

Patch coverage: 95.90% and project coverage change: +0.01 🎉

Comparison is base (949b296) 89.31% compared to head (54ec0f8) 89.32%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #5191      +/-   ##
==========================================
+ Coverage   89.31%   89.32%   +0.01%     
==========================================
  Files          88       89       +1     
  Lines       22279    22390     +111     
  Branches     5355     5374      +19     
==========================================
+ Hits        19898    20000     +102     
- Misses       1635     1640       +5     
- Partials      746      750       +4     
Impacted Files Coverage Δ
lib/iris/fileformats/netcdf/_thread_safe_nc.py 82.51% <89.47%> (+1.06%) ⬆️
lib/iris/fileformats/netcdf/_dask_locks.py 92.68% <92.68%> (ø)
lib/iris/fileformats/netcdf/saver.py 88.99% <94.11%> (-0.93%) ⬇️
lib/iris/fileformats/_nc_load_rules/helpers.py 95.96% <100.00%> (+0.71%) ⬆️
lib/iris/fileformats/netcdf/__init__.py 100.00% <100.00%> (ø)
lib/iris/io/__init__.py 81.15% <100.00%> (+0.27%) ⬆️


☔ View full report in Codecov by Sentry.

@pp-mo (Member, Author) commented Mar 15, 2023

@bouweandela do you have time to run your checks through this, as you did with the original #5031 ?
It would be super-valuable to know if that throws up any problems. Or not 🤞

I know the tests still aren't all passing, but it's only code coverage warnings.
We will certainly improve that, by adding testing, before we're done with this.

@bjlittle (Member) commented Apr 17, 2023

@bouweandela Note that we're pulling forward the release of iris 3.6 to June instead of September, so delayed saving may land in time for that release. See #5106

I'm sure @pp-mo can give a fuller/more accurate update on our expectation of when this will be available 👍

@pp-mo (Member, Author) commented Apr 17, 2023

@pp-mo @trexfeathers @ESadek-MO I noticed the first release candidate of v3.5 has been cut. Do you reckon there is still a chance that this may be included in v3.5?

Sincere apologies, we had a few remaining problems with this code in our recent work and it is still not merged.
So it just missed the deadline for Iris 3.5.

I think the problems are now resolved, and we expect to get it completed + merged quite soon now.
So then it will be in main, but obviously, to appear in a release, it now has to wait for Iris 3.6.

I've put this at No.1 on the Iris 3.6 board.
Plans for 3.6 :

  • there are already a few relatively minor changes tasked there
    (mostly to enable the Xarray bridge #4994, but N.B. the bulk of that will now be delivered outside Iris)
  • we intend to get v3.6 out ASAP once those things are dealt with,
    • not to wait for an arbitrary timepoint, and
    • not to add anything major to it.
  • Bigger things can be pushed to v3.7 (which is itself scheduled for August; see 🦊 v3.7.x #5209 (comment)).

Apologies again for the delay @bouweandela. We could make a special release for this, but I'm hoping that won't be necessary for your requirements?

@pp-mo (Member, Author) commented Apr 17, 2023

Sincere apologies, we had a few remaining problems with this code in our recent work and it is still not merged. So it just missed the deadline for Iris 3.5.

Apologies for the spurious mention of "1.6" and "1.7" releases -- now fixed!

@pp-mo (Member, Author) commented Apr 17, 2023

Note on PR checking status:
There is really no point in my chasing my tail re-updating latest.rst and the lockfiles every time something new hits main.
I have already done this 3-4 times.

I think we can now accept that the tests did pass + will pass,
so I will just re-fix those when we're ready to merge.

@bouweandela (Member) commented Apr 18, 2023

@bouweandela Note that we're pulling forward the release of iris 3.6 to June instead of September, so delayed saving may land in time for that release. See #5106

Great to see that the release pace of iris is increasing @bjlittle! The reduced waiting time for new features will definitely make it more attractive to contribute and easier to use.

Apologies again for the delay @bouweandela. We could make a special release for this, but I'm hoping that won't be necessary for your requirements?

Unfortunately the ESMValTool release schedule currently has a release in June and the next release in October, so then our October release would be the first release where our users could benefit from the new feature, which would be disappointingly late. Let me check with the @ESMValGroup/esmvaltool-coreteam though, maybe we could postpone our release by a month. How sure are you that your next release will not be pushed back further? We would need at least 2 weeks between the iris release with this feature and our release for testing.

@bouweandela (Member) commented:

Note on PR checking status:
There is really no point in my chasing my tail re-updating latest.rst and the lockfiles every time something new hits main.
I have already done this 3-4 times.

I think this could be remedied by setting up a merge queue, maybe that could be an interesting feature for you?

@bjlittle (Member) commented:

Great to see that the release pace of iris is increasing @bjlittle! The reduced waiting time for new features will definitely make it more attractive to contribute and easier to use.

This is great to hear. Thanks for the feedback, it makes a big difference 👍

Unfortunately the ESMValTool release schedule currently has a release in June and the next release in October, so then our October release would be the first release where our users could benefit from the new feature, which would be disappointingly late. Let me check with the @ESMValGroup/esmvaltool-coreteam though, maybe we could postpone our release by a month. How sure are you that your next release will not be pushed back further? We would need at least 2 weeks between the iris release with this feature and our release for testing.

Again, thanks for sharing. This is really useful to know. It gives me the context I need to make some decisions about the next release.

Just so you know, we have a 2-week (9 days only, thanks to the bank holiday for King Charles III's coronation) development sprint scheduled, dedicated to the iris 3.6 release, from Tue 09 May to Fri 19 May. I'm very keen to keep this release extremely light in content; that way we can guarantee (as best as possible) to be nimble and not slip delivery to the right. I'm going to be pretty strict about that.

I just need to get a clear understanding from our side about the absolute minimal features that we want to include for the iris 3.6 release. At that point I'd really like to hang out with you, @bouweandela (and anyone else from esmvaltool), to get perspective on what content you need and by when. Does that sound like a reasonable plan?

I see that your forthcoming esmvalcore 2.9.0 feature freeze is on Mon 05 Jun. That would mean you need an iris release candidate available on Mon 22 May for your 2 weeks of testing.

It would be interesting to see if we could make that happen for you guys. It's quite tight, but not impossible, and pretty much all depends on content and resourcing.

Leave it with me. I'll definitely be in touch very soon 😉

@trexfeathers (Contributor) commented Apr 19, 2023

I think this could be remedied by setting up a merge queue, maybe that could be an interesting feature for you?

I believe that would still require @pp-mo to be aware of which pull requests affect latest.rst or requirements/locks/ so that the queue can be set up. And presumably if this happens multiple times, as mentioned, it would require a new queue to be set up each time.

@pp-mo (Member, Author) commented Apr 19, 2023

Great to see that the release pace of iris is increasing @bjlittle!

Leave it with me. I'll definitely be in touch very soon 😉

Thanks people.
It does feel great to be discussing actual priority + scheduling with users!

@lbdreyer (Member) left a comment

I have no outstanding comments so I believe this is now ready to merge!

@pp-mo could you sort out the lockfiles/whatsnew, then ping me to let me know that I can merge it in?

@pp-mo (Member, Author) commented Apr 21, 2023

Phew 😅
A lot of fuss today fixing conflicts + knock-ons due to activity on the main branch
-- if only I'd done it yesterday!

@lbdreyer please re-review when you can

@lbdreyer merged commit 94e44ef into SciTools:main on Apr 21, 2023
17 checks passed
@lbdreyer (Member) commented:

Merged 🥳 with quite possibly the longest commit message I've witnessed

@rcomer mentioned this pull request on Apr 21, 2023
@bouweandela (Member) commented:

Thanks a lot for all the work on this!

tkknight added a commit to tkknight/iris that referenced this pull request Apr 22, 2023
* upstream/main:
  Updated environment lockfiles (SciTools#5270)
  Drop python3.8 support (SciTools#5269)
  build wheel from sdist, not src (SciTools#5266)
  Lazy netcdf saves (SciTools#5191)
  move setup.cfg to pyproject.toml (SciTools#5262)
  Support Python 3.11 (SciTools#5226)
  Remove Resolve test workaround (SciTools#5267)
  add missing whatsnew entry (SciTools#5265)
tkknight added a commit to tkknight/iris that referenced this pull request Apr 22, 2023
* upstream/main: (61 commits)
  Updated environment lockfiles (SciTools#5270)
  Drop python3.8 support (SciTools#5269)
  build wheel from sdist, not src (SciTools#5266)
  Lazy netcdf saves (SciTools#5191)
  move setup.cfg to pyproject.toml (SciTools#5262)
  Support Python 3.11 (SciTools#5226)
  Remove Resolve test workaround (SciTools#5267)
  add missing whatsnew entry (SciTools#5265)
  make help (SciTools#5258)
  automate pypi manifest checking (SciTools#5259)
  drop sphinxcontrib-napoleon (SciTools#5263)
  add missing test result data (SciTools#5260)
  fix indentation and remove ref to ssstack (SciTools#5256)
  review actions
  update .git-blame-ignore-revs
  adopt codespell
  Adopt sphinx design (SciTools#5127)
  Bump scitools/workflows from 2023.04.2 to 2023.04.3 (SciTools#5253)
  refresh manual pypi publish instructions (SciTools#5252)
  Updated environment lockfiles (SciTools#5250)
  ...
@pp-mo deleted the lazy_save_2 branch on April 28, 2023 13:47
Projects
Status: 🏁 Done - v3.6.0
Development

Successfully merging this pull request may close these issues.

Pin libnetcdf
Support lazy saving
10 participants