Commit f404ee8

Merge branch 'HDFGroup:master' into download-redirects
2 parents 5f68ae6 + 4739b60

26 files changed: +3890 −520 lines

.github/workflows/codespell.yml

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+# GitHub Action to automate the identification of common misspellings in text files
+# https://github.com/codespell-project/codespell
+# https://github.com/codespell-project/actions-codespell
+name: codespell
+on: [push, pull_request]
+permissions:
+  contents: read
+jobs:
+  codespell:
+    name: Check for spelling errors
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/[email protected]
+      - uses: codespell-project/actions-codespell@master
+        with:
+          ignore_words_list: fom,coo,ku,inout
+          skip: .git, .github
+
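The workflow above wires codespell into CI on every push and pull request. As a rough illustration of the kind of check it performs (a minimal sketch, not codespell's actual implementation), a misspelling scanner with an ignore list could look like this:

```python
# Minimal sketch of a codespell-style check: scan text for known misspellings,
# honoring an ignore list. Illustration only; codespell's real dictionary and
# options are far richer.
import re

# A tiny stand-in for codespell's dictionary of common misspellings.
CORRECTIONS = {
    "managment": "management",
    "sucessfully": "successfully",
    "repesent": "represent",
}

def find_misspellings(text, ignore_words=()):
    """Return (word, suggestion) pairs for known misspellings in *text*."""
    ignore = {w.strip().lower() for w in ignore_words}
    hits = []
    for word in re.findall(r"[A-Za-z]+", text):
        key = word.lower()
        if key in CORRECTIONS and key not in ignore:
            hits.append((word, CORRECTIONS[key]))
    return hits

hits = find_misspellings("Free-space managment was sucessfully fixed.")
# → [('managment', 'management'), ('sucessfully', 'successfully')]
```

The `ignore_words_list` input in the workflow plays the same role as the `ignore_words` parameter here: words that look misspelled but are intentional (e.g. domain abbreviations) are skipped.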

404.md

Lines changed: 8 additions & 0 deletions
@@ -5,4 +5,12 @@ title: 404 - Content Not Found
 
 # This page doesn't seem to exist
 
+## Search all Support Content
+<script async src="https://cse.google.com/cse.js?cx=f671574fa55cc44e6">
+</script>
+<div class="gcse-search"></div>
+
+<br>
+
+## Browse Documentation
 You might find what you're looking for in [Documentation](/documentation/index.html).

_config.yml

Lines changed: 2 additions & 0 deletions
@@ -17,3 +17,5 @@ defaults:
       path: ""
     values:
       layout: default
+
+url: "https://support.hdfgroup.org"

_layouts/default.html

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
-<!DOCTYPE html>
+<!DOCTYPE html>
 <html lang="{{ site.lang | default: "en-US" }}">
 <head>
 <link href="//maxcdn.bootstrapcdn.com/font-awesome/4.2.0/css/font-awesome.min.css" rel="stylesheet">
@@ -33,6 +33,7 @@
 <li><a href="https://www.hdfgroup.org/solutions/priority-support/">HDF Software Priority Support</a>
 <li><a href="https://www.hdfgroup.org/solutions/consulting/">HDF Consulting</a></li>
 <li><a href="{{site.url_docs}}/archive/support/index.html">Archive</a></li>
+<li><a href="/search/index.html">Search</a></li>
 </ul>
 <br>
 <!--
@@ -69,8 +70,7 @@ <h3>Got HDF5?</h3>
 {% if site.github.is_project_page %}
 <p>This project is maintained by <a href="{{ site.github.owner_url }}">{{ site.github.owner_name }}</a></p>
 {% endif %}
-<p><small>Hosted on GitHub Pages &mdash; Theme by <a href="https://github.com/orderedlist">orderedlist</a></small></p>
-</footer>
+</footer>
 </div>
 
 </body>

clinic/index.html

Lines changed: 16 additions & 16 deletions
Some generated files are not rendered by default.

clinic/index.org

Lines changed: 15 additions & 15 deletions
@@ -185,7 +185,7 @@ understand the context.
 : world
 : hello
 : finished at 08:25:33
-- Noticable speedup but not a panacea
+- Noticeable speedup but not a panacea
 #+begin_quote
 Eventually I hope to have a version of =h5pyd= that supports =async= (or maybe
 an entirely new package), that would make it a little easier to use.
@@ -250,7 +250,7 @@ I have an application that reads groups as units and all the datasets
 use the contiguous data layout. I believe this option, if available,
 can yield a good read performance.
 #+end_quote
-- Intersting suggestions
+- Interesting suggestions
   - Elena suggested to first create all datasets with =H5D_ALLOC_TIME_LATE=
   - Mark suggested to =H5Pget_meta_block_size= to set aside sufficient
     "group space" (Maybe also =H5Pset_est_link_info=?)
@@ -674,7 +674,7 @@ int main()
 - A :: Currently, the code will skip entries >64K, because the underlying
   =H5Dcreate= will fail. (The logic and code can be much improved.) It's better
   to first run =archive_checker_64k= and see if there are any size warnings.
-  The =h5compactor= and =h5shredder= were sucessfully run on TAR archives with
+  The =h5compactor= and =h5shredder= were successfully run on TAR archives with
   tens of millions of small images (<64K).
 ** Last week's highlights
 *** Announcements
@@ -1126,7 +1126,7 @@ However, I cannot get this to work with h5py.
 #+end_quote
 - [[https://meetingorganizer.copernicus.org/EGU22/EGU22-11201.html][DIARITSup: a framework to supervise live measurements, Digital Twins modelscomputations and predictions for structures monitoring.]]
 #+begin_quote
-DIARITSup is a chain of various softwares following the concept of ”system of
+DIARITSup is a chain of various software following the concept of ”system of
 systems”. It interconnects hardware and software layers dedicated to in-situ
 monitoring of structures or critical components. It embeds data assimilation
 capabilities combined with specific Physical or Statistical models like
@@ -1227,7 +1227,7 @@ HDFS?
 
 ** Tips, tricks, & insights
 *** SWMR (old) and compression "issues" - take 2
-- Free-space managment is disabled in the original SWMR implementation
+- Free-space management is disabled in the original SWMR implementation
 - Can lead to file bloat when compression is enabled & overly aggressive
   flushing
 - **Question:** How does the new SWMR VFD implementation behave?
@@ -1486,7 +1486,7 @@ DATASET "/equilibrium/vacuum_toroidal_field&b0" {
 - [[https://www.youtube.com/watch?v=WlnUF3LRBj4][Awkward Array: Manipulating JSON-like Data with NumPy-like Idioms]]
   - SciPy 2020 presentation by Jim Pivarski
   - Watch this!
-- How would you repesent something like this in HDF5? (Example from Jim's video)
+- How would you represent something like this in HDF5? (Example from Jim's video)
 #+begin_src python
 import awkward as ak
 array = ak.Array([
@@ -1708,7 +1708,7 @@ Next time...
         retval = EXIT_FAILURE;
         goto fail_write;
     }
-    printf("Write successed.\n");
+    printf("Write succeeded.\n");
 
     if (H5Fflush(file, H5F_SCOPE_GLOBAL) < 0) {
         retval = EXIT_FAILURE;
@@ -1889,7 +1889,7 @@ This brings us to today's ...
 
 The expression below is a [templated] Class datatype in C++, placed in a
 non-contiguous memory location, requiring scatter-gather operators and a
-mechanism to dis-assemble reassemble the components. Becuase of the complexity
+mechanism to dis-assemble reassemble the components. Because of the complexity
 AFAIK there is no automatic support for this sort of operation.
 
 #+begin_src C++
@@ -2091,7 +2091,7 @@ the process writing to the file was killed.
 - Face-to-face at [[https://www.iter.org/][ITER]] in Saint Paul-lez-Durance, France
 - Reserve your spot before telling your friends! =;-)=
 
-**** ASCR Workshop January 2022 on the Managment and Storage of Scientific Data
+**** ASCR Workshop January 2022 on the Management and Storage of Scientific Data
 - [[https://www.osti.gov/biblio/1843500-position-papers-ascr-workshop-management-storage-scientific-data][Position Papers]]
 - [[https://www.osti.gov/biblio/1845705][Technical Report]]
 
@@ -2572,7 +2572,7 @@ All solutions come with different trade-offs!
 
 ** Tips, tricks, & insights
 *** How do HDF5-UDF work?
-Currently, they are repesented as chunked datasets with a single chunk. That's
+Currently, they are represented as chunked datasets with a single chunk. That's
 why they work fine with existing tools. The UDF itself is executed as part of the
 HDF5 filter pipeline. Its code is stored in the dataset blob data plus metadata
 and managed by the UDF handler.
@@ -2950,9 +2950,9 @@ if I set it to a big value (1024).
 
 I wasn’t able to find how parallelism is exactly implemented. From the above
 behaviour it looks like the file is being locked which then blocks my whole
-programm, especially if the stride is big (more time for the other ranks to run
-into a lock and be idle inbetween). Is that really the case? I write data
-continously, so theoretically there is no need for a lock. Is is possible to
+program, especially if the stride is big (more time for the other ranks to run
+into a lock and be idle in between). Is that really the case? I write data
+continuously, so theoretically there is no need for a lock. Is is possible to
 tell the driver “don’t lock the file”?
 #+end_quote
 - What's a 'stride'? (not a hyperslab stride...)
@@ -3649,7 +3649,7 @@ fail_file:
 
 #+END_SRC
 
-- The ouput file produce looks like this:
+- The output file produce looks like this:
 
 #+BEGIN_EXAMPLE
 
@@ -3791,7 +3791,7 @@ export HDF5_USE_FILE_LOCKING="FALSE"
 - "Partial I/O gets in the way" - sorting fields by name or offset
   - Happens on each write call => overhead
   - Can this be avoided? User might provide a patch...
-**** [[https://forum.hdfgroup.org/t/read-write-specific-coordiantes-in-multi-dimensional-dataset/9137][Read/write specific coordiantes in multi-dimensional dataset?]]
+**** [[https://forum.hdfgroup.org/t/read-write-specific-coordinates-in-multi-dimensional-dataset/9137][Read/write specific coordinates in multi-dimensional dataset?]]
 - Thomas is looking for use cases from =h5py= users
 #+BEGIN_SRC python
 
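The last hunk above touches a forum thread on reading and writing specific coordinates in a multi-dimensional dataset. In h5py, that kind of point selection mirrors NumPy fancy indexing, so a NumPy-only sketch of the pattern (hypothetical in-memory data, no HDF5 file involved) might be:

```python
# Point selection on a 2-D array via fancy indexing: the same indexing pattern
# h5py users apply to datasets. Hypothetical data; no HDF5 file is involved.
import numpy as np

data = np.arange(12).reshape(3, 4)   # [[0,1,2,3],[4,5,6,7],[8,9,10,11]]
coords = np.array([(0, 1), (2, 3)])  # (row, col) pairs to read

# Split the coordinate pairs into a row index array and a column index array.
picked = data[coords[:, 0], coords[:, 1]]
# → array([ 1, 11])
```

With an h5py dataset in place of `data`, the same expression selects individual points, though scattered point reads against a real file are typically much slower than contiguous hyperslab reads.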
documentation/hdf5-docs/Migrating+from+HDF5+1.12+to+HDF5+1.14.md

Lines changed: 1 addition & 1 deletion
@@ -35,4 +35,4 @@ The way optional operations are handled in the virtual object layer (VOL) change
 The virtual file layer has changed in HDF5 1.14.0. Existing virtual file drivers (VFDs) will have to be updated to work with this version of the library.
 
 ## Virtual Object Layer (VOL) Changes
-The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors shoul be updated to work with this version of the library.
+The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors should be updated to work with this version of the library.

documentation/hdf5-docs/hdf5_topics/DebugH5App.md

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ Code to accumulate statistics is included at compile time by using the --enable-
 | hl | No | Local heaps |
 | i | Yes | Interface abstraction |
 | mf | No | File memory management |
-| mm | Yes | Library memory managment |
+| mm | Yes | Library memory management |
 | o | No | Object headers and messages |
 | p | Yes | Property lists |
 | s | Yes | Data spaces |
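The table above lists debug packages that the patched document describes; HDF5 lets you turn tracing for these packages on at run time through the `HDF5_DEBUG` environment variable. A minimal sketch of setting it from a hypothetical Python launcher script, before any HDF5-based program starts:

```python
# Enable HDF5 library tracing for the 'mm' (library memory management) package.
# Hypothetical wrapper script; the package names come from the table above.
# The variable must be set before the HDF5 library is loaded.
import os

os.environ["HDF5_DEBUG"] = "mm"  # space-separated list also works, e.g. "mm p s"

# Any HDF5 program launched from this environment (e.g. via subprocess.run)
# now emits memory-management debug output on stderr.
```

Packages marked "No" in the second column require a library built with the debug statistics option before their output appears.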

documentation/hdf5-docs/release_specifics/Migrating_from_HDF5_1.12_to_HDF5_1.14.md

Lines changed: 1 addition & 1 deletion
@@ -35,4 +35,4 @@ The way optional operations are handled in the virtual object layer (VOL) change
 The virtual file layer has changed in HDF5 1.14.0. Existing virtual file drivers (VFDs) will have to be updated to work with this version of the library.
 
 ## Virtual Object Layer (VOL) Changes
-The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors shoul be updated to work with this version of the library.
+The virtual object layer has changed significantly in HDF5 1.14.0 and the 1.12 VOL API is now considered deprecated and unsupported. Existing virtual object layer connectors should be updated to work with this version of the library.

documentation/hdf5-docs/release_specifics/hdf5_1_14.md

Lines changed: 5 additions & 4 deletions
@@ -10,9 +10,10 @@ redirect_from:
 
 ### [New Features in HDF5 Release 1.14](new_features_1_14.md)
 
-* [H5Sencode / H5Sdecode Format Change - RFC](https://docs.hdfgroup.org/hdf5/rfc/H5Sencode_format.docx.pdf)
-* [Update to References](https://docs.hdfgroup.org/hdf5/rfc/RFC_Update_to_HDF5_References.pdf)
-* [Update to Selections](https://docs.hdfgroup.org/hdf5/rfc/selection_io_RFC_210610.pdf)
-* [Virtual Object Layer](https://docs.hdfgroup.org/hdf5/develop/_v_o_l__connector.html)
+* [16 bit floating point and Complex number datatypes](https://github.com/HDFGroup/hdf5doc/blob/master/RFCs/HDF5_Library/Float16/RFC__Adding_support_for_16_bit_floating_point_and_Complex_number_datatypes_to_HDF5.pdf)
+* [Asynchronous I/O operations](asyn_ops_wHDF5_VOL_connectors.html)
+* [Subfiling VFD](https://support.hdfgroup.org/releases/hdf5/documentation/rfc/RFC_VFD_subfiling_200424.pdf)
+* [Onion VFD](https://support.hdfgroup.org/releases/hdf5/documentation/rfc/Onion_VFD_RFC_211122.pdf)
+* [Multi Dataset I/O](https://support.hdfgroup.org/releases/hdf5/documentation/rfc/H5HPC_MultiDset_RW_IO_RFC.pdf)
 
 ### [Software Changes from Release to Release for HDF5 1.14](sw_changes_1.14.md)
