+-  __Card One__
+
+ ---
+
+ Card body one
+
+-  __Card Two__
+
+ ---
+
+ Card body two
+
+-  __Card Three__
+
+ ---
+
+ Card body three
+
+```
+
+By default, cards will have a maximum of two columns. The classes `md-grid-three` and `md-grid-four` can be added to the main `<div>` to increase this to three and four cards respectively.
+
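+A sketch of what this might look like, assuming the grid uses the Material for MkDocs `grid cards` wrapper and the extra class is simply appended to the wrapper's class list:
+
+```md
+<div class="grid cards md-grid-three" markdown>
+
+-  __Card One__
+
+ ---
+
+ Card body one
+
+</div>
+```
+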
+If increasing the number of cards per row, consider [hiding the table of contents using `hide`](NEWPAGE.md#material-theme-parameters) to allow more room.
+
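+For reference, a minimal sketch of hiding the table of contents via the page front matter (the `hide` metadata key used by the Material theme):
+
+```md
+---
+hide:
+  - toc
+---
+```
+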
+The following card format is not part of the grid format itself, but should be used as the standard format for cards:
+
+```md
+-  __Title__
+
+ --- (horizontal rule)
+
+ Text
+```
+
## Macros
Macros allow the use of [Jinja filter syntax](https://jinja.palletsprojects.com/en/3.1.x/templates/) _inside the markdown files_, allowing for much more flexible templating.
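For example, the `{% include "partials/support_request.html" %}` snippet used throughout these docs is rendered this way. A minimal sketch of other Jinja syntax a page could use (the `clusters` variable below is purely illustrative):

```md
{% set clusters = ["Mahuika", "Māui"] %}

{% for cluster in clusters %}
- {{ cluster | upper }}
{% endfor %}
```
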
diff --git a/docs/General/.pages.yml b/docs/General/.pages.yml
deleted file mode 100644
index 9c85eb253..000000000
--- a/docs/General/.pages.yml
+++ /dev/null
@@ -1,6 +0,0 @@
----
-nav:
-- Announcements
-- FAQs
-- NeSI_Policies
-- Release_Notes
diff --git a/docs/General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md b/docs/General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md
deleted file mode 100644
index f335ffd87..000000000
--- a/docs/General/Announcements/Accessing_NeSI_Support_during_the_Easter_break.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-description: A page sharing the details of reduced support hours over Easter break
-created_at: '2024-03-20T01:58:22Z'
-hidden: false
-position: 0
-tags: []
-title: Accessing NeSI Support during the Easter break
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 9308584352783
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-During the Easter break, [NeSI platform
-services](https://status.nesi.org.nz/) will be online and available, but
-urgent / critical requests received between 5:00 pm Thursday 28 March
-and 9:00 am Wednesday 03 April will be addressed on a best effort
-basis.
-
-Below is a quick reminder of our main support channels as well as other
-sources of self-service support:
-
-- Changes to system status are reported via our [System Status
- page](https://status.nesi.org.nz/ "https://status.nesi.org.nz/").
- You can also subscribe for notifications of system updates and
- unplanned outages sent straight to your inbox. [Sign up
- here.](../../Getting_Started/Getting_Help/System_status.md)
-
-- [Consult our User
- Documentation](https://www.docs.nesi.org.nz) pages
- for instructions and guidelines for using the systems.
-
-- [Visit NeSI’s YouTube
- channel](https://www.youtube.com/playlist?list=PLvbRzoDQPkuGMWazx5LPA6y8Ji6tyl0Sp "https://www.youtube.com/playlist?list=PLvbRzoDQPkuGMWazx5LPA6y8Ji6tyl0Sp") for
- introductory training webinars.
-
-- {% include "partials/support_request.html" %} (Note:
- non-emergency requests will be addressed on or after 03 April)
-
-On behalf of the entire NeSI team, we wish you a safe and relaxing
-break.
diff --git a/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md b/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md
deleted file mode 100644
index 0465589b9..000000000
--- a/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-created_at: '2020-09-04T02:01:07Z'
-tags: []
-title: "Improvements to Fair Share job prioritisation on M\u0101ui"
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 360001829555
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-*On Thursday 3 September 2020, NeSI updated the way we prioritise jobs
-on the Māui HPC platform.*
-
-## Background
-
-Since the start of the year, we have been using Slurm's Fair Tree
-algorithm on Māui (*not yet on Mahuika*) to prioritise jobs. This
-provides a hierarchical structure to Slurm's account management, with
-the hierarchy representing shares of a total cluster under Slurm's
-control. This enables control of higher level or aggregate account
-considerations, such as ensuring a group of projects within a research
-programme or institution are ensured access to their share of a cluster.
-
-Under our Fair Tree implementation, each of [NeSI's four collaborating
-institutions](https://www.nesi.org.nz/about-us) is assigned a percentage
-share of Māui, alongside a percentage share for MBIE's Merit allocations
-(including Postgraduate and Proposal Development allocations), and the
-remainder as a share to allocations coming from subscriptions.
-
-These six shares, or what we in NeSI call national pools, are then
-ranked in order, starting with the pool that has been using at the
-lowest rate compared to its allocated percentage share. *See [this
-page](https://slurm.schedmd.com/fair_tree.html) (off site) for more
-details about Slurm's Fair Tree algorithm.*
-
-Previously, we had given each pool a hard-coded share of Māui use. These
-hard-coded shares did not reflect ongoing rounds of allocations given to
-projects, and so some researchers were suffering from deprioritised
-jobs. These jobs ended up delayed in the queue, sometimes excessively.
-
-## What has changed?
-
-We have now recalculated the shares for each pool to take into account
-the following:
-
-- The investments into HPC platforms by the various collaborating
- institutions and by MBIE;
-- The capacity of each HPC platform;
-- The split of requested time (allocations) by project teams between
- the Māui and Mahuika HPC platforms, both overall and within each
- institution's pool.
-
-Under this scheme, any job's priority is affected by the behaviour of
-other workload within the same project team, but also other project
-teams drawing on the same pool. In particular, even if your project team
-has been under-using compared to your allocation, your jobs may still be
-held up if:
-
-- Other project teams at your institution (within your pool) have been
- over-using compared to their allocations, or
-- Your institution has approved project allocations totalling more
- time than it is entitled to within its pool's share.
-
-## What will I notice?
-
-If your institution or pool's ranking has not changed, nothing much will
-immediately change for you.
-
-However, if your institution or pool's assigned share of the machine has
-increased, it will become easier to move up the priority rankings, at
-least in the short term.
-
-Conversely, if your institution or pool's assigned share of the machine
-has decreased, it will become easier to move down the rankings. This
-change is one you are more likely to notice over time.
-
-Whenever your institution or pool's ranking changes, whether because of
-usage or because we adjust the assigned shares based on ongoing rounds
-of allocations, your job priorities will alter almost immediately.
-Moving up the rankings will increase your job priorities. Moving down
-the rankings will decrease your job priorities.
-
-## What other changes are NeSI planning on making?
-
-We are looking at introducing Fair Tree on Mahuika as well, though not
-on Māui ancillary nodes. We will announce this change well ahead of any
-planned introduction.
-
-We will also adjust the assigned Fair Tree shares on Māui routinely so
-we don't diverge from allocations across HPC platforms again.
diff --git a/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md b/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md
deleted file mode 100644
index 7a6303e03..000000000
--- a/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-created_at: '2022-07-11T23:23:04Z'
-status:
-tags:
-- mahuika
-- .core
-- corefile
-- coredump
-title: 'Mahuika: Core Dumps generation now disabled as default'
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 5126681349903
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-A Slurm configuration change has been made on Mahuika so that the
-maximum size of [core file](../FAQs/What_is_a_core_file.md) that
-can be generated inside a job now defaults to `0` bytes rather
-than `unlimited`.
-
-You can reenable core dumps with `ulimit -c unlimited` .
diff --git a/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md b/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md
deleted file mode 100644
index bb868c337..000000000
--- a/docs/General/Announcements/Mahuikas_new_Milan_CPU_nodes_open_to_all_NeSI_users.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-created_at: '2023-03-30T02:23:48Z'
-tags: []
-title: "Mahuika's new Milan CPU nodes open to all NeSI users"
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 6686934564239
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-Following a successful early access programme, Mahuika’s newest CPU
-nodes are now available for use by any projects that have a Mahuika
-allocation on NeSI's HPC Platform.
-
-The production launch of these new nodes is an exciting milestone in
-NeSI’s strategy to lower the carbon footprint and continually improve
-the performance and fit-for-purpose design of our platforms to meet your
-research needs.
-
-## What’s new
-
-- faster, more powerful computing, enabled by AMD 3rd Gen EPYC Milan
- architecture
-
-- specialised high-memory capabilities, allowing rapid simultaneous
- processing
-
-- improved energy efficiency - these nodes are 2.5 times more power
- efficient than Mahuika’s original Broadwell nodes
-
-How to access
-
-- Visit our Support portal for [instructions to get
- started](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
- and details of how the Milan nodes differ from Mahuika’s original
- Broadwell nodes
-
-## Learn more
-
-- [Watch this webinar](https://youtu.be/IWRZLl__uhg) sharing a quick
- overview of the new resources and some tips for making the most of
- the nodes.
-
-- Bring questions to our [weekly Online Office
- Hours](../../Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md)
-
-- {% include "partials/support_request.html" %}
- any time
-
-If you have feedback on the new nodes or suggestions for improving your
-experience getting started with or using any of our systems, please [get
-in touch {% include "partials/support_request.html" %}.
diff --git a/docs/General/Announcements/Maui_upgrade_is_complete.md b/docs/General/Announcements/Maui_upgrade_is_complete.md
deleted file mode 100644
index fdd4f88de..000000000
--- a/docs/General/Announcements/Maui_upgrade_is_complete.md
+++ /dev/null
@@ -1,221 +0,0 @@
----
-created_at: '2023-03-09T02:46:57Z'
-tags: []
-title: "M\u0101ui upgrade is complete"
-vote_count: 1
-vote_sum: 1
-zendesk_article_id: 6546340907919
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-The recent upgrade of the Māui is now complete. The operating system,
-libraries, and software stack have been upgraded and rebuilt, improving
-performance and stability and enabling new capabilities.
-
-If you encounter any issues, have any questions about the upgrade, need
-help with getting your software working on the upgraded system, or have
-a suggestion for our documentation, please {% include "partials/support_request.html" %}. We are committed to
-providing you with the best computing resources possible and will do our
-best to assist you.
-
-## Why
-
-This upgrade brings Māui's operating environment up to the latest
-supported release available for Cray's XC50 supercomputing platforms,
-with performance, reliability, and security benefits. This includes more
-up-to-date tooling and libraries with associated features and
-performance benefits. This work also enables further upgrades to NeSI's
-shared HPC storage system.
-
-## Impact
-
-Please be aware that this is a major upgrade to Māui’s operating
-environment which may impact the compatibility of software compiled with
-the current toolchains and libraries, as such users should expect to
-need to test existing applications post-upgrade and in some cases
-(especially where the application is leveraging software modules on
-Māui) rebuilding will be required. Users of applications maintained as
-software modules in the NeSI software stack can expect NeSI to provide
-rebuilt and/or updated versions of these applications (though this will
-be an ongoing effort post-upgrade).
-
-The following information will help your transition from the pre-upgrade
-Māui environment to the post-upgrade one:
-
-- The three main toolchains (CrayCCE, CrayGNU and CrayIntel) have all
- been updated to release 23.02 (CrayCCE and CrayGNU) and 23.02-19
- (CrayIntel). **The previously installed versions are no longer
- available**.
-- Consequently, nearly all of the previously provided **environment
- modules have been replaced by new versions**. You can use the
- *module avail* command to see what versions of those software
- packages are now available. If your batch scripts load exact module
- versions, they will need updating.
-- The few jobs in the Slurm queue at the start of the upgrade process
- have been placed in a “user hold” state. You have the choice of
- cancelling them with *scancel <jobid>* or releasing them with
- *scontrol release <jobid>*.
-- Be aware that if you have jobs submitted that rely on any software
- built before the upgrade, there is a good chance that this software
- will not run. **We recommend rebuilding any binaries you maintain**
- before running jobs that utilise those binaries.
-- Note that Māui login does not require adding a second factor to the
- password when authenticating on the Māui login node after the first
- successful login attempt. That is, if you have successfully logged
- in using <first factor><second factor> format, no second
- factor part will be required later on.
-
-We have also updated our support documentation for Māui to reflect the
-changes, so please review it before starting any new projects.
-
-## Software Changes
-
-Software built on Māui may not work without recompilation after the
-upgrade. See the tables below for more detail regarding version changes.
-If you have any particular concerns about the impact on your work,
-please {% include "partials/support_request.html" %}.
-
-The table below outlines the known and expected Cray component changes:
-
-| CLE Components | Version pre-upgrade (19.04) | Version post-upgrade (23.02) |
-| --- | --- | --- |
-| Cray Developer Toolkit | 19.04 | 23.02 |
-| Cray Compiling Environment | CCE 8.7.10 | CCE 15.0.1 |
-| Cray Message Passing Toolkit | MPT 7.7.6<br>PMI 5.0.14<br>GA 5.3.0.10<br>Cray OpenSHMEMX 8.0.1 | MPT 7.7.20<br>PMI 5.0.17<br>Cray OpenSHMEMX 9.1.2 |
-| Cray Debugging Support Tools | ATP 2.13<br>CCDB 3.0.4<br>CTI 2.15.5<br>Gdb4hpc 3.0.10<br>STAT 3.0.1.3<br>Valgrind4hpc 1.0.0 | ATP 3.14.13<br>CCDB 4.12.13<br>CTI 2.17.2<br>Gdb4hpc 4.14.3<br>STAT 4.11.13<br>Valgrind4hpc 2.12.11 |
-| Cray Performance Measurement & Analysis Tools – CPMAT (1) | Perftools 7.0.6<br>PAPI 5.6.0.6 | Perftools 23.02.0<br>PAPI 7.0.0.1 |
-| Cray Scientific and Math Libraries – CSML | LibSci 19.02.1<br>LibSci_ACC 18.12.1 (CLE 6)<br>PETSc 3.9.3.0<br>Trilinos 12.12.1.1<br>TPSL 18.06.1<br>FFTW 2.1.5.9<br>FFTW 3.3.8.2 | Petsc 3.14.5.0<br>TPSL 20.03.2<br>Trilinos 12.18.1.1 |
-| Cray Environment Setup and Compiling support – CENV | craype-installer 1.24.5<br>craypkg-gen 1.3.7<br>craype 2.5.18<br>cray-modules 3.2.11.1<br>cray-mpich-compat 1.0.0-8 (patch)<br>cdt-prgenv 6.0.5 | craypkg-gen 1.3.26<br>craype 2.7.15 |
-| Third party products | HDF5 1.10.2.0<br>NetCDF 4.6.1.3<br>parallel-NetCDF 1.8.1.4<br>iobuf 2.0.8<br>java jdk 1.8.0_51 (CLE 6)<br>GCC 7.3.0<br>GCC 8.3.0<br>cray-python 2.7.15.3 & 3.6.5.3 (CLE 6)<br>cray-R 3.4.2 | HDF5 1.12.2.3<br>NetCDF 4.9.0.3<br>Parallel-NetCDF 1.12.3.3<br>iobuf 2.0.10<br>GCC 10.3.0<br>GCC 12.1.0<br>cray-python 3.9.13.2<br>cray-R 4.2.1.1 |
-| Third Party Licensed Products | PGI 18.10 (CLE 6 only)<br>TotalView 2018.3.8<br>Forge 19.0.3.1 | Forge 21.0.3<br>Totalview 2021.2.14 |
-
-[S-2529: XC Series Cray Programming Environment User's
-Guide](https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=a00113984en_us)
-
-[S-2559: XC Series Software Installation and Configuration Guide (CLE
-7.0.UP04 Rev
-E)](https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=sd00002132en_us)
-
-Reference:
-
-[HPE Cray Programming Environment 21.09 for Cray XC (x86)
-Systems](https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=a00118188en_us)
-
-[Cray XC (x86) Programming Environments
-19.04](https://support.hpe.com/hpesc/public/docDisplay?docId=a00114073en_us&docLocale=en_US)
-
-[Applications supported by NeSIteam](../../Scientific_Computing/Supported_Applications/index.md)
diff --git a/docs/General/Announcements/NeSI_Support_is_changing_tools.md b/docs/General/Announcements/NeSI_Support_is_changing_tools.md
deleted file mode 100644
index 0aa5bc18a..000000000
--- a/docs/General/Announcements/NeSI_Support_is_changing_tools.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-created_at: 2024-05-20
-description:
-tags: []
-search:
- boost: 0.1
----
-
-From the 29th of May, NeSI's Support team will be using a new support desk platform to accept, track, and solve inquiries and issues sent to [support@nesi.org.nz](mailto:support@nesi.org.nz). The change is part of an evolution of our tools to better support researchers using NeSI's compute platforms and data services.
-
-## How this impacts you
-
-Emailing [support@nesi.org.nz](mailto:support@nesi.org.nz) is the most common way to connect with our Support team. You can ask us questions, let us know of issues or challenges you're having with systems or services, and action tasks related to your account and allocation(s).
-The process of contacting our Support team won't change much (see below for more details), but behind the scenes, the new ticketing system will allow us to more effectively respond to your requests for help and suggestions for service improvements.
-
-## What is changing
-
-* Replies to support tickets will come from 'support@cloud.nesi.org.nz'.
-* We are reviewing the value of having a separate portal where you can view your past tickets, open tickets or raise new tickets.
-Tell us what you think using this [form](https://docs.google.com/forms/d/e/1FAIpQLSdvR-0kJxunSiKUYNtHsG6l7Ne9Q5KPeunCVJiSbMuTvGcS8A/viewform) or by sending an email to [support@nesi.org.nz](mailto:support@nesi.org.nz).
-
-## What stays the same
-
-* Requests for support or questions about NeSI platforms and services can still be sent by email to [support@nesi.org.nz](mailto:support@nesi.org.nz). These will raise new support tickets for response from a member of our Support Team.
-* All your current tickets will stay open. Any requests you currently have in the queue will be migrated over to the new support desk platform and solved from there.
-
-## Documentation Changes
-
-Our support documentation is now hosted at [docs.nesi.org.nz](https://docs.nesi.org.nz).
-We made the shift to improve maintainability, openness, and collaboration around our support documentation. We shared more details [in this announcement](https://docs.nesi.org.nz/General/Announcements/Upcoming_changes_to_NeSI_documentation/).
-We would love to hear your feedback on the new documentation pages. Let us know your thoughts [via this form](https://docs.google.com/forms/d/e/1FAIpQLSdBNPmOEy-SqUmktZaoaMXs2VO31W3DaAh6Py_lNf1Td2VBfA/viewform) or by emailing [support@nesi.org.nz](mailto:support@nesi.org.nz)
-
-Thank you for your patience while we make these changes. We're working to ensure responses to support requests are not overly delayed during the switchover. In general, we strive to reply to support requests within one business day of receiving a message.
-
-If you have any questions at any time, send an email to [support@nesi.org.nz](mailto:support@nesi.org.nz) or pop into our [online Weekly Office Hours](https://docs.nesi.org.nz/Getting_Started/Getting_Help/Weekly_Online_Office_Hours/) to chat one-on-one with a member of our Support team.
diff --git a/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md b/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md
deleted file mode 100644
index 412f7a8f0..000000000
--- a/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-created_at: '2022-03-22T02:16:17Z'
-tags:
-- general
-title: Slurm upgrade to version 21.8
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 4544913401231
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-- Added `--me` option, equivalent to` --user=$USER`.
-- Added "pendingtime" as a option for --Format.
-- Put sorted start times of "N/A" or 0 at the end of the list.
-
-
-
-- Add time specification: "now-" (i.e. subtract from the present)
-- AllocGres and ReqGres were removed. Alloc/ReqTres should be used
- instead.
-
-
-
-- MAGNETIC flag on reservations. Reservations the user doesn't have to
- even request.
-- The LicensesUsed line has been removed from `scontrol show config` .
- Please use updated `scontrol show licenses` command as an
- alternative.
-
-
-
-- `--threads-per-core` now influences task layout/binding, not just
- allocation.
-- `--gpus-per-node` can be used instead of `--gres=GPU`
-- `--hint=nomultithread` can now be replaced
- with `--threads-per-core=1`
-- The inconsistent terminology and environment variable naming for
- Heterogeneous Job ("HetJob") support has been tidied up.
-- The correct term for these jobs are "HetJobs", references to
- "PackJob" have been corrected.
-- The correct term for the separate constituent jobs are
- "components", references to "packs" have been corrected.
-
-
-
-- Added support for an "Interactive Step", designed to be used with
- salloc to launch a terminal on an allocated compute node
- automatically. Enable by setting "use\_interactive\_step" as part of
- LaunchParameters.
-
-
-
-- By default, a step started with srun will be granted exclusive (or
- non- overlapping) access to the resources assigned to that step. No
- other parallel step will be allowed to run on the same resources at
- the same time. This replaces one facet of the '--exclusive' option's
- behavior, but does not imply the '--exact' option described below.
- To get the previous default behavior - which allowed parallel steps
- to share all resources - use the new srun '--overlap' option.
-- In conjunction to this non-overlapping step allocation behavior
- being the new default, there is an additional new option for step
- management '--exact', which will allow a step access to only those
- resources requested by the step. This is the second half of the
- '--exclusive' behavior. Otherwise, by default all non-gres resources
- on each node in the allocation will be used by the step, making it
- so no other parallel step will have access to those resources unless
- both steps have specified '--overlap'.
-
-
-
-- New command which permits crontab-compatible job scripts to be
- defined. These scripts will recur automatically (at most) on the
- intervals described.
diff --git a/docs/General/Announcements/University_of_Auckland_ANSYS_users.md b/docs/General/Announcements/University_of_Auckland_ANSYS_users.md
deleted file mode 100644
index 6c0ff4fca..000000000
--- a/docs/General/Announcements/University_of_Auckland_ANSYS_users.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-created_at: '2021-04-03T22:28:54Z'
-tags: []
-title: University of Auckland - ANSYS users
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 360003984776
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-
-On 01/04/2021 afternoon, there was a change to the University ANSYS
-licences; you may find that your jobs fail with a licence error.
-
-The following command should resolve the issue (where `-revn 202` is
-replaced with the version you use).
-
-``` sl
-module load ANSYS/2020R2
-ansysli_util -revn 202 -deleteuserprefs
-```
-
-The effect this will have on all of the ANSYS products is yet to be
-determined, so please {% include "partials/support_request.html" %} if you encounter problems.
diff --git a/docs/General/Announcements/Upcoming_changes_to_NeSI_documentation.md b/docs/General/Announcements/Upcoming_changes_to_NeSI_documentation.md
deleted file mode 100644
index 4a2b512b5..000000000
--- a/docs/General/Announcements/Upcoming_changes_to_NeSI_documentation.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-created_at: '2024-03-25T01:25:49Z'
-hidden: false
-position: 0
-status: new
-tags:
-- announcement
-title: Upcoming changes to NeSI documentation
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 9367039527823
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-Over the next few months NeSI will be making the shift to a new
-framework for our support documentation.
-
-The content you know and love will be unchanged, but things will look a
-bit different.
-
-We also have a new domain,
-[docs.nesi.org.nz](https://www.docs.nesi.org.nz/?utm_source=announcement)
-where you can browse the 'new' docs now.
-
-[support.nesi.org.nz](https://support.nesi.org.nz) will continue
-displaying the 'old' docs for a bit longer while we ensure everything is
-working as it should.
-
-## Why?
-
-**Maintainability:** Due to the large number of pages hosted, keeping
-information up to date requires a lot of time and effort. The new system
-will make this easier.
-
-**Openness:** We are moving from a proprietary closed source content
-management system to a community maintained open source one based on
-[mkdocs](https://www.mkdocs.org/).
-
-**Collaboration:** Our new docs are publicly hosted on GitHub, meaning
-anyone can view, copy, and suggest changes to the source material. This
-will help ensure our documentation is more accessible and responsive to
-community needs.
-
-**Pretty:** This one is subjective, but we think the docs have never
-looked better.
-
-## What will happen to old links?
-
-[support.nesi.org.nz](https://support.nesi.org.nz) will not be going
-anywhere, ***any links you have saved will continue to work.***
-
-## What do I need to do?
-
-*Nothing at all.*
-
-If you like trying new things, you can see our new docs at
-[docs.nesi.org.nz](https://www.docs.nesi.org.nz/?utm_source=announcement)
-
-We would love to hear your feedback via
-[form](https://docs.google.com/forms/d/e/1FAIpQLSdBNPmOEy-SqUmktZaoaMXs2VO31W3DaAh6Py_lNf1Td2VBfA/viewform?usp=sf_link)
-or {% include "partials/support_request.html" %}
diff --git a/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuikas_new_Milan_nodes.md b/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuikas_new_Milan_nodes.md
deleted file mode 100644
index 35c99b438..000000000
--- a/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuikas_new_Milan_nodes.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-created_at: '2023-03-28T22:23:54Z'
-tags: []
-title: "Upcoming webinar: Tips for making the most of Mahuika\u2019s new Milan nodes"
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 6678260710031
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-
-In late 2022, the Mahuika cluster was expanded to allow a wider range of
-research communities to adopt HPC approaches and build digital skills
-within their research teams.
-
-Join us on Thursday 30 March for a short webinar sharing some practical
-tips and tricks for making the most of these new resources:
-
-**Making the most of Mahuika's new Milan nodes
-Thursday 30 March**
-**11:30 am - 12:00 pm**
-**[Click here to
-RSVP](https://www.eventbrite.co.nz/e/webinar-making-the-most-of-mahuikas-new-milan-nodes-registration-557428302057)**
-
-*Background:*
-Following a successful early access programme, Mahuika’s newest CPU
-nodes are now available for use by any projects that have a Mahuika
-allocation on NeSI's HPC Platform. The production launch of these new
-nodes is an exciting milestone in NeSI’s strategy to lower the carbon
-footprint and continually improve the performance and fit-for-purpose
-design of our platforms to meet your research needs.
-
-*What’s new*
-
-- faster, more powerful computing, enabled by AMD 3rd Gen EPYC Milan
- architecture
-
-- specialised high-memory capabilities, allowing rapid simultaneous
- processing
-
-- improved energy efficiency - these nodes are 2.5 times more power
- efficient than Mahuika’s original Broadwell nodes
-
-Come along to [this
-webinar](https://www.eventbrite.co.nz/e/webinar-making-the-most-of-mahuikas-new-milan-nodes-registration-557428302057)
-to learn more and to ask questions about how your research project can
-use these powerful resources.
-
-***About the speaker***
-
-Alexander Pletzer is a Research Software Engineer working for NeSI at
-NIWA. Alex helps researchers run better and faster on NeSI platforms.
-
-***More Information***
-
-If you're unable to join us for this session but have questions about
-the Milan nodes or would like more information, come along to one of our
-[weekly Online Office
-Hours](../../Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md)
-or email
anytime.
diff --git a/docs/General/Announcements/Visual_Studio_Code_Remote-Latest_Version_Not_Supported_UPDATE.md b/docs/General/Announcements/Visual_Studio_Code_Remote-Latest_Version_Not_Supported_UPDATE.md
deleted file mode 100644
index 1ed2a8591..000000000
--- a/docs/General/Announcements/Visual_Studio_Code_Remote-Latest_Version_Not_Supported_UPDATE.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-created_at: '2024-02-07T20:23:09Z'
-hidden: false
-position: 1
-status: new
-tags:
-- announcement
-title: 'Visual Studio Code Remote: Latest Version Not Supported (UPDATE)'
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 8974326930319
-zendesk_section_id: 200732737
-search:
- boost: 0.1
----
-
-The latest version of Visual Studio Code (1.86.0) released in January
-2024 requires a later version of GLIBC than is currently available on
-the NeSI login nodes.
-
-For the moment please roll back to the [previous release
-(1.8.5)](https://code.visualstudio.com/updates/v1_85).
-
-You will also have to roll back the 'Remote - SSH' plugin. This can be
-done by selecting the plugin in the Extension Marketplace, clicking on
-the 'Uninstall' drop down and choosing 'Install another version'.
-
-## Update: 09/02/2024
-
-Due to the amount of [feedback on the glibc
-change](https://github.com/microsoft/vscode/issues/204658) the VSCode
-team have said that **future versions will allow you to connect with a
-warning instead.**
-
-
-
-You can get the fix in a [pre-release build
-(1.86.1)](https://github.com/microsoft/vscode/releases/tag/1.86.1), or
-wait for the next stable release in March.
diff --git a/docs/General/Release_Notes/index.md b/docs/General/Release_Notes/index.md
deleted file mode 100644
index d0908b597..000000000
--- a/docs/General/Release_Notes/index.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-created_at: '2021-02-23T19:52:34Z'
-tags: []
-vote_count: 0
-vote_sum: 0
-title: Release Notes
-zendesk_article_id: 360003507115
-zendesk_section_id: 360000437436
----
-
-NeSI publishes release notes for applications, 3rd party applications
-and NeSI services. This section will function as a directory to find all
-published release note articles with the label 'releasenote'.
-
-## NeSI applications
-
-You can find published release notes for NeSI applications in the
-context of the structure of our documentation.
-Product context > release notes section > versioned release note
-
-Example: Release Notes Long-Term Storage can
-be located under Storage, Long-Term Storage
-
-## 3rd party applications
-
-3rd party applications listed under [Supported Applications](../../Scientific_Computing/Supported_Applications/index.md)
-have child pages with details about the available versions on NeSI, and
-a reference to the vendor release notes or documentation.
-
-## NeSI services
-
-Jupyter on NeSI is a recent example of a service composed of multiple
-components and dependencies that NeSI maintains.
diff --git a/docs/Getting_Started/.pages.yml b/docs/Getting_Started/.pages.yml
deleted file mode 100644
index 0c149a581..000000000
--- a/docs/Getting_Started/.pages.yml
+++ /dev/null
@@ -1,9 +0,0 @@
----
-nav:
- - Accounts, Projects and Allocations : Accounts-Projects_and_Allocations
- - Accessing_the_HPCs
- - Next_Steps
- - Getting_Help
- - Cheat_Sheets
- - ...
- - my.nesi.org.nz: my-nesi-org-nz
\ No newline at end of file
diff --git a/docs/Getting_Started/Accessing_the_HPCs/.pages.yml b/docs/Getting_Started/Accessing_the_HPCs/.pages.yml
deleted file mode 100644
index a2df11ddf..000000000
--- a/docs/Getting_Started/Accessing_the_HPCs/.pages.yml
+++ /dev/null
@@ -1,8 +0,0 @@
----
-nav:
- - Setting_Up_and_Resetting_Your_Password.md
- - Setting_Up_Two_Factor_Authentication.md
- - Choosing_and_Configuring_Software_for_Connecting_to_the_Clusters.md
- - X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md
- - Port_Forwarding.md
- - ...
diff --git a/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md
deleted file mode 100644
index a037935e5..000000000
--- a/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-created_at: '2021-10-04T19:55:45Z'
-tags:
-- x11
-- x forwarding
-- x-forwarding
-title: X-Forwarding using the Ubuntu Terminal on Windows
-vote_count: 10
-vote_sum: -6
-zendesk_article_id: 4407442866703
-zendesk_section_id: 360000034315
----
-
-
-1. [Download and install Xming from here](https://sourceforge.net/projects/xming/). Don't install an SSH
- client when prompted during the installation, if you are prompted
- for Firewall permissions after installing Xming close the window
- without allowing any Firewall permissions.
-2. Open your Ubuntu terminal and install x11-apps with the command:
- `sudo apt install x11-apps -y`.
-3. Restart your terminal, start your Xming (there should be a desktop
- icon after installing it). You should now be able to X-Forward
- displays from the HPC when you log in (assuming you have completed
- the
- [terminal setup instructions found here](../../Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md)).
diff --git a/docs/Getting_Started/Getting_Help/.pages.yml b/docs/Getting_Started/Getting_Help/.pages.yml
deleted file mode 100644
index 704d239ad..000000000
--- a/docs/Getting_Started/Getting_Help/.pages.yml
+++ /dev/null
@@ -1,8 +0,0 @@
----
-nav:
- - NeSI_wide_area_network_connectivity.md
- - Weekly_Online_Office_Hours.md
- - Consultancy.md
- - Job_efficiency_review.md
- - System_status.md
- - Introductory_Material.md
\ No newline at end of file
diff --git a/docs/Getting_Started/Getting_Help/Consultancy.md b/docs/Getting_Started/Getting_Help/Consultancy.md
deleted file mode 100644
index 78e7441f9..000000000
--- a/docs/Getting_Started/Getting_Help/Consultancy.md
+++ /dev/null
@@ -1,163 +0,0 @@
----
-created_at: '2019-02-07T21:55:45Z'
-tags:
-- help
-vote_count: 1
-vote_sum: 1
-zendesk_article_id: 360000751916
-zendesk_section_id: 360000164635
----
-
-NeSI's Consultancy service provides scientific and HPC-focussed
-computational and data science support to research projects across a
-range of domains.
-
-## Need support with your research project?
-
-If you would like to learn more about NeSI's Consultancy service and how
-you can work with NeSI's Research Software and Data Science Engineers on
-a project, please {% include "partials/support_request.html" %} to set up an
-initial meeting. We can discuss your needs and complete a Consultancy
-application form together.
-
-Researchers from NeSI collaborator institutions (University of Auckland,
-NIWA, University of Otago and Manaaki Whenua - Landcare Research) and
-those with Merit projects can usually access consultancy at no cost to
-themselves, based on their institution's or MBIE's investment into NeSI.
-
-## What do we do?
-
-The NeSI team are available to help with any stage of your research
-software development. We can get involved with designing and developing
-your software from scratch, or assist with improving software you have
-already written.
-
-The service is completely bespoke and tailored to your requirements.
-Some examples of outcomes we could assist with (this list is general and
-non-exhaustive):
-
-- Code development
- - Design and develop research software from scratch
- - Algorithmic improvements
- - Translate Python/R/Matlab code to C/C++/Fortran for faster
- execution
- - Accelerate code by offloading computations to a GPU
- - Develop visualisation and post-processing tools (GUIs, dashboards, etc)
-- Performance improvement
- - Code optimisation – profile and improve efficiency (speed and
- memory), IO performance
- - Parallelisation – software (OpenMP, MPI, etc.) and workflow
- parallelisation
-- Improve software sustainability (version control, testing,
- continuous integration, etc)
-- Data Science Engineering
- - Optimise numerical performance of machine learning pipelines
- - Conduct an Exploratory Data Analysis
- - Assist with designing and fitting explanatory and predictive
- models
-- Anything else you can think of ;-)
-
-## What can you expect from us?
-
-During a consultancy project we aim to provide:
-
-- Expertise and advice
-- An agreed timeline to develop or improve a solution (typical
- projects are of the order of 1 day per week for up to 4 months but
- this is determined on a case-by-case basis)
-- Training, knowledge transfer and/or capability development
-- A summary document outlining what has been achieved during the
- project
-- A case study published on our website after the project has been
- completed, to showcase the work you are doing on NeSI
-
-## What is expected of you?
-
-Consultancy projects are intended to be a collaboration and thus some
-input is required on your part. You should be willing to:
-
-- Contribute to a case study upon successful completion of the
- consultancy project
-- Complete a short survey to help us measure the impact of our service
-- Attend regular meetings (usually via video conference)
-- Invest time to answer questions, provide code and data as necessary
- and make changes to your workflow if needed
-- [Acknowledge](https://www.nesi.org.nz/services/high-performance-computing/guidelines/acknowledgement-and-publication)
- NeSI in article and code publications that we have contributed to,
- which could include co-authorship if our contribution is deemed
- worthy
-- Accept full ownership/maintenance of the work after the project
- completes (NeSI's involvement in the project is limited to the
- agreed timeline)
-
-## Previous projects
-
-Listed below are some examples of previous projects we have contributed
-to:
-
-- [A quantum casino helps define atoms in the big
- chill](https://www.nesi.org.nz/case-studies/quantum-casino-helps-define-atoms-big-chill)
-- [Using statistical models to help New Zealand prepare for large
- earthquakes](https://www.nesi.org.nz/case-studies/using-statistical-models-help-new-zealand-prepare-large-earthquakes)
-- [Improving researchers' ability to access and analyse climate model
- data
- sets](https://www.nesi.org.nz/case-studies/improving-researchers-ability-access-and-analyse-climate-model-data-sets)
-- [Speeding up the post-processing of a climate model data
- pipeline](https://www.nesi.org.nz/case-studies/speeding-post-processing-climate-model-data-pipeline)
-- [Overcoming data processing overload in scientific web mapping
- software](https://www.nesi.org.nz/case-studies/overcoming-data-processing-overload-scientific-web-mapping-software)
-- [Visualising ripple effects in riverbed sediment
- transport](https://www.nesi.org.nz/case-studies/visualising-ripple-effects-riverbed-sediment-transport)
-- [New Zealand's first national river flow forecasting system for
- flooding
- resilience](https://www.nesi.org.nz/case-studies/new-zealand%E2%80%99s-first-national-river-flow-forecasting-system-flooding-resilience)
-- [A fast model for predicting floods and storm
- damage](https://www.nesi.org.nz/case-studies/fast-model-predicting-floods-and-storm-damage)
-- [How multithreading and vectorisation can speed up seismic
- simulations by
- 40%](https://www.nesi.org.nz/case-studies/how-multithreading-and-vectorisation-can-speed-seismic-simulations-40)
-- [Machine learning for marine
- mammals](https://www.nesi.org.nz/case-studies/machine-learning-marine-mammals)
-- [Parallel processing for ocean
- life](https://www.nesi.org.nz/case-studies/parallel-processing-ocean-life)
-- [NeSI support helps keep NZ rivers
- healthy](https://www.nesi.org.nz/case-studies/nesi-support-helps-keep-nz-rivers-healthy)
-- [Heating up nanowires with
- HPC](https://www.nesi.org.nz/case-studies/heating-nanowires-hpc)
-- [The development of next generation weather and climate models is
- heating
- up](https://www.nesi.org.nz/case-studies/development-next-generation-weather-and-climate-models-heating)
-- [Understanding the behaviours of
- light](https://www.nesi.org.nz/case-studies/understanding-behaviours-light)
-- [Getting closer to more accurate climate predictions for New
- Zealand](https://www.nesi.org.nz/case-studies/getting-closer-more-accurate-climate-predictions-new-zealand)
-- [Fractal analysis of brain signals for autism spectrum
- disorder](https://www.nesi.org.nz/case-studies/fractal-analysis-brain-signals-autism-spectrum-disorder)
-- [Optimising tools used for genetic
- analysis](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis)
-- [Investigating climate
- sensitivity](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis)
-- [Tracking coastal precipitation systems in the
- tropics](https://www.nesi.org.nz/case-studies/tracking-coastal-precipitation-systems-tropics)
-- [Powering global climate
- simulations](https://www.nesi.org.nz/case-studies/powering-global-climate-simulations)
-- [Optimising tools used for genetic
- analysis](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis)
-- [Investigating climate
- sensitivity](https://www.nesi.org.nz/case-studies/investigating-climate-sensitivity)
-- [Improving earthquake forecasting
- methods](https://www.nesi.org.nz/case-studies/improving-earthquake-forecasting-methods)
-- [Modernising models to diagnose and treat disease and
- injury](https://www.nesi.org.nz/case-studies/modernising-models-diagnose-and-treat-disease-and-injury)
-- [Cataloguing NZ's earthquake
- activities](https://www.nesi.org.nz/case-studies/cataloguing-nz%E2%80%99s-earthquake-activities)
-- [Finite element modelling of biological
- cells](https://www.nesi.org.nz/case-studies/finite-element-modelling-biological-cells)
-- [Preparing New Zealand to adapt to climate
- change](https://www.nesi.org.nz/case-studies/preparing-new-zealand-adapt-climate-change)
-- [Using GPUs to expand our understanding of the solar
- system](https://www.nesi.org.nz/case-studies/using-gpus-expand-our-understanding-solar-system)
-- [Speeding up Basilisk with
- GPGPUs](https://www.nesi.org.nz/case-studies/speeding-basilisk-gpgpus)
-- [Helping communities anticipate flood
- events](https://www.nesi.org.nz/case-studies/helping-communities-anticipate-flood-events)
diff --git a/docs/Getting_Started/Next_Steps/.pages.yml b/docs/Getting_Started/Next_Steps/.pages.yml
deleted file mode 100644
index c0dbb3681..000000000
--- a/docs/Getting_Started/Next_Steps/.pages.yml
+++ /dev/null
@@ -1,7 +0,0 @@
----
-nav:
- - Moving_files_to_and_from_the_cluster.md
- - Submitting_your_first_job.md
- - Parallel_Execution.md
- - Finding_Job_Efficiency.md
- - ...
\ No newline at end of file
diff --git a/docs/Getting_Started/Next_Steps/The_HPC_environment.md b/docs/Getting_Started/Next_Steps/The_HPC_environment.md
deleted file mode 100644
index 9299a758e..000000000
--- a/docs/Getting_Started/Next_Steps/The_HPC_environment.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-created_at: '2019-08-16T01:22:03Z'
-tags: []
-vote_count: 0
-vote_sum: 0
-zendesk_article_id: 360001113076
-zendesk_section_id: 360000189716
----
-
-## Environment Modules
-
-Modules are a convenient way to provide access to applications on the cluster.
-They prepare the environment you need to run an application.
-
-For a full list of module commands run `man module` or visit the [lmod documentation](https://lmod.readthedocs.io/en/latest/010_user.html).
-
-| Command | Description |
-|-------------------------------|---------------------------------------------------------------|
-| `module spider` | Lists all available modules. (only Mahuika) |
-| `module spider [module name]` | Searches available modules for \[module name\] (only Mahuika) |
-| `module show [module name]` | Shows information about \[module name\] |
-| `module load [module name]` | Loads \[module name\] |
-| `module list [module name]` | Lists currently loaded modules. |
diff --git a/docs/High_Performance_Computing/.pages.yml b/docs/High_Performance_Computing/.pages.yml
new file mode 100644
index 000000000..9369590e3
--- /dev/null
+++ b/docs/High_Performance_Computing/.pages.yml
@@ -0,0 +1,9 @@
+nav:
+ - index.md
+ - Mahuika_Cluster
+ - Data_Management
+ - Software
+ - Batch_Computing
+ - Open_OnDemand
+ - Parallel_Computing
+ - ...
diff --git a/docs/High_Performance_Computing/Batch_Computing/.pages.yml b/docs/High_Performance_Computing/Batch_Computing/.pages.yml
new file mode 100644
index 000000000..58088b568
--- /dev/null
+++ b/docs/High_Performance_Computing/Batch_Computing/.pages.yml
@@ -0,0 +1,2 @@
+nav:
+ - ...
diff --git a/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md b/docs/High_Performance_Computing/Batch_Computing/Finding_Job_Efficiency.md
similarity index 94%
rename from docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md
rename to docs/High_Performance_Computing/Batch_Computing/Finding_Job_Efficiency.md
index 3d8ac11e2..e954c7381 100644
--- a/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Finding_Job_Efficiency.md
@@ -131,7 +131,7 @@ the compute node where it is running.
If 'nodelist' is not one of the fields in the output of your `sacct` or
`squeue` commands you can find the node a job is running on using the
command; `squeue -h -o %N -j ` The node will look something
-like `wbn123` on Mahuika or `nid00123` on Māui
+like `wbn123` on Mahuika.
!!! Note
If your job is using MPI it may be running on multiple nodes
@@ -159,7 +159,7 @@ parent process).
Processes in green can be ignored
-
+
**RES** - Current memory being used (same thing as 'RSS' from sacct)
@@ -185,16 +185,16 @@ time* the CPUs are in use. This is not enough to get a picture of
overall job efficiency, as required CPU time *may vary by number of
CPU*s.
-The only way to get the full context, is to compare walltime performance between jobs at different scale. See [Job Scaling](../../Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md) for more details.
+The only way to get the full context is to compare walltime performance between jobs at different scales. See [Job Scaling](../Parallel_Computing/Job_Scaling_Ascertaining_job_dimensions.md) for more details.
### Example
-
+
From the above plot of CPU efficiency, you might decide a 5% reduction
of CPU efficiency is acceptable and scale your job up to 18 CPU cores .
-
+
However, when looking at a plot of walltime it becomes apparent that
performance gains per CPU added drop significantly after 4 CPUs, and in
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md b/docs/High_Performance_Computing/Batch_Computing/GPU_use_on_NeSI.md
similarity index 81%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md
rename to docs/High_Performance_Computing/Batch_Computing/GPU_use_on_NeSI.md
index 18a49d380..97e7db3ea 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md
+++ b/docs/High_Performance_Computing/Batch_Computing/GPU_use_on_NeSI.md
@@ -12,23 +12,22 @@ please have a look at the dedicated pages listed at the end of this
page.
!!! warning
- An overview of available GPU cards is available in the [Available GPUs on NeSI](../../Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI.md)
+ An overview of available GPU cards is available in the [Available GPUs on NeSI](Available_GPUs_on_NeSI.md)
support page.
Details about GPU cards for each system and usage limits are in the
- [Mahuika Slurm Partitions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md)
- and [Māui\_Ancil (CS500) Slurm Partitions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Maui_Slurm_Partitions.md)
+ [Mahuika Slurm Partitions](Mahuika_Slurm_Partitions.md)
support pages.
Details about pricing in terms of compute units can be found in the
- [What is an allocation?](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
+ [What is an allocation?](What_is_an_allocation.md)
page.
!!! note
Recall, memory associated with the GPUs is the VRAM, and is a separate resource from the RAM requested by Slurm. The memory values listed below are VRAM values. For available RAM on the GPU nodes, please see
- [Mahuika Slurm Partitions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md).
+ [Mahuika Slurm Partitions](Mahuika_Slurm_Partitions.md).
## Request GPU resources using Slurm
-To request a GPU for your [Slurm job](../../Getting_Started/Next_Steps/Submitting_your_first_job.md), add
+To request a GPU for your [Slurm job](Submitting_your_first_job.md), add
the following option at the beginning of your submission script:
```sl
@@ -98,7 +97,7 @@ cases:
#SBATCH --gpus-per-node=A100:1
```
- *These GPUs are on Milan nodes, check the [dedicated support page](../Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+ *These GPUs are on Milan nodes, check the [dedicated support page](Milan_Compute_Nodes.md)
for more information.*
- 4 A100 (80GB & NVLink) GPU on Mahuika
@@ -108,7 +107,7 @@ cases:
#SBATCH --gpus-per-node=A100:4
```
- *These GPUs are on Milan nodes, check the [dedicated support page](../Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md)
+ *These GPUs are on Milan nodes, check the [dedicated support page](Milan_Compute_Nodes.md)
for more information.*
*You cannot ask for more than 4 A100 (80GB) GPUs per node on
@@ -126,7 +125,7 @@ cases:
regular Mahuika node (A100 40GB GPU) or on a Milan node (A100 80GB
GPU).*
-You can also use the `--gpus-per-node`option in [Slurm interactive sessions](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md),
+You can also use the `--gpus-per-node` option in [Slurm interactive sessions](Slurm_Interactive_Sessions.md),
with the `srun` and `salloc` commands. For example:
``` sh
@@ -156,7 +155,7 @@ duration of 30 minutes.
## Load CUDA and cuDNN modules
To use an Nvidia GPU card with your application, you need to load the
-driver and the CUDA toolkit via the [environment modules](../HPC_Software_Environment/Finding_Software.md)
+driver and the CUDA toolkit via the [environment modules](Finding_Software.md)
mechanism:
``` sh
@@ -172,9 +171,6 @@ module spider CUDA
Please{% include "partials/support_request.html" %} if you need a version not
available on the platform.
-!!! note
- On Māui Ancillary Nodes, use `module avail CUDA` to list available
- versions.
The CUDA module also provides access to additional command line tools:
@@ -326,8 +322,8 @@ graphical interface.
!!! warning
The `nsys-ui` and `ncu-ui` tools require access to a display server,
either via
- [X11](../../Scientific_Computing/Terminal_Setup/X11_on_NeSI.md) or a
- [Virtual Desktop](../../Scientific_Computing/Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md).
+ [X11](X11_on_NeSI.md) or a
+ [Virtual Desktop](Virtual_Desktop_via_Jupyter_on_NeSI.md).
You also need to load the `PyQt` module beforehand:
```sh
@@ -341,14 +337,14 @@ graphical interface.
The following pages provide additional information for supported
applications:
-- [ABAQUS](../../Scientific_Computing/Supported_Applications/ABAQUS.md#examples)
-- [GROMACS](../../Scientific_Computing/Supported_Applications/GROMACS.md#nvidia-gpu-container)
-- [Lambda Stack](../../Scientific_Computing/Supported_Applications/Lambda_Stack.md)
-- [Matlab](../../Scientific_Computing/Supported_Applications/MATLAB.md#using-gpus)
-- [TensorFlow on GPUs](../../Scientific_Computing/Supported_Applications/TensorFlow_on_GPUs.md)
+- [ABAQUS](../Supported_Applications/ABAQUS.md#examples)
+- [GROMACS](../Supported_Applications/GROMACS.md#nvidia-gpu-container)
+- [Lambda Stack](Lambda_Stack.md)
+- [Matlab](../Supported_Applications/MATLAB.md#using-gpus)
+- [TensorFlow on GPUs](TensorFlow_on_GPUs.md)
And programming toolkits:
-- [Offloading to GPU with OpenMP](../../Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md)
-- [Offloading to GPU with OpenACC using the Cray compiler](../HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md)
-- [NVIDIA GPU Containers](../../Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md)
+- [Offloading to GPU with OpenMP](Offloading_to_GPU_with_OpenMP.md)
+- [Offloading to GPU with OpenACC using the Cray compiler](Offloading_to_GPU_with_OpenACC.md)
+- [NVIDIA GPU Containers](NVIDIA_GPU_Containers.md)
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md b/docs/High_Performance_Computing/Batch_Computing/Job_Checkpointing.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md
rename to docs/High_Performance_Computing/Batch_Computing/Job_Checkpointing.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md b/docs/High_Performance_Computing/Batch_Computing/Milan_Compute_Nodes.md
similarity index 93%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md
rename to docs/High_Performance_Computing/Batch_Computing/Milan_Compute_Nodes.md
index bf9c53a6e..78974f35b 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Milan_Compute_Nodes.md
@@ -2,10 +2,7 @@
created_at: '2023-02-09T01:30:43Z'
tags: []
title: Milan Compute Nodes
-vote_count: 1
-vote_sum: 1
-zendesk_article_id: 6367209795471
-zendesk_section_id: 360000030876
+status: deprecated
---
## How to access
@@ -26,7 +23,6 @@ Alternatively, the same effect can be achieved by specifying in a Slurm script:
#SBATCH --partition=milan
```
-
## Hardware
Each node has two AMD Milan CPUs, each with 8 "chiplets" of 8 cores and
@@ -123,9 +119,7 @@ to try it:
module load AOCC
```
-For more information on AOCC compiler suite please, visit [AMD
-Optimizing C/C++ and Fortran Compilers
-(AOCC)](https://developer.amd.com/amd-aocc/)
+For more information on the AOCC compiler suite, please visit [AMD Optimizing C/C++ and Fortran Compilers (AOCC)](https://developer.amd.com/amd-aocc/)
## Network
@@ -143,5 +137,5 @@ configuration is expected to be addressed in the future.
Don't hesitate to {% include "partials/support_request.html" %}. No question is
too big or small. We are available for Zoom sessions or
-[Weekly Online Office Hours](../../Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md)
+[Weekly Online Office Hours](Weekly_Online_Office_Hours.md)
if it's easier to discuss your question in a call rather than via email.
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Per_job_temporary_directories.md b/docs/High_Performance_Computing/Batch_Computing/Per_job_temporary_directories.md
similarity index 100%
rename from docs/Scientific_Computing/HPC_Software_Environment/Per_job_temporary_directories.md
rename to docs/High_Performance_Computing/Batch_Computing/Per_job_temporary_directories.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Checking_your_projects_usage_using_nn_corehour_usage.md
similarity index 100%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md
rename to docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Checking_your_projects_usage_using_nn_corehour_usage.md
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Fair_Share.md
similarity index 98%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md
rename to docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Fair_Share.md
index fc49d1361..0e0d1aa91 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Fair_Share.md
@@ -19,7 +19,7 @@ Your *Fair Share score* is a number between **0** and **1**. Projects
with a **larger** Fair Share score receive a **higher priority** in the
queue.
-A project is given an [allocation of compute units](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
+A project is given an [allocation of compute units](../What_is_an_allocation.md)
over a given **period**. An institution also has a percentage **Fair Share entitlement**
of each machine's deliverable capacity over that same period.
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Fair_Share_How_jobs_get_prioritised.md
similarity index 97%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md
rename to docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Fair_Share_How_jobs_get_prioritised.md
index f95e96aa3..c47ae25c5 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Fair_Share_How_jobs_get_prioritised.md
@@ -19,7 +19,7 @@ Your *Fair Share score* is a number between **0** and **1**. Projects
with a **larger** Fair Share score receive a **higher priority** in the
queue.
-A project is given an [**allocation** of compute units](../../Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md)
+A project is given an [**allocation** of compute units](../What_is_an_allocation.md)
over a given **period**. An institution also has a percentage **Fair
Share entitlement** of each machine's deliverable capacity over that
same period.
@@ -155,7 +155,7 @@ request for projects that expect to use the cluster heavily on average,
can predict when they will need their heaviest use with a high degree of
confidence, and give us plenty of notice.
-For full details on Slurm's Fair share mechanism, please see [this
-page](https://slurm.schedmd.com/priority_multifactor.html#fairshare)
-(offsite).
+For full details on Slurm's Fair Share mechanism, please see [this page](https://slurm.schedmd.com/priority_multifactor.html#fairshare) (offsite).
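
To see how these factors work out for your own project, the standard Slurm client tools can report them. A sketch, assuming `sshare` and `sprio` are available on the login nodes as in a typical Slurm installation:

``` sh
# Fair Share scores for your own user/account associations
sshare -U

# Weighted priority factors for a pending job
# (replace 12345678 with a job ID from squeue)
sprio -j 12345678
```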
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Job_prioritisation.md
similarity index 92%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md
rename to docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Job_prioritisation.md
index 4811e8bfb..4644f8c08 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Project_Accounting/Job_prioritisation.md
@@ -31,7 +31,7 @@ jobs, but is limited to one small job per user at a time: no more than
Job priority decreases whenever the project uses more core-hours than
expected, across all partitions.
-This [Fair Share](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md)
+This [Fair Share](Fair_Share.md)
policy means that projects that have consumed many CPU core hours in the
recent past compared to their expected rate of use (either by submitting
and running many jobs, or by submitting and running large jobs) will
@@ -85,8 +85,8 @@ they get requeued after a node failure.
Cluster and partition-specific limits can sometimes prevent jobs from
starting regardless of their priority score. For details see the pages
-on [Mahuika](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md) or
-[Māui.](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Maui_Slurm_Partitions.md)
+on [Mahuika](../Slurm_Partitions.md) or
+[Māui](../Maui_Slurm_Partitions.md).
## Backfill
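
To check where a pending job sits in the queue and why, one sketch (assuming the usual Slurm client tools on the login nodes):

``` sh
# List your own jobs; the NODELIST(REASON) column shows why a job is still
# pending, e.g. (Priority) or (Resources)
squeue --me
```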
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md b/docs/High_Performance_Computing/Batch_Computing/SLURM-Best_Practice.md
similarity index 93%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md
rename to docs/High_Performance_Computing/Batch_Computing/SLURM-Best_Practice.md
index fe5dca161..1573a8367 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md
+++ b/docs/High_Performance_Computing/Batch_Computing/SLURM-Best_Practice.md
@@ -43,7 +43,7 @@ etc).
### Memory (RAM)
If you request more memory (RAM) than you need for your job, it
-[will wait longer in the queue and will be more expensive when it runs](../../General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md).
+[will wait longer in the queue and will be more expensive when it runs](Why_is_my_job_taking_a_long_time_to_start.md).
On the other hand, if you don't request enough memory, the job may be
killed for attempting to exceed its allocated memory limits.
@@ -52,7 +52,7 @@ your program will need at peak memory usage.
We also recommend using `--mem` instead of `--mem-per-cpu` in most
cases. There are a few kinds of jobs for which `--mem-per-cpu` is more
-suitable. See [our article on how to request memory](../../General/FAQs/How_do_I_request_memory.md)
+suitable. See [our article on how to request memory](How_do_I_request_memory.md)
for more information.
## Parallelism
@@ -76,7 +76,7 @@ job array in a single command)
A low fairshare score will affect your jobs priority in the queue, learn
more about how to effectively use your allocation
-[here](../../Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share_How_jobs_get_prioritised.md).
+[here](Project_Accounting/Fair_Share_How_jobs_get_prioritised.md).
## Cross machine submission
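
As a rough sketch of the memory advice above (the figures are placeholders, to be replaced by whatever your own profiling of the program suggests):

``` sl
#!/bin/bash -e
#SBATCH --job-name=mem-example   # illustrative name
#SBATCH --time=01:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G                 # total memory for the job, shared by all four CPUs

# The equivalent per-CPU request would be --mem-per-cpu=2G, which scales with
# the CPU count; --mem is usually the simpler choice.
srun ./my_program                # placeholder executable
```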
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md b/docs/High_Performance_Computing/Batch_Computing/Slurm_Interactive_Sessions.md
similarity index 98%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md
rename to docs/High_Performance_Computing/Batch_Computing/Slurm_Interactive_Sessions.md
index f69e65dc2..4a1aa407b 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Slurm_Interactive_Sessions.md
@@ -14,7 +14,7 @@ you to use them interactively as you would the login node.
There are two main commands that can be used to make a session, `srun`
and `salloc`, both of which use most of the same options available to
`sbatch` (see
-[our Slurm Reference Sheet](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md)).
+[our Slurm Reference Sheet](Slurm-Reference_Sheet.md)).
!!! warning
An interactive session will, once it starts, use the entire requested
@@ -207,8 +207,7 @@ scontrol update jobid=12345678 StartTime=now
### Other changes using `scontrol`
There are many other changes you can make by means of `scontrol`. For
-further information, please see [the `scontrol`
-documentation](https://slurm.schedmd.com/scontrol.html).
+further information, please see [the `scontrol` documentation](https://slurm.schedmd.com/scontrol.html).
## Modifying multiple interactive sessions at once
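
A quick sketch of the two approaches described above (times and resource figures are placeholders):

``` sh
# Start an interactive shell directly on a compute node
srun --time=00:30:00 --cpus-per-task=2 --mem=4G --pty bash

# Or create an allocation first, then launch steps inside it
salloc --time=00:30:00 --cpus-per-task=2 --mem=4G
srun hostname   # runs inside the allocation created by salloc
exit            # release the allocation when finished
```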
diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md b/docs/High_Performance_Computing/Batch_Computing/Slurm_Partitions.md
similarity index 93%
rename from docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md
rename to docs/High_Performance_Computing/Batch_Computing/Slurm_Partitions.md
index 16b914b11..cfb65b2ba 100644
--- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md
+++ b/docs/High_Performance_Computing/Batch_Computing/Slurm_Partitions.md
@@ -23,7 +23,7 @@ they undertake to do so with job arrays.
## Partitions
-A partition can be specified via the appropriate [sbatch option](../../Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md),
+A partition can be specified via the appropriate [sbatch option](Slurm-Reference_Sheet.md),
e.g.:
``` sl
@@ -90,7 +90,7 @@ sbatch: `bigmem` is not the most appropriate partition for this job, which would
1850 MB |
460 GB |
2560 |
-Jobs using Milan Nodes |
+Jobs using Milan Nodes |
8 |
256 |
@@ -170,7 +170,7 @@ below for more info.
460 GB |
64 |
Part of
-Milan Nodes. See below. |
+Milan Nodes. See below.