Dear Grex/HPC User,
The Digital Research Alliance of Canada, and our UManitoba HPC system, Grex, are proceeding with the ongoing implementation of a multifactor authentication (MFA) system. MFA adds an additional layer of security to traditional password-based (or SSH key-based) authentication by requiring a second factor, known as "something you have." Grex uses the Alliance's Cisco Duo instance for second-factor authentication. We have successfully completed the first phase of MFA testing for staff and early-adopter users.
As of now, every Alliance and Grex user has the option to register a device in CCDB. As the next stage in MFA adoption, we are going to temporarily enforce the use of MFA for all Grex users between Dec 04 and 11, 2023. All authentication attempts to Grex during this week will require the second factor. We intend to make MFA mandatory for all Grex users in January 2024.
Note that the second factors used by the Alliance are not entirely the same as those used by the University of Manitoba for platforms such as UM Intranet and Exchange. On Grex and Alliance systems, the following factors are enabled:
● Duo smartphone app (Android and iOS)
● Yubico Yubikey cybersecurity USB key device
● 10 one-time codes (recommended as a backup 2FA in addition to the primary device)
Enrollment into the Alliance Duo is through CCDB: https://ccdb.alliancecan.ca . Successful enrollment enables the MFA requirement on every SSH login on both Grex and Alliance systems such as Cedar, Graham, Narval, or Niagara. The following Grex documentation page explains the device enrollment process with screenshots: https://um-grex.github.io/grex-docs/connecting/mfa/
Our staff will be available for support in case you have difficulty enrolling or using MFA. If you have any questions, please let us know (by email to support@tech.alliancecan.ca, mentioning “Grex” in the subject line). Thank you for your patience and for your attention to this message!
--
Your Grex HPC team.
Dear Grex PI,
In case your group is interested in applying to our local Resource Allocation call: the deadline is this Wednesday, November 22.
Thanks!
--
Grigory Shamov
Site Lead / HPC Specialist
University of Manitoba and DRI Alliance Canada
From: Advanced Research Computing <ARC@umanitoba.ca>
Date: Wednesday, November 1, 2023 at 9:35 AM
To: "grex-pi@lists.umanitoba.ca" <grex-pi@lists.umanitoba.ca>
Subject: [grex-pi] Time to renew Grex HPC resource allocations! Local RAC call starts today.
Dear Grex / HPC Users,
Thank you all for using our HPC resources! Another year passed since our last call.
It is time to renew the local resource allocations for 2023/2024.
Please find the Local Resource Allocation Call 2023/24 forms and conditions attached.
Your HPC Team.
===========
Overview
Grex is a High-Performance Computing system of the University of Manitoba. Its current compute capacity consists of three parts: legacy compute nodes, new compute nodes purchased in 2020 and 2021, and researcher-contributed nodes. There is also now partially community-funded storage on Grex.
This call is for renewing and updating the allocation of the resources of the Grex HPC system (CPU time in core years (CY) and storage in TB). The generally available hardware is as follows:
New compute nodes added in 2020 and 2021
* two nodes with 4x NVIDIA V100 GPUs, Intel 5218 Cascade Lake CPUs, 192GB RAM
* one 16x NVIDIA V100 HGX-2 node with a 48-core Intel CPU and 1.5TB of RAM
* 12 compute nodes with 40-core Intel Cascade Lake 6248 2.5GHz CPUs, 384GB RAM
* 42 compute nodes with 52-core Intel Cascade Lake 6230R 2.1GHz CPUs, 192GB RAM
Legacy compute nodes from 2011
* About 200 remaining original compute nodes, each with 12 cores (2.67GHz Nehalem processors) and 48GB RAM
The Project storage, allocatable per research group, has a total capacity of 2 PB, with 1.7 PB targeted for the contributing Faculties of Agriculture, Engineering, and Science.
* The /project space is allocated by default at 5 to 20 TB per group, with larger allocations possible through this Call. Research groups from the contributing Faculties will be guaranteed preference in the allocation of /project storage quota according to their contributions.
In this round of resource allocations, we call for proposals to allocate the current compute nodes (a total of 2716 CPU cores in 55 CascadeLake nodes) and the current 2 PB of Project storage.
* The general-purpose GPU nodes are available on a first-come, first-served basis. Please indicate in your proposal if you might need GPUs.
* The 316 legacy compute nodes will likely be decommissioned in the near future. Until then, they are available on a first-come, first-served basis. Please indicate in your proposal if you need these nodes.
* Contributed GPU and CPU nodes are not allocatable, but they are available for opportunistic, preemptible use when not used by their contributors.
Proposals for the use of Grex resources will be reviewed by the Advanced Research Computing (ARC) Committee. They will also be reviewed by Grex technical staff to ensure that Grex resources will be used appropriately and efficiently.
Categories of Resource Requests
The request for ARC resources must come from a UManitoba PI. There are two (2) categories of resource requests:
1. Rapid Access Service (RAS) is limited to less than 50 core-years, 2 GB of memory per core, and 5 to 20 TB of Project storage per research group. RAS requests do not require a proposal. For information and access, please email arc@umanitoba.ca
2. Resource Allocation Call (RAC). For the annual allocation of Grex resources, we ask users who need more core years or storage than the RAS limits above to request the resources via an RAC proposal. The proposals must be submitted in the format described below.
Proposal Format
Please use the attached 2023 RAC Request Template for your proposal. You may remove all explanations (text in italics) if desired.
The attached template is intended as a guide. Please remove any section or sub-section that does not apply to your application. Maximum lengths for each section are included in the template, but your proposal may use (much) less space depending on the size and complexity of the request.
Proposals must be sent by email to ARC@umanitoba.ca by 4:30 pm, November 22, 2023.