
Notes on Hyak Scratch Filesystem Quota Policies

The Basic Balkanization Problem:

Fileset quotas, as implemented on Hyak, allow each member of a group elastic access to storage capacity, up to the fileset quota. Implementing user-based quotas within filesets can have the effect of restricting each user to storage capacity less than the total fileset quota. If the intent of implementing user-based quotas is to prevent the group quota from being exceeded (and the inevitable disruptions that result), then the sum of user quotas must be less than or equal to the fileset quota.

For example, the eScience fileset (/gscratch/esci) quota is about 5,600 GB (5.6TB) and there are about 100 users in the eScience group. Dividing the fileset quota evenly among these users would give each user access to at most 56GB of storage capacity. For jobs executing across hundreds of cores, such a limit would often be unusably small. By instead relying solely on the shared fileset quota, every user in the group has up to the entire 5.6TB available for their work.
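The division above can be sketched numerically; the figures are the approximate ones from the text:

```python
# Illustration of the balkanization arithmetic described above.
# Figures are approximate, taken from the text (5,600 GB fileset, ~100 users).
FILESET_QUOTA_GB = 5_600
NUM_USERS = 100

per_user_share = FILESET_QUOTA_GB / NUM_USERS
print(per_user_share)    # 56.0 -> GB per user under an even division

# With a shared fileset quota instead, any one user may elastically use up
# to the whole fileset, as long as the group's total stays under the limit:
print(FILESET_QUOTA_GB)  # 5600 -> GB available to each user in principle
```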

Of course, for this to work, users must behave reasonably well, being conscious of the amount of data they write to /gscratch and cleaning up after themselves. And, with rare exceptions, Hyak users have done precisely this and everyone has been very productive. So, the question is, would you rather be very productive nearly all of the time with the risk of occasional disruption, or would you prefer to substantially limit your productivity all the time in return for the guarantee that disruptions would be even rarer?

Balkanization Secondary Effects:

If the quota policy is to divide a group's fileset quota evenly among all users in the group, then quotas must be adjusted every time a user is added to or removed from the group. If a user is removed and the quotas of the remaining users are not increased, some of the group's overall fileset quota becomes inaccessible. When a user is added, all existing users must have their quotas reduced; any user who was at or near their old quota is pushed over the new (reduced) limit, disrupting their work.
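A minimal sketch of the even-division policy, using the approximate eScience figures from the earlier example, shows how adding a single user can push an at-quota user over the new limit:

```python
# Hypothetical sketch of the even-division policy described above: every
# time the group roster changes, each user's quota must be recomputed.
def per_user_quota(fileset_quota_gb: float, num_users: int) -> float:
    """Even division of the fileset quota among all group members."""
    return fileset_quota_gb / num_users

fileset = 5_600  # GB, from the eScience example
old_quota = per_user_quota(fileset, 100)  # 56.0 GB each
new_quota = per_user_quota(fileset, 101)  # adding one user shrinks everyone's share

# A user whose usage sat at the old limit is now over the reduced limit:
usage = old_quota
assert usage > new_quota  # existing data already exceeds the new quota
```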

Of course, quota policies other than an equal division among all users are possible. When the goal is simply to restrict a problem user's access, while leaving well-behaved users to share the fileset quota as usual, some of the balkanization problems can be avoided.

The Collaboration Problem:

Many Hyak users are members of more than one group. Each group's fileset quota is distinct and tied to its /gscratch directory. User quotas, however, are associated with specific users system-wide. This means that setting a user's quota to address a storage management problem in one group would have the unintended consequence of limiting that user's access to storage capacity in the filesets of all their other groups.

Solutions - Social Pressure:

We advise groups to use social pressure to manage their shared storage. In our experience, nearly all users are responsible and will try to do the right thing. And, when they slip up and cause trouble, they're receptive to suggestions for avoiding future conflict. In the rare cases where users are slow to adapt, group managers have tools available to encourage them to improve their behavior: specifically, their access to the cluster can be restricted through modification of their queue priority. Over the course of Hyak's first ~18 months of production operations, this has never been necessary.

Solutions - Treating Scratch as Scratch:

In addition to the ~100GB/node of /gscratch capacity available to Hyak groups, each group also has access to ~450GB of lolo Archive capacity per Hyak node. In a typical HPC workflow, users load initial conditions from an external source (which could be lolo Archive) onto the cluster's internal scratch (/gscratch on Hyak). A calculation proceeds, producing some amount of data. Once the calculation and any subsequent analysis are complete, the user moves the entire working directory to a data archive (lolo Archive in our case) for long-term safekeeping, freeing the scratch space for the next user.
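As a rough illustration of that stage-in/compute/stage-out cycle (not actual Hyak tooling), the steps might look like the following sketch; the temporary directories stand in for lolo Archive and /gscratch:

```python
# Hypothetical sketch of the scratch workflow described above: stage in,
# compute, stage out, clean up. The paths are illustrative stand-ins,
# not real Hyak or lolo paths.
import pathlib
import shutil
import tempfile

archive = pathlib.Path(tempfile.mkdtemp())  # stands in for lolo Archive
scratch = pathlib.Path(tempfile.mkdtemp())  # stands in for /gscratch

(archive / "initial_conditions.dat").write_text("input data")

# 1. Stage initial conditions from the archive onto scratch.
shutil.copy(archive / "initial_conditions.dat", scratch)

# 2. Run the calculation, producing results on scratch.
(scratch / "results.dat").write_text("output data")

# 3. Move the whole working directory back to the archive for safekeeping,
#    freeing the scratch space for the next user.
shutil.move(str(scratch), str(archive / "run_001"))
```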

lolo also provides ~50GB/node of Collaboration space. This is a general-purpose filesystem which is directly accessible from users' desktop workstations and laptops. In addition to sharing data with peers on and off campus, it is well suited as a data repository for post-run data analysis.

Solutions - Increase Capacity:

Dividing a group's pool of storage into small, separate pieces has the effect of reducing the storage available to each user. There is no way for the application of quotas to increase the storage capacity available to a group. Sometimes, the only way to address a group's storage requirements is to increase the available capacity. Within Hyak, one group already has 140TB of supplemental scratch space dedicated to its use. This pool of storage is accessible from every node in the cluster, in order to facilitate backfill use, but only the members of the relevant group have access. Two other Hyak groups are planning to add about 140TB and 280TB, respectively. The cost for this sort of storage expansion can be as little as ~$0.10/GB/year over six years.
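A quick check of the quoted rate against the 140TB figure from the text, assuming decimal units (1 TB = 1,000 GB):

```python
# Rough cost check for the supplemental-storage figures in the text,
# using the quoted rate of ~$0.10 per GB per year over six years.
# Decimal units (1 TB = 1,000 GB) are an assumption here.
rate_per_gb_year = 0.10
years = 6
capacity_gb = 140 * 1000  # 140 TB expressed in GB

annual_cost = capacity_gb * rate_per_gb_year
total_cost = annual_cost * years
print(annual_cost)  # 14000.0 -> roughly $14k/year
print(total_cost)   # 84000.0 -> roughly $84k over six years
```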

What we can do today:

Today, we don't have a convenient means of delegating user storage quota management to group admins. In rare cases, we can apply user-based quotas to individual users; for example, if you simply want to limit filesystem access for one or more users, we may be able to help. There are important restrictions, as described in the Collaboration Problem section above.

What we can do in the future:

In an upcoming GPFS release, we will have the ability to assign user-based quotas on a per-fileset basis. This will address the Collaboration problem. At that point, if there is sufficient demand among Hyak groups, we might create a tool that allows group admins to manage these quotas. We can keep you posted on these developments if you like.
