Update on ICER Disk System

Dear HPCC users,

As you know, we have been migrating all users to a new file system, IBM's Spectrum Scale (formerly called GPFS). The migration of all files from the older disk system to their new home, ufs18, is now complete.

We have been working diligently with IBM to get all features of GPFS running the way we expect. We have hit some difficulties, but we are making progress. AFM offsite replication is now running much better, more efficiently and in less time. We have also just gotten the quota system to complete a full pass through the file system and can now run it regularly; as a result, the amount of disk space you appear to be using may have changed dramatically.

If you experience any issue with your home or scratch space, please let us know by filing a ticket. Important information to include in your ticket:

- When the issue occurred (the more precise, the better).
- Where it occurred (e.g., which nodes or which jobs were affected).
- What happened (e.g., a job stopped updating a log file; a file appeared corrupted).
- How it happened (e.g., a job was reading the input file for N replicas; each thread was writing an update to the same directory).

Finally, we are working to speed up compression on the file system, which is presently very slow, so that we can store your files more efficiently.

This has been a long journey for the ICER HPC staff, but we continue to make progress and feel that we are in good shape. Thank you for your patience!


Bill Punch,
Associate Director
Institute for Cyber-Enabled Research