Update for iCER scratch file system

Dear HPCC users,

We’re continuing to work on our scratch file system. We’ve improved our monitoring to identify when the Lustre system is overloaded, and we’ve helped users who were generating a large load on our metadata servers move to alternate platforms. We’re also adding capacity to the metadata servers and upgrading to the newest version of Lustre during our January maintenance outage.

Users who do not need to access temporary files from multiple nodes can use $TMPDIR and /mnt/local, which reside on each node’s local disk. For small files, local storage can be significantly faster: it avoids the overhead of keeping files consistent across the roughly 700 nodes of the HPCC system, and it can cache data in RAM on the compute node. See our pages on using $TMPDIR (https://wiki.hpcc.msu.edu/x/koC0AQ) and on our file systems (https://wiki.hpcc.msu.edu/x/eQKpAQ).
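As a rough illustration of this pattern, the sketch below stages a file into $TMPDIR, does its work on node-local disk, and copies only the final result back to persistent storage. The file names and the fallback to /tmp are hypothetical placeholders, not HPCC conventions; adapt the paths to your own job.

```shell
#!/bin/bash
set -eu
# Sketch: work in node-local scratch ($TMPDIR), then copy results back.
# input.dat / output.dat are hypothetical; /tmp is a fallback for testing
# outside a job, where $TMPDIR may not be set.

SCRATCH="${TMPDIR:-/tmp}/demo.$$"
mkdir -p "$SCRATCH"

# Simulate staging an input file (in a real job: cp from your home or
# research space into "$SCRATCH").
echo "sample input" > "$SCRATCH/input.dat"

# Do all temporary I/O on local disk -- fast for many small files.
tr 'a-z' 'A-Z' < "$SCRATCH/input.dat" > "$SCRATCH/output.dat"

# Copy only the final output back to persistent storage (here: current dir).
cp "$SCRATCH/output.dat" ./output.dat

# Clean up node-local scratch before the job ends.
rm -rf "$SCRATCH"
```

Since $TMPDIR is local to the node running the job, results must be copied back before the job exits; anything left behind is not visible from other nodes.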

We’re working to implement two new prototype systems: one that better serves small files to multiple nodes and is optimized for a high number of I/O operations per second, and another that may offer significantly better persistent storage performance. If you have any file system issues or are interested in beta testing these systems, please contact us:

1) Open a ticket: https://contact.icer.msu.edu
2) Come to the iCER Open Office Hour from 1 pm to 2 pm on Mondays and Thursdays at:

Institute for Cyber-Enabled Research
Biomedical & Physical Sciences Building
567 Wilson Rd, Room 1440
East Lansing, MI 48824

More information will be available on our website and in next month’s newsletter.

Thank you for your patience!

Sincerely,

iCER team