
Digital File Storage

HPCC provides a variety of secure file storage options for research data and fast connections for high-speed input/output (I/O). Users have access to replicated, high-capacity storage that can be shared among group members or kept private to each user. High-performance storage is also available for temporarily staging data that needs to be accessed quickly. In addition, each compute node has an attached local disk that can be used with Hadoop on Demand for data-intensive jobs. The storage options are described below.

See the User Documentation for more information.

Home Directory 

Personal data should be stored in /mnt/home/[MSU NetID]. Each user has 50 GB of space (more is available upon request), which is backed up daily; a ZFS snapshot is also taken hourly.

Research Space

A 1 TB block of space to be shared among members of a research group may be obtained if the group can provide adequate justification, and additional space may be purchased. Files on this system, located at /mnt/research/[groupname], are also backed up daily. Both /mnt/home and /mnt/research have compression enabled, which improves performance and yields significant space savings.

Local Disk

Between 100 GB and 250+ GB of local scratch space is available on each cluster node. This space is native to each node and is not accessible from other nodes. It is transient, volatile storage optimized for smaller-scale I/O. Files may be stored for a maximum of 8 days on /mnt/local; this space is regularly and routinely erased to ensure a maximum amount of free space for users.
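
As a minimal sketch of this pattern, the following Python snippet writes intermediate files to a node-local directory under /mnt/local and copies only the final result back to the home directory before the job ends. The job name and file names are illustrative assumptions, not part of the HPCC documentation.

    import os
    import shutil

    # Node-local scratch; not visible from other nodes (path per the description above).
    local_dir = os.path.join("/mnt/local", os.environ["USER"], "my_job")  # "my_job" is a hypothetical name
    os.makedirs(local_dir, exist_ok=True)

    # ... the job writes and reads its intermediate files under local_dir ...
    result_file = os.path.join(local_dir, "result.dat")
    with open(result_file, "w") as f:
        f.write("final output\n")

    # Copy only the final result back to the backed-up home directory before the
    # job ends, since /mnt/local is purged regularly.
    home_dest = os.path.join(os.environ["HOME"], "my_job_results")
    os.makedirs(home_dest, exist_ok=True)
    shutil.copy2(result_file, home_dest)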

Scratch Space

Scratch space is used for fast temporary storage while jobs are running. Using the scratch space allows users to exceed their disk quota for jobs that require a large amount of disk space. We recommend using the scratch space while a job is running and copying the results back to the user's home directory afterward. A user should store files here when a job requires those files to be accessible from all nodes during a computation. Users have a 1 million file quota and a 50 TB quota on the /mnt/scratch and /mnt/ls15 space. Users needing higher limits may request increases via the contact form.

Unlike the home and research directories, the scratch space is not intended for long-term storage, and thus is not backed up. Files are automatically purged after 45 days. The parallel file system used for the scratch space is Lustre.
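
The staging workflow described above might look like the following minimal Python sketch. The per-user directory layout under /mnt/scratch and the directory and file names are illustrative assumptions, not part of the HPCC documentation.

    import os
    import shutil

    user = os.environ["USER"]
    scratch_dir = os.path.join("/mnt/scratch", user, "job_inputs")   # hypothetical job directory
    home_results = os.path.join(os.environ["HOME"], "results")

    # Stage working data in scratch: it is visible from every node and has a far
    # larger quota than the home directory, but it is not backed up.
    os.makedirs(scratch_dir, exist_ok=True)

    # ... the job reads and writes its large temporary files under scratch_dir ...

    # Before the 45-day purge window passes, copy anything worth keeping back home.
    os.makedirs(home_results, exist_ok=True)
    for name in os.listdir(scratch_dir):
        src = os.path.join(scratch_dir, name)
        if os.path.isfile(src):
            shutil.copy2(src, home_results)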

 

Small I/O Scratch Space

ffs17 is intended for fast temporary storage when running jobs or programs that generate or read many small files. This space is not backed up. Like /mnt/local or $TMPDIR, it is optimized for high I/O on a large number of small files, but it can also be accessed from multiple nodes at the same time. ffs17 has a maximum storage capacity of 15 TB, and users have a hard limit of 100 GB to store files on /mnt/ffs17/users or /mnt/ffs17/groups. Users needing higher limits may request increases via the contact form. Unlike the scratch and home directories, you must create your own directory before running jobs, as sketched below. See Flash File System for more information on creating a space, running jobs, and changing file permissions on ffs17.
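
The following is a minimal Python sketch of that one-time setup, assuming the per-user location is the /mnt/ffs17/users path described above; the owner-only permission choice is an illustrative assumption.

    import os
    import stat

    user = os.environ["USER"]
    ffs_dir = os.path.join("/mnt/ffs17/users", user)

    # Create the per-user directory once before submitting jobs; ffs17 does not
    # create it for you automatically.
    os.makedirs(ffs_dir, exist_ok=True)

    # Restrict access to the owner only (equivalent to chmod 700); adjust if the
    # directory should instead be shared with a group under /mnt/ffs17/groups.
    os.chmod(ffs_dir, stat.S_IRWXU)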

 

Additional Storage Options

For MSU researchers, up to 1 TB of secure storage can be allocated per PI for free. Backups of data files are stored off-site. Additional storage can be purchased at an annual rate of $125/TB. External buyers may be required to pay an additional overhead charge. Please contact iCER about current rates.

To increase your storage quota up to 1 TB, please complete the Quota Increase Request Form.

To make a storage increase request beyond the first terabyte, please complete the Large Quota Increase Request Form.