
Daniel M. Drucker, Ph.D., Director of IT

Computing Facilities

Primary data analysis for most projects is performed on investigators' personal computers (Mac OS X, Linux, and Windows) or on our computational cluster. In addition, the MIC provides support infrastructure for specialized data-processing needs, consisting of a variety of Linux systems and networked printers. All machines are connected to the hospital-wide Ethernet or Wi-Fi network. Data storage is provided by an iXsystems TrueNAS file server, accessible via SMB and NFS and totaling ~1.5 PB; primary image storage is handled by Orthanc and XNAT servers. Backups are made daily to a similar TrueNAS file server located offsite at the Marlborough Data Center.
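As an illustration of how images stored on the Orthanc server might be accessed programmatically, the sketch below queries Orthanc's standard REST API from Python; the hostname, port, and credentials are placeholders, not actual MIC addresses.

import requests

ORTHANC_URL = "http://orthanc.example.org:8042"  # placeholder address (8042 is Orthanc's default port)
AUTH = ("user", "password")                      # placeholder credentials

# Orthanc lists stored studies as a flat array of internal IDs at /studies.
study_ids = requests.get(f"{ORTHANC_URL}/studies", auth=AUTH, timeout=30).json()

for sid in study_ids[:5]:
    # Each ID expands to a JSON record containing the study's DICOM tags.
    study = requests.get(f"{ORTHANC_URL}/studies/{sid}", auth=AUTH, timeout=30).json()
    print(sid, study["MainDicomTags"].get("StudyDescription", "<no description>"))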

The MIC computational cluster, based on Red Hat Linux, Bright Cluster Manager, and SLURM, consists of a PowerEdge R650xs head node (Ice Lake, Xeon Gold 6326, 16C/32T) with 256 GB RAM and four compute nodes, each a PowerEdge R750xa (Ice Lake, Xeon Gold 6342, 24C/48T) with 512 GB RAM. Two of the compute nodes each contain two NVIDIA A40 GPUs with 48 GB of memory apiece.
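To give a sense of how the GPU nodes are used, the following sketch submits a single-GPU batch job through SLURM's sbatch; the resource requests and the nvidia-smi placeholder workload are illustrative, and any partition or account settings specific to the MIC cluster are omitted.

import subprocess, textwrap

# --gres=gpu:1 requests one GPU on a GPU-equipped node.
batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=gpu-example
    #SBATCH --cpus-per-task=8
    #SBATCH --mem=64G
    #SBATCH --gres=gpu:1
    #SBATCH --time=04:00:00
    # nvidia-smi stands in for the real workload
    nvidia-smi
""")

with open("gpu_example.sbatch", "w") as f:
    f.write(batch_script)

# sbatch prints "Submitted batch job <id>" on success.
result = subprocess.run(["sbatch", "gpu_example.sbatch"],
                        capture_output=True, text=True, check=True)
print(result.stdout)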

The cluster hosts key software used at the MIC for fMRI processing, including fMRIPrep, FSL, SPM, AFNI, FreeSurfer, and ANTs. These packages support SLURM clusters and, for large jobs, achieve a speedup roughly proportional to the number of cores in the cluster. The cluster also hosts the full Human Connectome Project dataset, for use in hypothesis testing and generation.
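As a concrete example of how such jobs parallelize across the cluster, the sketch below submits one fMRIPrep run per subject via sbatch so that subjects process concurrently; the BIDS paths and subject labels are hypothetical.

import subprocess

BIDS_DIR = "/data/study/bids"         # hypothetical BIDS dataset location
OUT_DIR = "/data/study/derivatives"   # hypothetical output location
SUBJECTS = ["01", "02", "03"]

for sub in SUBJECTS:
    cmd = (f"fmriprep {BIDS_DIR} {OUT_DIR} participant "
           f"--participant-label {sub} --nthreads 8 --omp-nthreads 8")
    # --wrap lets sbatch run a one-line command without a separate script file.
    subprocess.run(["sbatch", "--job-name", f"fmriprep-sub-{sub}",
                    "--cpus-per-task", "8", "--mem", "32G",
                    "--time", "24:00:00", "--wrap", cmd], check=True)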

Offsite Clusters

For larger jobs, MIC researchers have access to two offsite facilities: 1) the Harvard Medical School O2 computational cluster, a shared, heterogeneous high-performance computing facility with more than 350 compute nodes, 11,000+ compute cores, assorted GPU cards, and more than 50 TB of memory, scheduled with SLURM; and 2) the Partners HealthCare ERISOne cluster, with over 380 compute nodes, 7,000 CPU cores, 56 TB of RAM, and 5 PB of storage, plus specialized parallel-processing resources including GPUs, scheduled with LSF. All fMRI processing tools available on the MIC cluster, including fMRIPrep, are also installed on both offsite clusters.
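Because the two offsite facilities use different schedulers (SLURM on O2, LSF on ERISOne), a small wrapper like the sketch below can keep job submission portable; the resource flags are generic, and the memory units assumed for bsub vary by site configuration.

import subprocess

def submit(command: str, scheduler: str, cpus: int = 8, mem_gb: int = 32) -> None:
    """Submit a shell command to either a SLURM or an LSF scheduler."""
    if scheduler == "slurm":
        args = ["sbatch", "--cpus-per-task", str(cpus),
                "--mem", f"{mem_gb}G", "--wrap", command]
    elif scheduler == "lsf":
        # bsub takes the command directly; -n requests cores and -M a memory
        # limit (units depend on the site's LSF configuration).
        args = ["bsub", "-n", str(cpus), "-M", str(mem_gb * 1024), command]
    else:
        raise ValueError(f"unknown scheduler: {scheduler}")
    subprocess.run(args, check=True)

submit("echo hello", "slurm")  # e.g., on O2
submit("echo hello", "lsf")    # e.g., on ERISOne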