
Research Computing Services

Schedule a Consultation

If you need personalized expertise to improve your lab workflow, design one of your projects, or specify compute or data storage requirements for a grant, you can schedule a consultation with our team of experts for a tailored recommendation.


Research Facilitation

It is our goal to make your research efficient, reproducible, and accessible. Our experts are here to provide a range of services to help your research succeed. Our research facilitation service includes providing you with resources and knowledge of IT capabilities, as well as helping you with technical writing for grants or publications.

We take the time to understand your work in order to ensure the scalability and sustainability of your project. We can help you with version control, code testing and optimization, collaboration, software citation, and more.


HPC Walk-In Clinics

Bring your laptop, your code, and your questions to the HPC walk-in clinic and get expert help on the spot. Experienced graduate students are encouraged to come help their peers by mentoring them in HPC tips and tricks. Faculty are also welcome to join. Among others, Sarvani Chadalapaka (MERCED System Administrator) and Matthias Bussonnier (Research Computing Facilitator) will be available to meet with and help members of the campus research computing community at these sessions.

WHERE 
ACS 306
WHEN 
Every Friday @ 10:30 am – 12 pm 


Facilities Statement

The MERCED cluster (Multi-Environment Research Computer for Exploration and Discovery), a shared resource for UC Merced researchers, is a Linux cluster from Advanced Clustering Technologies, funded by an NSF Major Research Instrumentation grant. There are 95 compute nodes with a total of 2116 cores at 2301 MHz, including 4 GPU nodes running NVIDIA K20 graphics cards and 2 GPU nodes running NVIDIA P100s. Total capacity is approximately 62 TFLOPS. The cluster runs the Slurm scheduler. There is 71 TB of disk space across the cluster itself, plus a main storage array of 164 TB and approximately 144 TB for project-specific storage.
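Since jobs on MERCED are managed by Slurm, work is typically submitted as a batch script. The sketch below shows the general shape of such a script; the job name, resource requests, module name, and program name are illustrative placeholders, not MERCED-specific values (actual partition names, modules, and limits vary by cluster).

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in queue listings
#SBATCH --nodes=1                 # request a single compute node
#SBATCH --ntasks=1                # one task (process)
#SBATCH --cpus-per-task=4         # four cores for that task
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=job_%j.out       # output file; %j expands to the job ID

# Load any software environment the job needs (module names vary by cluster)
# module load python

srun ./my_analysis                # launch the job step under Slurm
```

A script like this would be submitted with `sbatch myscript.sh`, and its status checked with `squeue -u $USER`.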

The Science DMZ is a campus-wide dedicated 1-10G network with direct connections to CENIC’s High-Performance Research Network; the Pacific Research Platform (PRP); and other regional, national, and international networks. The Science DMZ also hosts three dedicated Fast Input/Output Network Appliance (FIONA) data transfer nodes. These FIONAs provide over 250 TB of data disks that can be used for staging, moving, and sharing large data sets. The PRP provides a uniform and configuration-managed suite of FIONAs across all 10 UC campuses, allowing for seamless and nearly effortless data transfers. The Science DMZ also hosts two (2) additional FIONA8s, which combine the abilities of the traditional data transfer nodes with at least 8 high-end GPUs, allowing researchers to host the data required for machine learning techniques. This worldwide system, known as Nautilus, is available to researchers in any discipline by request. More information on location and other details is available here. The Science DMZ is supported by NSF Award #1659210. The PRP is supported by NSF Grant #1541349 under the CC*DNI DIBBs program.


Data Management

Managing research data can be challenging, especially when the data are large or sensitive (e.g., subject to HIPAA). Through a partnership with the University Library, Research IT can help! We offer services in developing a Data Management Plan through the use of the DMPTool (see DMPTool.org), and we can help in specifying application-specific storage solutions for purchase. Request a consultation here. If you require letters of compliance in order to gain access to sensitive data, or if your research requires you to safeguard sensitive data, we can help you navigate the complex regulatory environment and issue you letters of compliance so you can get started. See Research Data Management Consultation for more information.