
Research Computing Services

Schedule a Consultation

If you need personalized expertise to improve your lab workflow, design one of your projects, or specify the compute or data storage requirements for a grant, you can schedule a consultation with our team of experts for a tailored recommendation.



Research Facilitation

Our goal is to make your research efficient, reproducible, and accessible. Our experts are here to provide a range of services to help your research succeed. Our research facilitation service includes providing you with resources and knowledge of IT capabilities and helping you with technical writing for grants or publications.

We take the time to understand your work to ensure the scalability and sustainability of your project. We can help you with version control, code testing and optimization, collaboration, software citation, and more.



HPC Zoom Clinics

Bring your laptop, your code, and your questions to the HPC Zoom clinic and get expert help on the spot. Experienced graduate students are encouraged to mentor their peers in HPC tips and tricks, and faculty are also welcome to join. Among others, Yue Yu will be available to meet with and help campus research computing community members at these sessions.

WHERE
Zoom, Password: 895006
WHEN
Every Friday, 11:30 AM – 1:00 PM


MERCED cluster core-hours

Description: 

MERCED (Multi-Environment Research Computer for Exploration and Discovery) is a 2,096-core Linux-based high-performance computing cluster that was previously supported by National Science Foundation Award ACI-1429783. The MERCED cluster runs the CentOS operating system, a standard flavor of Linux, and employs the Slurm job scheduler and queueing system to manage job runs. Researchers can request compute cycles on the MERCED cluster for their research.

Cost:

OIT-CIRT recharge services, including MERCED cluster core-hours, will be renewed starting February 2023.

To minimize disruptions to computational research on MERCED cluster, the Provost’s office has provided bridge funding for all MERCED cluster PIs for core-hour usage on MERCED through June 30, 2024.

Faculty PIs: Please ensure that user accounts are active and provide the COA# here to use the MERCED recharge service.

The MERCED computational cycle accounting system is based on core-hours: one core-hour represents a single compute core(2) used for one hour, plus 2 GB of RAM. The total cost of a complete computation is:

Total Cost ($) = Number of cores × Job duration in wall-clock hours × Billing rate per core-hour


(2) A core is an individual processor: the part of a computer that actually executes programs. An HPC system like the MERCED cluster contains about 3,100 cores.

The UC rate is $0.01 per core-hour, and the External/Non-UC rate is $0.02 per core-hour.
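
For illustration, the short Python sketch below applies the cost formula above using the published rates; the 32-core, 10-hour job is a hypothetical example, not a recommended job size.

    # Cost estimate for a MERCED job, using the core-hour formula above.
    # The job size (32 cores) and duration (10 wall-clock hours) are
    # hypothetical examples; the rates are the published per-core-hour rates.
    RATES = {"UC": 0.01, "External/Non-UC": 0.02}  # $ per core-hour

    def job_cost(cores, wall_hours, rate_per_core_hour):
        """Total cost ($) = cores x wall-clock hours x billing rate per core-hour."""
        return cores * wall_hours * rate_per_core_hour

    cores, hours = 32, 10
    print(f"Core-hours used: {cores * hours}")                 # 320 core-hours
    for group, rate in RATES.items():
        print(f"{group}: ${job_cost(cores, hours, rate):.2f}")  # $3.20 (UC) / $6.40 (Non-UC)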

Please visit the New User Account - MERCED Cluster and Other Research Computing Systems page to request an account.


Enhanced User Support

Description:

Researchers can request additional support to resolve hardware, application, or maintenance issues on a faculty-managed machine.

Linux/Unix: The CIRT team supports research computing servers running a Linux/Unix OS that has not reached End of Life (EOL).
Windows: The CIRT team supports research computing servers running a Windows OS that has not reached End of Life (EOL).

Cost:

The services are billed per hour. The UC rate is $71.14 per hour, and the External/Non-UC rate is $108.20 per hour.



Software Installations

If the software package or version you need is unavailable in the provided software list, you may compile and install it yourself. The recommended location for user-installed software is the $HOME path. 

If the software installation requires root (sudo) access and/or if you need further assistance installing the software, you can request service here.
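
As a minimal sketch of a user-space install (Python is used here only to drive the usual configure/make steps; the package name and source path are hypothetical, and an Autotools-style build is assumed), the key point is to set the install prefix to a directory under $HOME:

    import os
    import subprocess

    # Hypothetical example: an Autotools-style package already unpacked under
    # $HOME/src is installed into $HOME/software instead of a system path,
    # so no root (sudo) access is needed.
    home = os.path.expanduser("~")
    prefix = os.path.join(home, "software", "mytool-1.0")   # install destination (hypothetical)
    src_dir = os.path.join(home, "src", "mytool-1.0")       # unpacked source tree (hypothetical)

    os.makedirs(prefix, exist_ok=True)

    # Standard configure / make / make install sequence with a user-writable prefix.
    subprocess.run(["./configure", f"--prefix={prefix}"], cwd=src_dir, check=True)
    subprocess.run(["make", "-j4"], cwd=src_dir, check=True)
    subprocess.run(["make", "install"], cwd=src_dir, check=True)

    # Afterwards, add the install location to your PATH, e.g. in ~/.bashrc:
    #   export PATH="$HOME/software/mytool-1.0/bin:$PATH"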


Data Management

Managing research data can be challenging, especially when the data are large or sensitive (e.g., HIPAA). Through a partnership with the University Library, Research IT can help! We offer services in developing a Data Management Plan through the use of the DMPTool (see DMPTool.org), and we can assist in specifying application-specific storage solutions for purchase. Request a consultation here. If you need letters of compliance to gain access to sensitive data, or if your research requires you to safeguard sensitive data, we can help you navigate the complex regulatory environment and issue the letters you need to get started. See Research Data Management Consultation for more information.



Service Level Agreements

Research Server Colocation

Researchers can request colocation services for installation, management, and hosting of faculty-owned computational hardware in CIRT-managed data center locations in SE-1, SE-2, SSM, COB-2, and Computational Research Facility (Borg Cube).

This service is for active faculty and research groups (including graduate students) who require colocation services to install, manage and host faculty-owned computational hardware.

Cost: Billing is based on the services and the CIRT support level requested.



Active Research Data Storage

Researchers can request storage for their active primary research data. Designed to be robust, reliable, and easy to access for users on the MERCED cluster, this storage is recommended for the following:

• Data storage for active research workflows on the MERCED cluster
• Fast read-write data access to and from MERCED cluster compute nodes

This service is for active faculty and research groups (including graduate students) who require storage for their active primary research data.

Cost: Active data storage is $0.05/GB/year (startup funds) and $0.06/GB/year (non-startup funds).
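
For a rough sense of the annual charge (the 500 GB volume below is a hypothetical example), multiply the stored volume in GB by the applicable per-GB rate:

    # Annual active-storage cost estimate; the 500 GB volume is a hypothetical example.
    STORAGE_RATES = {"startup funds": 0.05, "non-startup funds": 0.06}  # $ per GB per year

    volume_gb = 500
    for fund, rate in STORAGE_RATES.items():
        print(f"{fund}: ${volume_gb * rate:.2f} per year")   # $25.00 and $30.00 per year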



Charge Table 

MERCED cluster core-hours: UC rate $0.01 per core-hour; External/Non-UC rate $0.02 per core-hour
Enhanced User Support: UC rate $71.14 per hour; External/Non-UC rate $108.20 per hour
Research Server Colocation: billed according to the services and CIRT support level requested
Active Research Data Storage: Startup fund $0.05 per GB per year; Non-startup fund $0.06 per GB per year

How to Cite

If you publish research that benefited from the use of CIRT services or resources, we would greatly appreciate an acknowledgment that states: "This research [Part of this research] was conducted using [MERCED cluster (NSF-MRI #1429783) / Pinnacles (NSF-MRI #2019144) / Science DMZ (NSF-CC* #1659210)] at the Cyberinfrastructure and Research Technologies (CIRT) at the University of California, Merced."


Facilities Statement

The MERCED Cluster (Multi-Environment Research Computer for Exploration and Discovery) is a Linux cluster from Advanced Clustering Technologies, funded by an NSF Major Research Instrumentation grant. It has:
  - 100+ nodes with between 20 and 42 cores each and 128 GB to 256 GB of RAM per machine, for a total of 2,000+ cores and 18 TB of RAM
  - 6 GPU nodes with 12 NVIDIA Tesla P100 GPUs
  - 5 storage nodes with a total of 350 TB
All of the above nodes are interconnected via InfiniBand with RDMA for fast (25 Gbit/s), low-latency (sub-millisecond) data transfer.

The Science DMZ is a campus-wide dedicated 1-10G network with direct connections to CENIC's High-Performance Research Network; the Pacific Research Platform (PRP); and other regional, national, and international networks. The Science DMZ also hosts three dedicated Fast Input/Output Network Appliance (FIONA) data transfer nodes. These FIONAs provide over 250 TB of data disks that can be used for staging, moving, and sharing large data sets. The PRP provides a uniform and configuration-managed suite of FIONAs across all 10 UC campuses, allowing for seamless and nearly effortless data transfers. The Science DMZ also hosts two (2) additional FIONA8s, which combine the abilities of the traditional data transfer nodes with at least 8 high-end GPUs, allowing researchers to host the data required for machine learning techniques. This worldwide system, known as Nautilus, is available to researchers in any discipline by request. More information on location and other details is available here. The Science DMZ is supported by NSF Award #1659210. The PRP is supported by NSF Grant #1541349 under the CC*DNI DIBBs program.

The Wide Area Visualization Environment (WAVE) is constructed of 20 4K (Ultra HD) stereoscopic Organic Light Emitting Diode (OLED) displays tiled together in a 5 x 4 half-pipe matrix. These OLED displays are capable of displaying over 1 billion distinct colors (compared to around 16.7 million for standard displays) with a contrast ratio of 200,000:1. The system is driven by 10 dual Tesla GPU compute nodes with several TB of ultra-fast SSD drives for working with multi-TB data sets. In addition to the stunning 2D/3D imagery, the system is connected to the Science DMZ at 10 Gbps (and to the rest of campus at 40 Gbps), so multi-site collaborations are possible. While each individual part of the WAVE (e.g., the displays, the GPUs, the network) is commodity hardware, the sophisticated engineering and open-source software it runs make the WAVE and its kind a unique research tool.

The WAVE Lab has a system for space and compute reservations.

All compute systems normally run jobs through an automatic scheduler. Still, it is possible to request a special reservation to ensure that the compute, network, and storage resources will be available during a specific time slot.