
Welcome To The Transportation Research And Analysis Computing Center (TRACC)

Chartered in 1946 as the nation's first national laboratory, Argonne enters the 21st century focused on solving the major scientific and engineering challenges of our time: sustainable energy, a clean environment, economic competitiveness and national security. Argonne is pursuing major research initiatives that support the U.S. Department of Energy's goals to create innovative and game-changing solutions to national problems, including state-of-the-art transportation research.


Featured Story

Argonne researchers to study Chicago emergency evacuation system

How best to evacuate a major city? The Federal Transit Administration has awarded Argonne a $2.9 million grant to study methods and create tools for building more resilient mass transit systems to evacuate big cities in an emergency. Read the full story with audio/video clips.

Important News Concerning System Upgrades

April 6, 2016

General Phoenix Upgrade
This is an important notice concerning planned upgrades to the Transportation Research and Analysis Computing Center's clusters, Phoenix and Zephyr. The upgrades will be completed in phases: some are already underway, while others are still in the early stages of consideration. These upgrades will require some changes to your user code in order for it to run properly. The changes will primarily affect Phoenix, and we plan to keep a pool of nodes running as they are now so that your existing code will continue to run on those nodes. As each phase is implemented, the pool of unchanged nodes will shrink until all nodes have been converted to the new hardware/software configuration.

The first major change to Phoenix will be to the disk storage system. The initial configuration ran IBM's GPFS storage software on a DDN hardware system. The plan is to replace the entire GPFS/DDN file system with 45 Seagate 4 TB disks housed in a SuperMicro disk controller chassis. We will run the Gluster network file system on top of ZFS with RAIDZ3 for redundancy. This should provide 180 TB of raw storage, which translates into approximately 130 TB of user space.
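As a back-of-the-envelope check on those figures, the short Python sketch below reproduces the raw and usable numbers. The three-group RAIDZ3 layout and the decimal-to-binary unit conversion are assumptions for illustration only; this notice does not specify the actual vdev configuration.

```python
# Back-of-the-envelope check on the storage numbers quoted above.
# The vdev layout (three 15-disk RAIDZ3 groups) is an assumption for
# illustration; the notice only states 45 x 4 TB disks with RAIDZ3.

DISK_TB = 4                  # marketed capacity per disk, decimal terabytes
DISKS_TOTAL = 45
VDEVS = 3                    # assumed: three RAIDZ3 groups of 15 disks each
DISKS_PER_VDEV = DISKS_TOTAL // VDEVS
PARITY_PER_VDEV = 3          # RAIDZ3 dedicates three disks per group to parity

raw_tb = DISKS_TOTAL * DISK_TB                           # 180 TB raw
data_disks = VDEVS * (DISKS_PER_VDEV - PARITY_PER_VDEV)  # 36 data disks
after_parity_tb = data_disks * DISK_TB                   # 144 TB

# Converting decimal TB to the binary TiB most tools report trims roughly 9%,
# which lands near the ~130 TB of user space quoted in the notice.
reported_tib = after_parity_tb * 1e12 / 2**40

print(f"raw: {raw_tb} TB, after parity: {after_parity_tb} TB, "
      f"reported: ~{reported_tib:.0f} TiB")
```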

Another SuperMicro chassis with two processors totaling 12 cores will serve the file system and host virtual machines for cluster administration and monitoring. We will be installing the hardware for the disk storage system in mid-April. The installation should be transparent to our users, although there may be one or more short downtimes when we cut over to the new storage system.

In late April we plan to start replacing the RedHat operating system currently running on our nodes with the latest version of CentOS (CentOS 7.2). We will do this in stages, installing the new OS on groups of nodes and placing them in new queues. The new queues will run CentOS and use the new cluster file system; the other queues will continue to run RedHat and use the old GPFS file system. Thus, existing user software should continue to run on the RedHat-based nodes without any changes. Over time, we will continue to migrate pools of nodes to CentOS and Gluster until all RedHat nodes have been migrated, at which point the old GPFS-based file system will be shut down. We expect this schedule will give users sufficient time to adapt their code so that it runs efficiently under the new OS and on the new file system.
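While both node pools are in service, a job may land on either a RedHat/GPFS node or a CentOS/Gluster node. The sketch below shows one possible way a user script could detect which environment it is running in; the release-file check and the mount points are assumptions for illustration, not actual TRACC paths.

```python
# Minimal sketch of how a user script might detect which node pool it is
# running on during the transition. The release-file check and the mount
# points below are assumptions for illustration, not actual TRACC paths.

def node_flavor(release_file="/etc/redhat-release"):
    """Return 'centos' for migrated nodes, 'redhat' for legacy nodes."""
    try:
        with open(release_file) as f:
            text = f.read().lower()
    except OSError:
        return "unknown"
    return "centos" if "centos" in text else "redhat"

# Hypothetical scratch locations for the new Gluster and old GPFS file systems.
SCRATCH = {
    "centos": "/gluster/scratch",
    "redhat": "/gpfs/scratch",
    "unknown": "/tmp",
}

if __name__ == "__main__":
    flavor = node_flavor()
    print(f"Detected a {flavor} node; using scratch space at {SCRATCH[flavor]}")
```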

NHTSA-Funded Upgrade

After the above upgrades are in place and running (May timeframe), we are considering adding a new set of nodes to Phoenix. This would add 10 nodes, each with two 14-core Intel Xeon processors. These nodes should be at least twice as efficient as the old AMD nodes. They will run in a special queue and use the same file storage system described above. This queue will be dedicated to one of our clients (NHTSA) and optimized for running LS-DYNA.
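For scale, the proposed queue works out to 280 cores, as the short tally below shows; the per-node speedup constant simply restates the "at least twice as efficient" claim from this notice and is not a measurement.

```python
# Simple tally of the proposed NHTSA queue described above.
NODES = 10
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 14
SPEEDUP_VS_OLD_AMD = 2.0  # "at least twice as efficient" per this notice, not measured

total_cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
print(f"NHTSA queue: {total_cores} cores across {NODES} nodes "
      f"(claimed >= {SPEEDUP_VS_OLD_AMD}x per-node efficiency vs. the old AMD nodes)")
```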

We plan to keep this article up to date as the upgrade proceeds, so please check back periodically to track the latest changes. We also plan to rewrite the TRACC Wiki, where you will be able to find more detail.