The Extreme Science and Engineering Discovery Environment (XSEDE) is a single virtual system that scientists can use to interactively access computing resources from the nation's leading supercomputing centers. XSEDE resources include High Performance Computing, High Throughput Computing, visualization, storage, and related services.
As of July 2015, Georgia State University is a member of XSEDE as a Level 3 Service Provider.
Orion vs. XSEDE: When to Use Which
GSU’s Orion gives researchers immediate access and a convenient platform for development, but the resources Orion offers may not be sufficient for a given large-scale problem. XSEDE provides vast resources that allow a researcher to tackle such problems, but the wait time before a job starts is longer. Therefore, Orion should be used for smaller jobs (no more than 120 cores), and XSEDE should be used for larger jobs. Orion may also be used for testing codes before scaling jobs out to XSEDE resources.
Georgia State receives resource allocations through XSEDE’s Campus Champions program. For a list of the resources available to Georgia State, click the link below. To get started with XSEDE, contact Suranga Edirisinghe or Semir Sarajlic.
Request XSEDE Account
To access XSEDE resources, you must first create an XSEDE User Portal (XUP) account.
Create an Account
Visit our documentation for getting started with Globus on XSEDE
Training & Documentation
XSEDE New User Training
XSEDE Training Calendar
Learn How to Request Allocations
Write a Successful XSEDE Research Allocation Proposal
XSEDE provides a Single Sign-On (SSO) login hub, which allows users to log in to the XSEDE portal and access individual XSEDE resources using a single XSEDE username and password.
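As an illustrative sketch of the SSO workflow: `login.xsede.org` is the standard address of the XSEDE SSO hub, and `my-xsede-user` is a placeholder for your own XUP username; the short resource alias on the hub (e.g., `stampede`) is an assumed example and may differ from the exact alias on the hub.

```shell
# Step 1: log in to the XSEDE Single Sign-On hub with your XUP credentials.
# "my-xsede-user" is a placeholder for your own XSEDE username.
ssh my-xsede-user@login.xsede.org

# Step 2: from the hub, hop to an individual resource with gsissh.
# Your SSO credential is forwarded, so no second password is required.
gsissh stampede
```

Because authentication happens once at the hub, you can move between XSEDE resources in the same session without re-entering your password.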
Accessing XSEDE Resources
To access XSEDE, use the command listed below, then use the commands provided in the table to access individual XSEDE resources. If you do not have an XSEDE account, create one and get in touch with the GSU Campus Champions (Semir Sarajlic or Suranga Edirisinghe).
| Resource Name | Resource Provider and Login Info | Resource Type | Resource Features |
|---|---|---|---|
| Blacklight | Pittsburgh Supercomputing Center (PSC) | SMP | Jobs requiring 1440 cores receive <=48 hours of walltime; jobs that require more than 1440 cores can be arranged as well. |
| Gordon | San Diego Supercomputer Center (SDSC) | Cluster | 1024 compute nodes and 64 I/O nodes. |
| Open Science Grid (OSG) | Governed by the OSG Consortium | Grid | High Throughput Computing (HTC) |
| Stampede | Texas Advanced Computing Center (TACC)<br>`gsissh -p 2222 stampede.tacc.xsede.org` | Cluster | Versatile system that provides 7+ PF of peak performance from Xeon Phi coprocessors and an additional 2+ PF from Xeon E5 processors. The system also includes large-memory nodes and graphics nodes for visualization and computation. |
| SuperMIC | Center for Computation & Technology (CCT) at LSU<br>`gsissh -p 2222 supermic.cct-lsu.xsede.org` | Cluster | For large-scale computation; 360 compute nodes, 20 hybrid compute nodes, and 840 TB of Lustre high-performance disk. |
| Maverick | Texas Advanced Computing Center (TACC)<br>`gsissh -p 2222 maverick.tacc.xsede.org` | Cluster | For visualization of large-scale data; 512 GPUs with 14.5 TB aggregate memory. |
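As a sketch of a direct login session, assuming you have a valid XSEDE credential (obtained via the SSO hub or a grid certificate): the `gsissh` command below is taken from the table; `my-xsede-user` and `localfile.tar.gz` are placeholder names, and the `gsiscp` transfer is an assumed example using the GSI-OpenSSH counterpart of `scp`.

```shell
# Open a session on Stampede using the login command from the table above.
# GSI-SSH listens on port 2222 rather than the default SSH port.
gsissh -p 2222 stampede.tacc.xsede.org

# Small file transfers can use gsiscp (note that scp-style tools take -P,
# uppercase, for the port). "my-xsede-user" is a placeholder username.
gsiscp -P 2222 localfile.tar.gz my-xsede-user@stampede.tacc.xsede.org:~/
```

For large datasets, prefer Globus over `gsiscp`; see the Globus documentation linked above.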