Supercomputer

Using uv

uv ended operation in December 2014. The following is archived information.

SGI UV 100 (uv) is a distributed shared-memory machine equipped with 128 CPU cores (16 sockets of Intel Xeon E7 8837) and 2 TB of main memory. Because there are no restrictions on job execution time or memory usage, users can use this machine as long as its resources permit.
For the operating system, we have adopted 64-bit Red Hat Enterprise Linux based on Linux kernel 2.6, the same standard Linux used on Shirokane2.

System Configuration

  Device                 Name         CPU                                            Memory
  Shared-memory server   SGI UV 100   128 CPU cores (Intel Xeon E7 8837, 2.66 GHz)   2 TB

Proven Analysis Tools

RAM Disk and Scratch Disk

A 1 TB RAM disk ( /dev/shm ) is available on uv. If disk I/O is the bottleneck for an existing analysis tool, you can easily speed it up by using this area, without making complex changes to the program. In addition, since uv has a 12 TB scratch disk, tools that output a large number of temporary files, such as NGS analysis tools, can be run without problems. There is no quota limit, and these areas are freely available.
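
For example, many tools honor the standard TMPDIR environment variable, so their temporary files can be redirected to the RAM disk without changing the program. A minimal sketch (the input file is a placeholder, and whether a particular tool respects TMPDIR is an assumption):

uv % mkdir -p /dev/shm/$USER               # workspace on the 1 TB RAM disk
uv % export TMPDIR=/dev/shm/$USER          # many tools create temporary files under $TMPDIR
uv % sort -T $TMPDIR large_input.txt > sorted.txt    # GNU sort also accepts the directory via -T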

How to Use uv?

To use uv, please select the "Add uv" common option on the Application Form for Use of the HGC Supercomputer System.

Application Form for Use of the HGC Supercomputer System
  • Excel Format Application Form

uv is available from both Shirokane1 and Shirokane2.
Users of the following group courses can use uv without the "Add uv" procedure.

How to Log in to uv?

To log in to uv, first log in to one of the login-nodes (gw.hgc.jp, ngw.hgc.jp), and then log in to uv from there. uv cannot be used from the login-nodes themselves.


user's pc % ssh gw.hgc.jp -l username
Password:
gw.hgc.jp % ssh uv
Password:
uv %

The disk quota and home directory on uv are the same as on the normal compute-nodes, so you do not need to copy your data or add a new disk. uv also mounts the home directories of both Shirokane1 and Shirokane2.

For details on how to log in to the login-nodes from each OS, please refer here.
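
If your local machine runs OpenSSH, the two-step login can also be collapsed into a single command with a client-side configuration. A sketch of ~/.ssh/config, assuming a reasonably recent OpenSSH client with ProxyJump support (the Host aliases are placeholders; replace username with your own account):

# ~/.ssh/config on your PC (hostnames as in the example above)
Host hgc-gw
    HostName gw.hgc.jp
    User username

Host hgc-uv
    HostName uv
    User username
    ProxyJump hgc-gw        # hop through the login-node automatically

With this in place, "ssh hgc-uv" logs you in to uv through gw.hgc.jp in one step.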

Job Management System Dedicated to uv

To run a job on uv, use Univa Grid Engine (UGE) as the job management system. The UGE on uv is independent of the UGE on the login-nodes (ngw and gw), but its usage is much the same. After logging in to uv, submit jobs to this UGE dedicated to uv. There is only one queue (batch.q), and there are no restrictions on run time or memory. Because there is just one queue, you do not need to choose between queues or specify detailed options to suit the characteristics of your job, which makes job submission easy.
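
A job is submitted as a shell script, as in the execution example below. A minimal sketch of what a script such as simple.sh might contain (the directives shown are standard UGE options; the script body is a placeholder for your own command):

#!/bin/sh
#$ -S /bin/sh       # interpreter used to run the job
#$ -cwd             # run the job in the current working directory
#$ -o simple.log    # write standard output to this file
#$ -j y             # merge standard error into standard output

# Replace the line below with your own analysis command.
echo "running on $(hostname)"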

Execution example


uv % qsub  simple.sh
Your job 915 ("simple.sh") has been submitted
uv % qstat -f
queuename                      qtype resv/used/tot. load_avg arch          states
---------------------------------------------------------------------------------
batch.q@uv                     BIP   0/1/128        26.07    Xeon          
    915 1.05500 simple.sh  sgiadm       r     12/11/2012 11:09:50     1        
---------------------------------------------------------------------------------
test.q@uv                      BIP   0/0/1          26.07    Xeon          
uv %

Introduction to uv-Specific Commands

HTC_Bio

When a file in which multiple sequences have been combined is given as the query to an alignment tool such as BLAST, BLAST+, or FASTA, HTC_Bio splits each sequence into a separate file, runs the calculations simultaneously with the specified degree of parallelism, and outputs the collected results to a specified directory.


HTC_Bio version 4.0
-------------------

USAGE: HTC_Bio <application> -exec <command line> <general options> <application options>

where the <general options> are:

    -in_dir [.] -out_dir [in_dir.pid] -in_type [fasta] -cluster <cluster> -ncpu <# CPUs> [max] -verbose -collate -standalone -standalone_with_fasta_input

and the <application options> are:

    BLAST:	-block [blocksize] -sort -phiblast -wublast -stdout
    FASTA:	-block [blocksize] -sort -db <database> -db_type [0] -ktup [protein=2;dna=6] -stdout
    CLUSTALW:	-noinfo
    HMMER:	-block [blocksize] -sort -db <database> -stdout
    WISE2:	-db <database> -sort -stdout
    STANDALONE:	-block [blocksize] -sort -stdout

Please see the README file for more information.

In addition, if you want to run mapping software such as Bowtie, SOAP, or BWA in parallel, please use the pmap command. pmap is not a uv-specific command.

dplace

uv is a distributed shared-memory machine composed of eight nodes. A process is scheduled on a CPU of a lightly loaded node, and memory that is physically close to that CPU is allocated. While the process stays on that node, it references local memory cheaply; but if it later migrates from node to node, it ends up referencing the memory of a remote node, which is more expensive. Such remote access is likely to degrade the performance of the process. The dplace command binds a process to a specific CPU so that it does not migrate. This increases the proportion of references to nearby memory and improves performance.

Type dplace followed by the command you want to run.


uv % dplace bash
Querying dplace shows that the captured process is bound to CPU #0:

uv % dplace -qqq
-------------------- Active Dplace Jobs -------------
key          tasks   owner                       pid     cpu    name
0x000003f5       1   sgiadm                    62640       0    bash
uv %
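
For example, a long-running, memory-intensive analysis can be started under dplace so that it stays on one CPU and keeps using that CPU's local memory. A sketch using dplace's explicit CPU list option (the program name, arguments, and choice of CPU are placeholders):

uv % dplace -c 64 ./my_analysis_tool input.dat   # pin the process to CPU #64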

cpumap

Execution example

uv % cpumap
uv

This an SGI UV
model name           : Intel(R) Xeon(R) CPU E7- 8837 @ 2.67GHz
Architecture         : x86_64
cpu MHz              : 2666.863
cache size           : 24576 KB (Last Level)

Total Number of Sockets             	: 16
Total Number of Cores               	: 128	(8 per socket)
Hyperthreading                      	: OFF

UV Information
 HUB Version: 				 UVHub  2.0
 Number of Hubs: 			 8
 Number of connected NUMAlink ports: 	 0
=============================================================================

Hub-Processor Mapping

  Hub Location      Processor Numbers -- HyperThreads in ()
  --- ----------    ---------------------------------------
    0 r001i21b00       0    1    2    3    4    5    6    7    8    9   10   11   12   13   14   15
    1 r001i21b01      16   17   18   19   20   21   22   23   24   25   26   27   28   29   30   31
    2 r001i24b00      32   33   34   35   36   37   38   39   40   41   42   43   44   45   46   47
    3 r001i24b01      48   49   50   51   52   53   54   55   56   57   58   59   60   61   62   63
    4 r001i27b00      64   65   66   67   68   69   70   71   72   73   74   75   76   77   78   79
    5 r001i27b01      80   81   82   83   84   85   86   87   88   89   90   91   92   93   94   95
    6 r001i30b00      96   97   98   99  100  101  102  103  104  105  106  107  108  109  110  111
    7 r001i30b01     112  113  114  115  116  117  118  119  120  121  122  123  124  125  126  127

=============================================================================

Processor Numbering on Socket(s)

  Socket    (Logical) Processors
  ------    -------------------------
     0      0    1    2    3    4    5    6    7
     1      8    9   10   11   12   13   14   15
     2     16   17   18   19   20   21   22   23
     3     24   25   26   27   28   29   30   31
     4     32   33   34   35   36   37   38   39
     5     40   41   42   43   44   45   46   47
     6     48   49   50   51   52   53   54   55
     7     56   57   58   59   60   61   62   63
     8     64   65   66   67   68   69   70   71
     9     72   73   74   75   76   77   78   79
    10     80   81   82   83   84   85   86   87
    11     88   89   90   91   92   93   94   95
    12     96   97   98   99  100  101  102  103
    13    104  105  106  107  108  109  110  111
    14    112  113  114  115  116  117  118  119
    15    120  121  122  123  124  125  126  127

=============================================================================

Sharing of Last Level (3) Caches

  Socket    (Logical) Processors
  ------    -------------------------
     0      0    1    2    3    4    5    6    7
     1      8    9   10   11   12   13   14   15
     2     16   17   18   19   20   21   22   23
     3     24   25   26   27   28   29   30   31
     4     32   33   34   35   36   37   38   39
     5     40   41   42   43   44   45   46   47
     6     48   49   50   51   52   53   54   55
     7     56   57   58   59   60   61   62   63
     8     64   65   66   67   68   69   70   71
     9     72   73   74   75   76   77   78   79
    10     80   81   82   83   84   85   86   87
    11     88   89   90   91   92   93   94   95
    12     96   97   98   99  100  101  102  103
    13    104  105  106  107  108  109  110  111
    14    112  113  114  115  116  117  118  119
    15    120  121  122  123  124  125  126  127
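
The socket and cache maps above can be used to choose CPU numbers when pinning a program with dplace. For example (a sketch; the tool name and thread count are placeholders), keeping an 8-thread program on the eight CPUs of socket 15 lets its threads share one last-level cache and one node's local memory:

uv % dplace -c 120-127 ./my_8thread_tool         # pin the program and its threads to CPUs 120-127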

Java Run-Time Options

Specify -XX:ParallelGCThreads=1 when a Java program cannot run and "out of memory" is displayed.

Java starts as many GC threads as there are physical CPU cores on the machine. Therefore, on a machine with a large number of CPU cores such as uv, a program that runs fine on a PC may fail to secure the required memory and print a message such as "out of memory". Specifying this option may solve the problem.
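
For example (a sketch; the heap size and jar name are placeholders), a Java-based tool can be started on uv with the number of parallel GC threads fixed to 1:

uv % java -XX:ParallelGCThreads=1 -Xmx16g -jar your_java_tool.jar input.bam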
