Services validation aptitude

From the LDN wiki


The title sounds nice, but in reality this page is a summary of the performance-testing session.
=====================================================
== Ma machine pour comparer =========================
=====================================================
  fio --size=2g --bs=64k --rw=write --ioengine=libaio --name=fio => 118489KB/s
  hdparm -t => 120.19 MB/sec (spinning disk)
  hdparm -t => 383.54 MB/sec (SSD)
=====================================================
=== Host (services) =================================
=====================================================
# Config
The service disks are 1.5 TB Western Digital Caviar Green drives, all with the same reference: WDC WD15EADS-00P8B0 (fw: 01.00A01) (per hdparm -i).
At the RAID level it is a classic RAID5 (3 disks) via mdadm.
/dev/md1:
        Version : 1.2
  Creation Time : Mon Jul 15 10:35:52 2013
     Raid Level : raid5
     Array Size : 2929816576 (2794.09 GiB 3000.13 GB)
  Used Dev Size : 1464908288 (1397.05 GiB 1500.07 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent
    Update Time : Wed Apr 23 11:05:35 2014
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 512K
           Name : services:1
           UUID : 1ce82a81:c375123b:3660348b:6963a293
         Events : 274
    Number   Major   Minor   RaidDevice State
       0       8       34        0      active sync   /dev/sdc2
       1       8        5        1      active sync   /dev/sda5
       2       8       21        2      active sync   /dev/sdb5
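As a sanity check, the Array Size reported above is exactly what RAID5 geometry predicts (usable space = number of disks minus one, times the per-disk size, since one disk's worth goes to parity); a quick sketch with the numbers from the mdadm output:

```shell
# RAID5: one disk's worth of space goes to parity, the rest is usable.
# Values taken from the mdadm output above, in KiB.
used_dev_size=1464908288
raid_devices=3
array_size=$(( (raid_devices - 1) * used_dev_size ))
echo "expected array size: ${array_size} KiB"   # 2929816576, as reported
```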
For RAID tuning, the read-ahead was changed (default value: 256):
$ blockdev --setra 4096 /dev/md1
The stripe_cache_size was also changed:
$ cat /sys/block/md1/md/stripe_cache_size
8192
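For reference, blockdev --setra counts 512-byte sectors, so 4096 corresponds to 2 MiB of read-ahead, i.e. two full stripes of this array (512K chunk, 2 data disks per stripe); a quick check:

```shell
# Relate the read-ahead setting to the RAID5 stripe geometry (values from above).
chunk_kib=512      # mdadm chunk size
data_disks=2       # 3-disk RAID5 => 2 data disks per stripe
ra_sectors=4096    # blockdev --setra argument, in 512-byte sectors

stripe_kib=$(( chunk_kib * data_disks ))   # one full stripe: 1024 KiB
ra_kib=$(( ra_sectors / 2 ))               # 4096 sectors = 2048 KiB
echo "read-ahead covers $(( ra_kib / stripe_kib )) full stripe(s)"
```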
# Tests
               hdparm -t       
-------------------------------
  sda           84.89 MB/sec   
  sdb           95.32 MB/sec
  sdc           91.49 MB/sec
  services-root 171.78 MB/sec
  
fio --size=2g --bs=64k --rw=write --ioengine=libaio --name=fio : 2188.4MB/s (!!) then 175994KB/s and 243656KB/s
fio --size=2g --bs=64k --rw=read --ioengine=libaio --name=fio : 157929KB/s, 169302KB/s, 173691KB/s, 164250KB/s
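Given the spread between runs, the average is a steadier figure; a small sketch over the four read results above:

```shell
# Mean of the four fio read runs quoted above (KB/s).
avg=$(printf '%s\n' 157929 169302 173691 164250 \
      | awk '{ s += $1 } END { printf "%d", s / NR }')
echo "mean read bandwidth: ${avg} KB/s"   # 166293 KB/s
```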
gcorona@services:~/temp$ sudo chown gcorona .
gcorona@services:~/temp$ /usr/sbin/bonnie++ -s 50g -d $(pwd)
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
services.ldn-fa 50G  2468  87 97946   7 43630   2  3235  54 163693   4 254.1   8
Latency             16586us    3616ms    2074ms     307ms     237ms     153ms
Version  1.96       ------Sequential Create------ --------Random Create--------
services.ldn-fai.ne -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4071  33 +++++ +++ 13655   2 11034  83 +++++ +++ +++++ +++
Latency             12353us     129us      72us   29677us      51us     366us
1.96,1.96,services.ldn-fai.net,1,1398185044,50G,,2468,87,97946,7,43630,2,3235,54,163693,4,254.1,8,16,,,,,4071,33,+++++,+++,13655,2,11034,83,+++++,+++,+++++,+++,16586us,3616ms,2074ms,307ms,237ms,153ms,12353us,129us,72us,29677us,51us,366us
=====================================================
=== From the VM (WTF???)                           ===
=====================================================
vdb is a partition with a single layer of LVM
/dev/vdb:
 Timing buffered disk reads: 352 MB in  3.06 seconds = 114.90 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vda
/dev/vda:
 Timing buffered disk reads: 464 MB in  3.01 seconds = 154.15 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vda
/dev/vda:
 Timing buffered disk reads: 460 MB in  3.01 seconds = 152.83 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 622 MB in  3.02 seconds = 205.67 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 1000 MB in  3.01 seconds = 332.63 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 1252 MB in  3.04 seconds = 412.51 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 1558 MB in  3.05 seconds = 510.64 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 1722 MB in  3.01 seconds = 571.82 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 1972 MB in  3.02 seconds = 653.72 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 2118 MB in  3.00 seconds = 705.32 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 2324 MB in  3.00 seconds = 773.74 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vdb
/dev/vdb:
 Timing buffered disk reads: 2552 MB in  3.02 seconds = 845.06 MB/sec
gcorona@blender:~/temp$ sudo hdparm -t /dev/vda
gcorona@blender:~/temp$ fio --size=2g --bs=64k --rw=write --ioengine=libaio --name=fio
fio: (g=0): rw=write, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=1
2.0.8
Starting 1 process
fio: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 1 (f=1): [W] [100.0% done] [0K/27300K /s] [0 /426  iops] [eta 00m:00s]          
fio: (groupid=0, jobs=1): err= 0: pid=25865
  write: io=2048.0MB, bw=25773KB/s, iops=402 , runt= 81371msec
    slat (usec): min=15 , max=464309 , avg=2477.75, stdev=5778.96
    clat (usec): min=0 , max=2234 , avg= 1.10, stdev=12.37
     lat (usec): min=16 , max=464311 , avg=2480.10, stdev=5780.05
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    1], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    2], 90.00th=[    2], 95.00th=[    2],
     | 99.00th=[    3], 99.50th=[    4], 99.90th=[   10], 99.95th=[   11],
     | 99.99th=[   29]
    bw (KB/s)  : min= 4353, max=189849, per=100.00%, avg=25810.33, stdev=21757.74
    lat (usec) : 2=78.34%, 4=21.14%, 10=0.33%, 20=0.16%, 50=0.02%
    lat (usec) : 100=0.01%
    lat (msec) : 4=0.01%
  cpu          : usr=0.08%, sys=1.83%, ctx=12144, majf=1, minf=20
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=32768/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=2048.0MB, aggrb=25772KB/s, minb=25772KB/s, maxb=25772KB/s, mint=81371msec, maxt=81371msec
Disk stats (read/write):
  vdb: ios=0/4289, merge=0/103, ticks=0/12596244, in_queue=12592400, util=100.00%
gcorona@blender:~/temp$ fio --size=2g --bs=64k --rw=write --ioengine=libaio --name=fio
fio: (g=0): rw=write, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=1
2.0.8
Starting 1 process
Jobs: 1 (f=1): [W] [100.0% done] [0K/19884K /s] [0 /310  iops] [eta 00m:00s]           
fio: (groupid=0, jobs=1): err= 0: pid=26397
  write: io=2048.0MB, bw=19908KB/s, iops=311 , runt=105344msec
    slat (usec): min=15 , max=130181 , avg=2726.25, stdev=5445.86
    clat (usec): min=0 , max=121 , avg= 1.02, stdev= 1.35
     lat (usec): min=15 , max=130183 , avg=2728.25, stdev=5447.08
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    1], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    2], 90.00th=[    2], 95.00th=[    2],
     | 99.00th=[    3], 99.50th=[    3], 99.90th=[   10], 99.95th=[   27],
     | 99.99th=[   36]
    bw (KB/s)  : min=    4, max=804844, per=100.00%, avg=23385.34, stdev=59855.44
    lat (usec) : 2=79.57%, 4=19.97%, 10=0.30%, 20=0.09%, 50=0.06%
    lat (usec) : 100=0.01%, 250=0.01%
  cpu          : usr=0.05%, sys=1.43%, ctx=7570, majf=0, minf=22
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=32768/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=2048.0MB, aggrb=19907KB/s, minb=19907KB/s, maxb=19907KB/s, mint=105344msec, maxt=105344msec
Disk stats (read/write):
  vdb: ios=0/3740, merge=0/98, ticks=0/14862032, in_queue=15315168, util=100.00%
gcorona@blender:~/temp$ fio --size=2g --bs=64k --rw=read --ioengine=libaio --name=fio
fio: (g=0): rw=read, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=1
2.0.8
Starting 1 process
Jobs: 1 (f=1)
fio: (groupid=0, jobs=1): err= 0: pid=28358
  read : io=2048.0MB, bw=2000.0MB/s, iops=32000 , runt=  1024msec
    slat (usec): min=6 , max=3004 , avg=26.12, stdev=53.72
    clat (usec): min=0 , max=175 , avg= 0.71, stdev= 1.65
     lat (usec): min=6 , max=3007 , avg=27.24, stdev=53.99
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    1], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    5], 99.90th=[   21], 99.95th=[   23],
     | 99.99th=[   35]
    bw (MB/s)  : min= 1884, max= 2217, per=100.00%, avg=2051.01, stdev=235.96
    lat (usec) : 2=97.81%, 4=1.68%, 10=0.10%, 20=0.28%, 50=0.12%
    lat (usec) : 100=0.01%, 250=0.01%
  cpu          : usr=7.43%, sys=79.37%, ctx=7342, majf=0, minf=38
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
   READ: io=2048.0MB, aggrb=2000.0MB/s, minb=2000.0MB/s, maxb=2000.0MB/s, mint=1024msec, maxt=1024msec
Disk stats (read/write):
  vdb: ios=7828/0, merge=0/0, ticks=724/0, in_queue=720, util=63.33%
gcorona@blender:~/temp$ fio --size=2g --bs=64k --rw=read --ioengine=libaio --name=fio
fio: (g=0): rw=read, bs=64K-64K/64K-64K, ioengine=libaio, iodepth=1
2.0.8
Starting 1 process
Jobs: 1 (f=1)
fio: (groupid=0, jobs=1): err= 0: pid=28381
  read : io=2048.0MB, bw=1961.8MB/s, iops=31386 , runt=  1044msec
    slat (usec): min=5 , max=37994 , avg=26.70, stdev=215.98
    clat (usec): min=0 , max=52 , avg= 0.69, stdev= 1.39
     lat (usec): min=6 , max=37997 , avg=27.77, stdev=216.04
    clat percentiles (usec):
     |  1.00th=[    0],  5.00th=[    0], 10.00th=[    0], 20.00th=[    0],
     | 30.00th=[    0], 40.00th=[    0], 50.00th=[    1], 60.00th=[    1],
     | 70.00th=[    1], 80.00th=[    1], 90.00th=[    1], 95.00th=[    1],
     | 99.00th=[    2], 99.50th=[    9], 99.90th=[   20], 99.95th=[   23],
     | 99.99th=[   32]
    bw (MB/s)  : min= 1745, max= 2284, per=100.00%, avg=2015.10, stdev=381.59
    lat (usec) : 2=98.46%, 4=0.99%, 10=0.10%, 20=0.30%, 50=0.13%
    lat (usec) : 100=0.01%
  cpu          : usr=6.14%, sys=76.32%, ctx=7542, majf=0, minf=37
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=32768/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
   READ: io=2048.0MB, aggrb=1961.8MB/s, minb=1961.8MB/s, maxb=1961.8MB/s, mint=1044msec, maxt=1044msec
Disk stats (read/write):
  vdb: ios=7664/0, merge=0/0, ticks=796/0, in_queue=788, util=63.26%
======================================================
== Previous results (with the VMs running)         ==
======================================================
    
    
  FS \ system          | host       guest
  ---------------------+--------------------
  host (on top of 4)   |87029KB/s  ?
  guest (8)            |71712KB/s  46717KB/s
  Results of "fio --size=2g --bs=64k --rw=write --ioengine=libaio --name=fio"
  FS \ system          |host     guest
  ---------------------+-------------------
  host (on top of 4)   |05.71s   ?
  guest (8)            |06.27s   35s
  Results of "time dd bs=1M count=256 if=/dev/zero of=ff.out conv=fdatasync" (duration)
  FS \ system          |host     guest
  ---------------------+-------------------
  host (on top of 4)   |45MB/s   ?
  guest (8)            |41MB/s   7MB/s
  Results of "time dd bs=1M count=256 if=/dev/zero of=ff.out conv=fdatasync" (throughput)

NCQ

cat /sys/block/sdX/device/queue_depth
If the value is greater than 1, NCQ is enabled.

enable via
echo 31 > /sys/block/sdX/device/queue_depth
disable via
echo 1 > /sys/block/sdX/device/queue_depth
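The check above can be wrapped in a small helper (ncq_status is a made-up name; the example reads a temp file in place of the real /sys path, which needs root):

```shell
# Report NCQ status given the path to a queue_depth file;
# a depth of 1 means NCQ is disabled.
ncq_status() {
    depth=$(cat "$1")
    if [ "$depth" -gt 1 ]; then
        echo "NCQ enabled (queue_depth=$depth)"
    else
        echo "NCQ disabled"
    fi
}

# Demo with a fake sysfs file; real use: ncq_status /sys/block/sda/device/queue_depth
tmp=$(mktemp)
echo 31 > "$tmp"
ncq_status "$tmp"    # NCQ enabled (queue_depth=31)
rm -f "$tmp"
```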

Tuning

#!/bin/sh

# http://chiliproject.tetaneutral.net/projects/tetaneutral/wiki/Cluster_Ganeti

KVM_DISKS="WDC_WD15EADS-00_WD-WCAVU0237611 WDC_WD15EADS-00_WD-WMAVU0297159 WDC_WD15EADS-00_WD-WMAVU0892520"

for diskname in $KVM_DISKS ; do
    # Resolve the stable by-id name to the current /dev/sdX device
    disk=$(basename $(readlink -e /dev/disk/by-id/scsi-SATA_$diskname 2>/dev/null))
    [ -z "$disk" -o ! -d "/sys/block/$disk" ] && continue
    echo deadline > /sys/block/${disk}/queue/scheduler
    # Aggressive deadline tunables: tiny batches and short expiry
    # times, favouring latency over throughput
    echo 1 > /sys/block/${disk}/queue/iosched/fifo_batch
    echo 0 > /sys/block/${disk}/queue/iosched/front_merges
    echo 2 > /sys/block/${disk}/queue/iosched/read_expire
    echo 2 > /sys/block/${disk}/queue/iosched/write_expire
    echo 1 > /sys/block/${disk}/queue/iosched/writes_starved
done
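To confirm the script took effect, the active scheduler is the bracketed entry in /sys/block/&lt;disk&gt;/queue/scheduler; a sketch of parsing it out (active_scheduler is a hypothetical helper, shown here on literal strings):

```shell
# Extract the bracketed (active) scheduler from a scheduler file's contents.
active_scheduler() {
    printf '%s\n' "$1" | sed 's/.*\[\([a-z]*\)\].*/\1/'
}

active_scheduler "noop deadline [cfq]"   # cfq
active_scheduler "noop [deadline] cfq"   # deadline
```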

Tests

  • test before (cfq)
root@graphite-4:~# cat /sys/block/sda/queue/scheduler 
noop deadline [cfq]
root@graphite-4:/tmp/foo# time tar xzf wheezy-x64-prod-1.7.tgz 

real    0m48.315s
user    0m44.711s
sys     0m11.737s
root@graphite-4:/tmp/foo# du -sh wheezy-x64-prod-1.7.tgz 
1.1G    wheezy-x64-prod-1.7.tgz
  • switching the scheduler
root@graphite-4:~# bash tunning.sh 
+ KVM_DISKS='wwn-0x50015178f36802eb wwn-0x50015178f368041d'
+ for diskname in '$KVM_DISKS'
+++ readlink -e /dev/disk/by-id/wwn-0x50015178f36802eb
++ basename /dev/sda
+ disk=sda
+ '[' -z sda -o '!' -d /sys/block/sda ']'
+ echo deadline
+ echo 1
+ echo 0
+ echo 2
+ echo 2
+ echo 1
+ for diskname in '$KVM_DISKS'
+++ readlink -e /dev/disk/by-id/wwn-0x50015178f368041d
++ basename /dev/sdb
+ disk=sdb
+ '[' -z sdb -o '!' -d /sys/block/sdb ']'
+ echo deadline
+ echo 1
+ echo 0
+ echo 2
+ echo 2
+ echo 1
root@graphite-4:~# cat /sys/block/sda/queue/scheduler 
noop [deadline] cfq 
  • deadline tests
root@graphite-4:/tmp/foo2# sync
root@graphite-4:/tmp/foo2# time tar xzf wheezy-x64-prod-1.7.tgz 

real    0m47.260s
user    0m45.443s
sys     0m12.101s
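The gain over cfq is marginal; computing the relative improvement from the two real runtimes above:

```shell
# cfq: 48.315 s, deadline: 47.260 s on the same "tar xzf" test.
awk 'BEGIN { printf "%.1f%% faster\n", (48.315 - 47.260) / 48.315 * 100 }'   # 2.2% faster
```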

Scheduler benchmark

IBM's KVM recommendations

Source :

Dump : File:liaatbestpractices_ibm_kvm.pdf