Here we list tools for analyzing disk speed, along with other kinds of tests.

====== LBA ======

Logical Block Address: the linear addressing scheme used to locate blocks on a disk.

====== Sectors ======

http://www.ibm.com/developerworks/library/l-4kb-sector-disks/

  * Sector: a disk is divided into sectors.
  * Traditionally each sector was 512 bytes, but since 2010 disks with 4096 bytes per sector have become widespread.
  * To maintain compatibility, each physical sector (4096 bytes) is divided into 8 logical sectors (512 bytes each).
  * To determine the physical and logical sector sizes:
<code>
sudo cat /sys/block/sda/queue/physical_block_size
4096
sudo cat /sys/block/sda/queue/logical_block_size
512
</code>
Another method:
<code>
sudo fdisk -l | egrep "Disk|Sector" | grep -v "identifier"
</code>
Output:
<code>
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/md1 doesn't contain a valid partition table
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Disk /dev/md0: 983.2 GB, 983214915584 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Disk /dev/md1: 16.8 GB, 16844193792 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
</code>
In this case sda is a 4096-byte-sector disk; older disks (like sdc here) use 512-byte sectors.

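The relation between the two sizes can be checked with simple arithmetic; a minimal sketch with hardcoded example values (on a real system, read them from the sysfs files shown above):

```shell
# Logical sectors per physical sector; on an Advanced Format disk
# this is 4096 / 512 = 8. Example values; in practice read them from
# /sys/block/sda/queue/physical_block_size and logical_block_size.
phys=4096
log=512
echo $((phys / log))
```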
===== Alignment =====

http://www.ibm.com/developerworks/library/l-4kb-sector-disks/

https://ata.wiki.kernel.org/index.php/ATA_4_KiB_sector_issues

http://people.redhat.com/msnitzer/docs/io-limits.txt

TODO: verify whether it is true that misaligned partitions hurt performance

  * It **ONLY** has a negative impact (and "sudo fdisk -l" reports it) **when the physical sector size differs from the logical one**, for example on a Western Digital //Advanced Format// disk.
  * **Every partition must start on a sector number divisible by 8** (the 4096-byte physical to 512-byte logical ratio).
  * The impact is **only negative for write operations**.

With fdisk, partitions can be aligned by launching it as follows when partitioning:

  sudo fdisk -H 224 -S 56 /dev/sda
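Alignment can also be verified after the fact by checking each partition's start sector against the divisible-by-8 rule above; a minimal sketch with a hardcoded example value (on a real system, read the start sector from sysfs, e.g. /sys/block/sda/sda1/start):

```shell
# A partition is aligned for 4096-byte physical sectors when its start
# sector (counted in 512-byte logical sectors) is divisible by 8.
# Hardcoded example; normally: start=$(cat /sys/block/sda/sda1/start)
start=2048
if [ $((start % 8)) -eq 0 ]; then
  echo "aligned"
else
  echo "misaligned"
fi
```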
====== Obtaining IOPS ======

| Average latency | ms | 4.16 | The time it takes for the sector of the disk being accessed to rotate into position under a read/write head. |
  
  IOPS = 1/(((Average seek, read / 1000) + (Average seek, write / 1000))/2 + (Average latency / 1000))
  
In my case:
  
<code>
IOPS = 1/(((8.5 / 1000) + (9.5 / 1000))/2 + (4.16 / 1000))
IOPS = 1/((0.0085 + 0.0095)/2 + 0.00416)
IOPS = 1/(0.009 + 0.00416)
IOPS = 1/(0.01316)
IOPS = 75.99
</code>
  
That is, the disk can perform **76 read or write operations per second**.
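The same calculation can be scripted; a minimal sketch using awk, with the example seek and latency figures from the table above:

```shell
# Theoretical IOPS from average read seek (r), average write seek (w)
# and average rotational latency (l), all in milliseconds.
awk -v r=8.5 -v w=9.5 -v l=4.16 \
    'BEGIN { printf "%.2f\n", 1 / (((r/1000) + (w/1000))/2 + l/1000) }'
```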
  
===== IOPS penalty in RAID =====

The following table shows the number of physical IOPS a RAID array performs per logical operation, depending on the RAID level and the type of operation.

For example, writing to a RAID 1 requires 2 IOPS, so the IOPS figure calculated in the previous step has to be divided by 2 for writes.

^ RAID level ^ Read ^ Write ^
| RAID 0 | 1 | 1 |
| RAID 1 (and 10) | 1 | 2 |
| RAID 5 | 1 | 4 |
| RAID 6 | 1 | 6 |
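Combining the penalty table with the per-disk IOPS figure, a commonly used sizing formula is: effective IOPS = (disks × disk IOPS) / (read fraction + write fraction × write penalty). A minimal sketch with assumed example numbers (4 disks of 76 IOPS each in RAID 1, a 60/40 read/write mix), not measurements:

```shell
# Effective array IOPS under a mixed workload, using the common
# sizing formula: raw_iops / (read_frac + write_frac * penalty).
# All input values are illustrative assumptions.
awk -v disks=4 -v iops=76 -v rf=0.6 -v wf=0.4 -v pen=2 \
    'BEGIN { printf "%.1f\n", (disks * iops) / (rf + wf * pen) }'
```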
  
====== Performance tests ======
  
===== iostat =====

http://sebastien.godard.pagesperso-orange.fr/

  * Install on Debian:
sda               1.36     4.00    8.62    3.42   237.55   125.16    60.27     0.25   20.60   13.77   37.80   4.49   5.40
</code>

^ Field ^ Explanation ^
| %user | Show the percentage of CPU utilization that occurred while executing at the user level (application). |
| %nice | Show the percentage of CPU utilization that occurred while executing at the user level with nice priority. |
| %system | Show the percentage of CPU utilization that occurred while executing at the system level (kernel). |
| %iowait | Show the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request. |
| %steal | Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor. |
| %idle | Show the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request. |
| Device: | This column gives the device (or partition) name as listed in the /dev directory. |
| tps | Indicate the number of transfers per second that were issued to the device. A transfer is an I/O request to the device. Multiple logical requests can be combined into a single I/O request to the device. A transfer is of indeterminate size. |
| Blk_read/s (kB_read/s, MB_read/s) | Indicate the amount of data read from the device expressed in a number of blocks (kilobytes, megabytes) per second. Blocks are equivalent to sectors and therefore have a size of 512 bytes. |
| Blk_wrtn/s (kB_wrtn/s, MB_wrtn/s) | Indicate the amount of data written to the device expressed in a number of blocks (kilobytes, megabytes) per second. |
| Blk_read (kB_read, MB_read) | The total number of blocks (kilobytes, megabytes) read. |
| Blk_wrtn (kB_wrtn, MB_wrtn) | The total number of blocks (kilobytes, megabytes) written. |
| rrqm/s | The number of read requests merged per second that were queued to the device. |
| wrqm/s | The number of write requests merged per second that were queued to the device. |
| r/s | The number (after merges) of read requests completed per second for the device. |
| w/s | The number (after merges) of write requests completed per second for the device. |
| rsec/s (rkB/s, rMB/s) | The number of sectors (kilobytes, megabytes) read from the device per second. |
| wsec/s (wkB/s, wMB/s) | The number of sectors (kilobytes, megabytes) written to the device per second. |
| avgrq-sz | The average size (in sectors) of the requests that were issued to the device. |
| avgqu-sz | The average queue length of the requests that were issued to the device. |
| await | The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. |
| r_await | The average time (in milliseconds) for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. |
| w_await | The average time (in milliseconds) for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. |
| svctm | The average service time (in milliseconds) for I/O requests that were issued to the device. Warning! Do not trust this field any more. This field will be removed in a future sysstat version. |
| %util | Percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100% for devices serving requests serially. But for devices serving requests in parallel, such as RAID arrays and modern SSDs, this number does not reflect their performance limits. |

avgrq-sz: The average size (in sectors) of the requests that were issued to the device.
In this case it is **60.27**.
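Individual columns can be pulled out of the iostat -x device lines with awk; a minimal sketch against the sample line shown above (avgrq-sz is field 8 and %util is field 14 in this extended-statistics format, with the device name as field 1):

```shell
# Extract avgrq-sz (field 8) and %util (field 14) from an iostat -x
# device line. The line is hardcoded here; live use would pipe
# iostat output through the same awk program, e.g. matching /^sda/.
echo "sda 1.36 4.00 8.62 3.42 237.55 125.16 60.27 0.25 20.60 13.77 37.80 4.49 5.40" |
  awk '{ print "avgrq-sz=" $8, "%util=" $14 }'
```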
sudo cat /sys/block/vda/queue/physical_block_size
</code>
  
informatica/linux/discos/benchmark.1390217532.txt.gz · Last modified: 2015/04/13 20:19 (external edit)