
SSD latency is higher than expected due to missing 12Gb SAS cables

Applies to

  • ONTAP 9
  • Data ONTAP 8
  • AFF
  • 12Gb SAS connectors

Issue

  • In some situations, latency at the disk layer of an All Flash FAS (AFF) storage controller can be higher than expected.
  • The higher-than-expected latency is visible in the output of the statit command or of the qos statistics volume latency show command.

Example:

  • Output of the qos statistics volume latency show command, showing latency at the disk layer:

cluster::> qos statistics volume latency show
Workload            ID  Latency    Network  Cluster       Data     Disk  Qos Max    Qos Min      NVRAM
--------------- ------ --------   -------- --------   -------- -------- -------- ---------- ----------
-total-                 10.35ms     1.35ms      0ms        0us     9ms      0ms        0ms        0ms
vs1vol0            111  17.23ms        0us      0ms   603.00us  16.63ms      0ms        0ms        0ms
vol1              1234  17.76ms        0ms      0ms   150.00us  17.61ms      0ms        0ms        0ms
vol2               999   4.24ms        0us      0ms   190.00us   4.05ms      0ms        0ms        0ms
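
In the output above, the Disk column accounts for nearly all of the total latency on vs1vol0 and vol1, which points the investigation at the disk layer rather than at the network or cluster interconnect. To confirm that the disk latency persists rather than being a momentary spike, the command can be repeated over several samples; a minimal sketch, assuming ONTAP 9 syntax (the -iterations and -rows values here are illustrative):

cluster::> qos statistics volume latency show -iterations 100 -rows 3

The diagnostic session below then checks whether I/O is queuing at individual disks.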

Cluster::> set -privilege diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

Cluster::*> statistics start -object disk -counter "io_queued|io_pending"
Statistics collection is being started for sample-id: sample_148

Cluster::*> statistics show -filter "io_queued>10|io_pending>10"

Object: disk
Instance: 0d.23.13
Start-time: 12/5/2022 16:48:26
End-time: 12/5/2022 16:51:58
Elapsed-time: 212s
Scope: node1
Number of Constituents: 1 (complete_aggregation)
    Counter                                                     Value
    -------------------------------- --------------------------------
    io_queued                                                      19
    io_pending                                                       8
...

Cluster::*> node run -node node1 -command statit -b
Cluster::*> node run -node node1 -command statit -e

Disk Statistics (per second)
        ut% is the percent of time the disk was busy.
        xfers is the number of data-transfer commands issued per second.
        xfers = ureads + writes + cpreads + greads + gwrites
        chain is the average number of 4K blocks per command.
        usecs is the average disk round-trip time per 4K block.

disk             ut%   xfers  ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs gwrites--chain-usecs
/data_aggr1/plex0/rg0:
0c.01.2           80 2510.99  1525.62  5.95  763  389.58 11.11 2682  595.79  5.22 1318    0.00  ....     .    0.00  ....     .
0c.01.3           81 2502.32  1510.75  5.96  758  393.43 11.27 2706  598.14  5.15 1348    0.00  ....     .    0.00  ....     .
0c.01.4           81 2518.91  1528.01  5.93  762  392.18 11.45 2661  598.72  5.12 1364    0.00  ....     .    0.00  ....     .
...
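
Because the cause described in this article is missing 12Gb SAS cabling, a quick follow-up check is the negotiated speed of each SAS port. A sketch of such a check, assuming ONTAP 9 (node names, ports, and speeds below are illustrative, not taken from the affected system); a port on an AFF shelf path reporting 6 Gb/s instead of 12 Gb/s would be consistent with this issue:

cluster::> storage port show
                                  Speed
Node           Port Type  Mode    (Gb/s) State    Status
-------------- ---- ----- ------- ------ -------- -------
node1          0a   SAS   storage 6      enabled  online
node1          0d   SAS   storage 6      enabled  online
node2          0a   SAS   storage 6      enabled  online
node2          0d   SAS   storage 6      enabled  online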

 


NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.