From fccb2ff35b0a7dc806d5681616a528c2d81492de Mon Sep 17 00:00:00 2001
From: Xingyu Wang
Date: Tue, 6 Aug 2019 23:47:21 +0800
Subject: [PATCH] TSL

---
 ...90510 Check storage performance with dd.md | 124 ++++++++----------
 1 file changed, 56 insertions(+), 68 deletions(-)
 rename {sources => translated}/tech/20190510 Check storage performance with dd.md (55%)

diff --git a/sources/tech/20190510 Check storage performance with dd.md b/translated/tech/20190510 Check storage performance with dd.md
similarity index 55%
rename from sources/tech/20190510 Check storage performance with dd.md
rename to translated/tech/20190510 Check storage performance with dd.md
index 78d7eab391..d1479023fc 100644
--- a/sources/tech/20190510 Check storage performance with dd.md
+++ b/translated/tech/20190510 Check storage performance with dd.md
@@ -7,68 +7,60 @@
 [#]: via: (https://fedoramagazine.org/check-storage-performance-with-dd/)
 [#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)

-Check storage performance with dd
+使用 dd 检查存储性能
 ======

 ![][1]

-This article includes some example commands to show you how to get a _rough_ estimate of hard drive and RAID array performance using the _dd_ command. Accurate measurements would have to take into account things like [write amplification][2] and [system call overhead][3], which this guide does not. For a tool that might give more accurate results, you might want to consider using [hdparm][4].
+本文包含一些示例命令,向你展示如何使用 `dd` 命令*粗略*估计硬盘驱动器和 RAID 阵列的性能。准确的测量必须考虑诸如[写入放大][2]和[系统调用开销][3]之类的事情,本指南不会考虑这些。对于可能提供更准确结果的工具,你可能需要考虑使用 [hdparm][4]。

-To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. **WARNING** : The _write_ tests will destroy any data on the block devices against which they are run. **Do not run them against any device that contains data you want to keep!**
+为了排除与文件系统相关的性能问题,这些示例显示了如何通过直接读写块设备,在块级别测试驱动器和阵列的性能。**警告**:*写入*测试将会销毁它所针对的块设备上的所有数据。**不要对包含你想要保留的数据的任何设备运行这些测试!**

-### Four tests
+### 四个测试

-Below are four example dd commands that can be used to test the performance of a block device:
+下面是四个示例 `dd` 命令,可用于测试块设备的性能:

- 1. One process reading from $MY_DISK:
+1、从 `$MY_DISK` 读取的一个进程:

```
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
```

- 2. One process writing to $MY_DISK:
+2、写入到 `$MY_DISK` 的一个进程:

```
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
```

- 3. Two processes reading concurrently from $MY_DISK:
+3、从 `$MY_DISK` 并发读取的两个进程:

```
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
```

- 4. Two processes writing concurrently to $MY_DISK:
+4、并发写入到 `$MY_DISK` 的两个进程:

```
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
```

+- 在分别执行读取和写入测试时,`iflag=nocache` 和 `oflag=direct` 参数非常重要,因为没有它们,`dd` 命令有时显示的是与[内存][5]之间传输数据的速度,而不是与硬盘之间传输数据的速度。
+- `bs` 和 `count` 参数的值有些随意,我选择的值应足够大,以便在大多数情况下为当前硬件提供合适的平均值。
+- `null` 和 `zero` 设备在读写测试中分别用于目标和源,因为它们足够快,不会成为性能测试中的限制因素。
+- 并发读写测试中第二个 `dd` 命令的 `skip=200` 参数是为了确保 `dd` 的两个副本在硬盘驱动器的不同区域上运行。
+### 16 个示例
+下面的演示显示了分别针对以下四个块设备运行上述四个测试的结果:

-– The _iflag=nocache_ and _oflag=direct_ parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from [RAM][5] rather than the hard drive.

+1. `MY_DISK=/dev/sda2`(用在示例 1-X 中)
+2. `MY_DISK=/dev/sdb2`(用在示例 2-X 中)
+3. `MY_DISK=/dev/md/stripped`(用在示例 3-X 中)
+4. `MY_DISK=/dev/md/mirrored`(用在示例 4-X 中)

-– The values for the _bs_ and _count_ parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.

+本指南末尾提供了在 PC 上运行这些测试的视频演示。

-– The _null_ and _zero_ devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.

-– The _skip=200_ parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive.

-### 16 examples

-Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:

- 1. MY_DISK=/dev/sda2 (used in examples 1-X)
- 2. MY_DISK=/dev/sdb2 (used in examples 2-X)
- 3. MY_DISK=/dev/md/stripped (used in examples 3-X)
- 4. MY_DISK=/dev/md/mirrored (used in examples 4-X)

-A video demonstration of the these tests being run on a PC is provided at the end of this guide.

-Begin by putting your computer into _rescue_ mode to reduce the chances that disk I/O from background services might randomly affect your test results. **WARNING** : This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your _root_ password to get into rescue mode. The _passwd_ command, when run as the root user, will prompt you to (re)set your root account password.
+首先将计算机置于*救援*模式,以减少后台服务的磁盘 I/O 随机影响测试结果的可能性。**警告**:这将关闭所有非必要的程序和服务。在运行这些命令之前,请务必保存你的工作。你需要知道 `root` 密码才能进入救援模式。`passwd` 命令以 `root` 用户身份运行时,将提示你(重新)设置 `root` 帐户密码。 ``` $ sudo -i @@ -77,14 +69,14 @@ $ sudo -i # systemctl rescue ``` -You might also want to temporarily disable logging to disk: +你可能还想暂时禁止将日志记录到磁盘: ``` # sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf # systemctl restart systemd-journald.service ``` -If you have a swap device, it can be temporarily disabled and used to perform the following tests: +如果你有交换设备,可以暂时禁用它并用于执行以下测试: ``` # swapoff -a @@ -93,7 +85,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th # mdadm --zero-superblock $MY_DEVS ``` -#### Example 1-1 (reading from sda) +#### 示例 1-1 (从 sda 读取) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) @@ -106,7 +98,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s ``` -#### Example 1-2 (writing to sda) +#### 示例 1-2 (写入到 sda) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) @@ -119,7 +111,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s ``` -#### Example 1-3 (reading concurrently from sda) +#### 示例 1-3 (从 sda 并发读取) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) @@ -135,7 +127,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s ``` -#### Example 1-4 (writing concurrently to sda) +#### 示例 1-4 (并发写入到 sda) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) @@ -150,7 +142,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s ``` -#### Example 2-1 (reading from sdb) +#### 示例 2-1 (从 sdb 读取) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) @@ -163,7 +155,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s ``` -#### Example 2-2 (writing to sdb) +#### 示例 2-2 (写入到 sdb) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) @@ -176,7 +168,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s ``` -#### Example 2-3 (reading concurrently from sdb) +#### 示例 2-3 (从 sdb 并发读取) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) @@ -192,7 +184,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s ``` -#### Example 2-4 (writing concurrently to sdb) +#### 示例 2-4 (并发写入到 sdb) ``` # MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) @@ -208,7 +200,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s ``` -#### Example 3-1 (reading from RAID0) +#### 示例 3-1 (从 RAID0 读取) ``` # mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS @@ -222,7 +214,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s ``` -#### Example 3-2 (writing to RAID0) +#### 示例 3-2 (写入到 RAID0) ``` # MY_DISK=/dev/md/stripped @@ -235,7 +227,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) 
copied, 0.823648 s, 255 MB/s ``` -#### Example 3-3 (reading concurrently from RAID0) +#### 示例 3-3 (从 RAID0 并发读取) ``` # MY_DISK=/dev/md/stripped @@ -251,7 +243,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s ``` -#### Example 3-4 (writing concurrently to RAID0) +#### 示例 3-4 (并发写入到 RAID0) ``` # MY_DISK=/dev/md/stripped @@ -267,7 +259,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s ``` -#### Example 4-1 (reading from RAID1) +#### 示例 4-1 (从 RAID1 读取) ``` # mdadm --stop /dev/md/stripped @@ -282,7 +274,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s ``` -#### Example 4-2 (writing to RAID1) +#### 示例 4-2 (写入到 RAID1) ``` # MY_DISK=/dev/md/mirrored @@ -295,7 +287,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s ``` -#### Example 4-3 (reading concurrently from RAID1) +#### 示例 4-3 (从 RAID1 并发读取) ``` # MY_DISK=/dev/md/mirrored @@ -311,7 +303,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s ``` -#### Example 4-4 (writing concurrently to RAID1) +#### 示例 4-4 (并发写入到 RAID1) ``` # MY_DISK=/dev/md/mirrored @@ -327,7 +319,7 @@ If you have a swap device, it can be temporarily disabled and used to perform th 209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s ``` -#### Restore your swap device and journald configuration +#### 恢复交换设备和日志配置 ``` # mdadm --stop /dev/md/stripped /dev/md/mirrored @@ -339,23 +331,23 @@ If you have a swap device, it can be temporarily disabled and used to perform th # reboot ``` -### Interpreting the results +### 结果解读 -Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s. +示例 1-1、1-2、2-1 和 2-2 表明我的每个驱动器以大约 125 MB/s 的速度读写。 -Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets at about half the drive’s bandwidth (60 MB/s). +示例 1-3、1-4、2-3 和 2-4 表明,当在同一驱动器上并行完成两次读取或写入时,每个进程的驱动器带宽大约为一半(60 MB/s)。 -The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data stripping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but it would be thrice as likely to suffer a [catastrophic failure][6]. +3-X 示例显示了将两个驱动器放在 RAID0(数据条带化)阵列中的性能优势。在所有情况下,这些数字表明 RAID0 阵列的执行速度是任何一个驱动器能够独立提供的速度的两倍。权衡的是,丢失所有内容的可能性也是两倍,因为每个驱动器只包含一半的数据。一个三个驱动器阵列的执行速度是单个驱动器的三倍(所有驱动器规格都相同),但遭受[灾难性故障][6]的可能也是三倍。 -The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently. 
For example, if a process tries to access a large number of files in the background while you are trying to use a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost [if a drive fails][7].
+4-X 示例显示,RAID1(数据镜像)阵列的性能与单个磁盘类似,只有多个进程并发读取的情况例外(示例 4-3)。在多个进程并发读取时,RAID1 阵列的性能与 RAID0 阵列相近。这意味着 RAID1 能带来性能优势,但仅限于多个进程并发读取的场景,例如当你在前台使用 Web 浏览器或电子邮件客户端,而某个进程同时在后台访问大量文件的时候。RAID1 的主要好处是,[如果驱动器出现故障][7],你的数据不太可能丢失。

-### Video demo
+### 视频演示

-Testing storage throughput using dd
+[使用 dd 测试存储吞吐量](https://youtu.be/wbLX239-ysQ)

-### Troubleshooting
+### 故障排除

-If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology ([SMART][8]). If your drive supports it, the _smartctl_ command can be used to query your hard drive for its internal statistics:
+如果上述测试的结果不如预期,则你的驱动器可能已经损坏或正在失效。大多数现代硬盘都内置了自我监控、分析和报告技术([SMART][8])。如果你的驱动器支持它,可以用 `smartctl` 命令查询硬盘的内部统计信息:

```
# smartctl --health /dev/sda
@@ -363,21 +355,21 @@ If the above tests aren’t performing as you expect, you might have a bad or fa
# smartctl -x /dev/sda
```

-Another way that you might be able to tune your PC for better performance is by changing your [I/O scheduler][9]. Linux systems support several I/O schedulers and the current default for Fedora systems is the [multiqueue][10] variant of the [deadline][11] scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.
+另一种可能让 PC 获得更好性能的调整方法是更改 [I/O 调度程序][9]。Linux 系统支持多个 I/O 调度程序,Fedora 系统当前的默认调度程序是 [deadline][11] 调度程序的 [multiqueue][10] 变体。这个默认调度程序的整体性能非常好,并且在具有许多处理器和大型磁盘阵列的大型服务器上扩展性极为出色。不过,有一些更专门的调度程序在某些条件下可能表现更好。

-To view which I/O scheduler your drives are using, issue the following command:
+要查看驱动器正在使用的 I/O 调度程序,请运行以下命令:

```
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
```

-You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block//queue/scheduler file:
+你可以通过将所需调度程序的名称写入 `/sys/block//queue/scheduler` 文件来更改驱动器的调度程序:

```
# echo bfq > /sys/block/sda/queue/scheduler
```

-You can make your changes permanent by creating a [udev rule][12] for your drive. The following example shows how to create a udev rule that will set all [rotational drives][13] to use the [BFQ][14] I/O scheduler:
+你可以通过为驱动器创建 [udev 规则][12]来让更改永久生效。以下示例显示了如何创建一条将所有[旋转式驱动器][13]设置为使用 [BFQ][14] I/O 调度程序的 udev 规则:

```
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
@@ -385,7 +377,7 @@ ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue
END
```

-Here is another example that sets all [solid-state drives][15] to use the [NOOP][16] I/O scheduler:
+这是另一个将所有[固态驱动器][15]设置为使用 [NOOP][16] I/O 调度程序的示例:

```
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
@@ -393,11 +385,7 @@ ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue
END
```

-Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.
-
-* * *
-
-_Photo by _[ _James Donovan_][17]_ on _[_Unsplash_][18]_._
+更改 I/O 调度程序不会影响设备的原始吞吐量,但通过优先把带宽分配给前台任务而不是后台任务,或者消除不必要的块重新排序,可能会让你的 PC 的响应显得更快。

--------------------------------------------------------------------------------

@@ -405,7 +393,7 @@ via: https://fedoramagazine.org/check-storage-performance-with-dd/

作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
+译者:[wxy](https://github.com/wxy)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
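
下面是一个补充的示例脚本草稿(非原文内容),把正文中的四个 `dd` 测试封装成一个可以对不同块设备重复运行的 shell 函数,便于像 1-X、2-X 这样的系列那样逐一对比结果。其中的函数名 `run_dd_tests` 是自拟的,并发写入测试里的 `seek=200` 也是在原文 `skip=200` 写法之外补充的一种变体,仅作思路示意;运行前请像正文警告的那样,确认目标设备上没有需要保留的数据,并以 root 身份执行。

```
#!/usr/bin/env bash
# 假设性示例:把正文中的四个 dd 测试包装成一个函数,按顺序对指定的块设备运行。
# 警告:写入测试会破坏目标块设备上的数据,只能用于不含重要数据的设备,需以 root 运行。

run_dd_tests() {
    local MY_DISK="$1"

    echo "== 单进程读取 $MY_DISK =="
    dd if="$MY_DISK" of=/dev/null bs=1MiB count=200 iflag=nocache

    echo "== 单进程写入 $MY_DISK =="
    dd if=/dev/zero of="$MY_DISK" bs=1MiB count=200 oflag=direct

    echo "== 两个进程并发读取 $MY_DISK =="
    dd if="$MY_DISK" of=/dev/null bs=1MiB count=200 iflag=nocache &
    dd if="$MY_DISK" of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &
    wait    # 等两个后台 dd 都结束再继续

    echo "== 两个进程并发写入 $MY_DISK =="
    dd if=/dev/zero of="$MY_DISK" bs=1MiB count=200 oflag=direct &
    # 这里用 seek=200(作用于输出设备)让第二个写入进程落在设备的另一段区域;
    # 原文的写法是 skip=200(作用于输入端 /dev/zero),两者测到的都是并发写入速度。
    dd if=/dev/zero of="$MY_DISK" bs=1MiB count=200 oflag=direct seek=200 &
    wait
}

# 用法示例(/dev/sdX2 只是占位符,请换成确实可以被覆盖的测试分区):
# run_dd_tests /dev/sdX2
```

这里沿用了正文的 `bs=1MiB count=200` 取值;如果想测试更大的区域,把 `count` 调大,并同步调整第二个进程的 `skip`/`seek` 偏移即可。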