Western Digital Elements 1.5 TByte

Published by Marc Büchel on 25.07.10

How do we test?

Test environment


Model              Western Digital Elements 1.5 TByte
Motherboard        Gigabyte X38-DQ6
Chipset            Intel X38 1066 MHz
CPU                Intel Xeon 3060 2.4 GHz
Memory             Corsair Dominator 9136 DDR2 2 GByte
Graphics card      Gigabyte GeForce 7200
Storage (system)   Maxtor 160 GByte
Operating system   Ubuntu 8.04.1 Hardy, Kernel 2.6.24-19-server
Filesystem         XFS

We think everybody reading this article can imagine the following scenario: you just bought a hard drive which, according to the spec sheet, should transfer 120 MByte/s reading and writing. The reviews reported an astonishing 110 MByte/s, but after you put the drive into your system it feels much slower. The whole story gets even worse when you start a benchmark that does random reads/writes of 4 KByte blocks: there you only get two to three MByte/s.

Because of this we don't want to publish screenshots of standard programs like HD-Tach, HD-Tune and the like. Instead, we want our tests to be

  • reproducible,
  • accurate,
  • meaningful and
  • varied.

We test with caches and NCQ (Native Command Queueing) enabled, because they are also enabled in daily use. The amount of data tested is, however, always at least twice the size of the main memory, so the caches cannot mask the drive's real performance.
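For reference, the state of the write cache and NCQ can be checked under Linux roughly as follows; this is a sketch rather than our exact procedure, and the device name /dev/sdb is an assumption:

  # Query the drive's write-cache setting (hdparm package)
  sudo hdparm -W /dev/sdb
  # NCQ is in use when the reported queue depth is greater than 1
  cat /sys/block/sdb/device/queue_depth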

We noticed that the measuring error stays consistently within ±2%, so we state it only here instead of repeating it with every result.

Additionally we evaluate the S.M.A.R.T. data to check whether the drive already shows any errors.
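Under Linux the S.M.A.R.T. data can be read with smartctl from the smartmontools package; a minimal sketch, again assuming the drive shows up as /dev/sdb:

  # Print identity, health status and the full S.M.A.R.T. attribute table
  sudo smartctl -a /dev/sdb
  # Or just the overall health self-assessment
  sudo smartctl -H /dev/sdb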

The following table gives you a brief overview of the points we pay particular attention to.

Sequential read/write tests
  • Are the values within the specifications?
  • What influence does the block size have?
  • What influence does the filesystem's block size have?

Random read/write tests
  • How severely do random accesses reduce the theoretically possible (sequential) data rate?
  • What influence does the block size have?
  • What influence does the filesystem's block size have?
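Since several of these questions concern the filesystem's block size, it is worth noting that for XFS the block size is fixed when the filesystem is created. A sketch, where the partition /dev/sdb1 and the chosen sizes are assumptions:

  # Create an XFS filesystem with a 4 KByte block size (the usual default)
  sudo mkfs.xfs -b size=4096 /dev/sdb1
  # The same with a 1 KByte block size, e.g. for comparison runs (-f overwrites)
  sudo mkfs.xfs -f -b size=1024 /dev/sdb1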
   


iozone3

iozone3 is a benchmark suite for storage solutions that runs natively under Linux.

With it we test the throughput at different block sizes using the following commands:

  • iozone -Rb test16k.xls -i0 -i1 -i2 -+n -r 16k -s4g -t2
  • iozone -Rb test32k.xls -i0 -i1 -i2 -+n -r 32k -s4g -t2
  • iozone -Rb test64k.xls -i0 -i1 -i2 -+n -r 64k -s4g -t2
  • iozone -Rb test128k.xls -i0 -i1 -i2 -+n -r 128k -s4g -t2
  • iozone -Rb test256k.xls -i0 -i1 -i2 -+n -r 256k -s4g -t2
  • iozone -Rb test512k.xls -i0 -i1 -i2 -+n -r 512k -s4g -t2
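Since the six invocations differ only in the record size passed via -r, they can just as well be run from a small shell loop; a sketch:

  #!/bin/sh
  # -i0: write/rewrite, -i1: read/reread, -i2: random read/write
  # -+n: no retests, -s4g: 4 GByte file per thread, -t2: two threads
  # -Rb: write the results of each record size as an Excel report
  for bs in 16k 32k 64k 128k 256k 512k; do
      iozone -Rb "test${bs}.xls" -i0 -i1 -i2 -+n -r "$bs" -s4g -t2
  done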

Why do we test different block sizes?

It is important to reproduce scenarios of daily usage. To make a meaningful statement about a product, certain parameters have to be varied during the test; in our case this parameter is the block size, which defines how many KBytes are written to or read from the drive in a single transaction.

With this method one can test the reading and writing of both small and big files. In a normal personal computer environment you usually don't find many files smaller than 16 KByte, whereas the relative share of small files is much bigger on a mail or database server. Tests with small block sizes are therefore of particular interest for database-driven applications.
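A rough way to check the small-file claim on your own machine is to count them; a sketch for a home directory, where the path and the 16 KByte threshold are merely illustrative:

  # Count regular files smaller than 16 KByte ...
  find ~ -type f -size -16k | wc -l
  # ... and compare with the total number of regular files
  find ~ -type f | wc -l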

In bigger RAID arrays the hard disk cache is usually disabled and the RAID controller takes over the job of caching. In precisely such setups hard drives need to be very fast when reading and writing small amounts of data; sequential throughput is of little interest there.
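For completeness: under Linux the on-disk write cache can be switched off with hdparm, which mimics what a RAID controller typically arranges for its member drives; the device name is again an assumption:

  # Disable the drive's write cache; -W1 turns it back on
  sudo hdparm -W0 /dev/sdb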



Page 1 - Introduction
Page 2 - Specifications
Page 3 - Preview
Page 4 - How do we test?
Page 5 - Sequential read/write
Page 6 - Random read/write
Page 7 - Conclusion


