Intel X25-M Mainstream SATA SSD

Published by Mathias Seiler on 08.09.08

How do we test?

Test environment

Readers who aren't interested in test procedures can skip this and the following page and head directly to the test results.




Model: Intel® X25-M Mainstream SATA Solid-State Drive, 80 GB (74 GiB effective)
Motherboard: Gigabyte X38-DQ6
Chipset: Intel X38, 1066 MHz
CPU: Intel Xeon 3060, 2.4 GHz
Memory: Corsair Dominator 9136 DDR2, 2 GByte
Graphics card: Gigabyte GeForce 7200
Storage (system): Maxtor, 160 GByte
Operating systems: Ubuntu 8.04.1 Hardy (Kernel 2.6.24-19-server), Microsoft Windows XP (SP3)
Filesystem: XFS
Database management system: PostgreSQL, version 8.3.3

We think everybody reading this article can imagine the following scenario: You just bought a hard drive which, according to the spec sheet, should transfer 120 MByte/s reading and writing. In the reviews you read about astonishing 110 MByte/s, but after you put the drive into your system it feels much slower. The whole story gets even worse when you start a benchmark that does random read/write of 4 KByte blocks. There you only get two to three MByte/s.

This is why we don't just publish screenshots of standard programs like HD-Tach, HD-Tune and the like. We want our tests to be

  • reproducible,
  • accurate,
  • meaningful and
  • varied

We test with activated caches and NCQ (Native Command Queueing) because they are also active in daily use. The amount of data tested is, however, always at least twice the size of the installed memory.
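As a side note, and not part of the published test procedure: under Linux the write cache and the NCQ queue depth of a drive can be checked roughly as follows (the device name /dev/sdb is our assumption):

hdparm -W /dev/sdb                       # show whether the drive's write cache is enabled
cat /sys/block/sdb/device/queue_depth    # show the queue depth the kernel uses for NCQ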

We noticed that the measurement error stays consistently within ±2%, so we mention it only here.

Databases fundamentally depend on the storage system. A modern DBMS is optimized for hard disks, not for SSDs. If you choose your storage system correctly, you can boost the performance of your database on a conventional hard disk significantly. It is therefore interesting to see how an SSD performs under the same conditions. We use PostgreSQL version 8.3.3 with the script pgbench.

Additionally we evaluate the S.M.A.R.T. data to check whether any errors have already occurred.
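One common way to read out the S.M.A.R.T. data under Linux is smartctl from the smartmontools package; the call below is only a sketch of how this can be done, with the device name /dev/sdb being our assumption:

smartctl -a /dev/sdb    # print identity, health status and the full S.M.A.R.T. attribute table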

The following table gives you a brief overview of the points we focus on.

Test / Observations

Sequential Read/Write Tests
  • Are the values within the specifications?
  • What influence does the block size have?
  • What influence does the filesystem's block size have?
Random Read/Write Tests
  • How severe is the impact on the theoretically possible (sequential) data rate?
  • What influence does the block size have?
  • What influence does the block size have on the filesystem?
Database tests
  • How many transactions per second can the database deliver?
  • Does the speed stay constant when there are a lot of simultaneous requests?
  • What is the CPU load?


Why Linux and XFS?

There are several reasons why we use an operating system based on a Linux kernel instead of a fresh Windows Vista/XP installation with all service packs.

  • The filesystem XFS offers a flexibility you can't get from NTFS.
  • The test program iozone runs natively under Linux.
  • The tests are partly aimed at server applications.
  • Iozone provides data on a statistical basis (error, deviation, etc.).

Nevertheless we also want to share our subjective experiences with Windows XP: boot time, program start-up and extracting archives.

The filesystem is created as follows:

mkfs.xfs -d sunit=128 -d swidth=4096 -l size=128m -f /dev/sdb
We mounted with the following options:
mount -o rw,noatime,nousrquota,logbufs=8 /dev/sdb /mnt/ssd
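For readers unfamiliar with these options: sunit and swidth set the stripe unit and stripe width in 512-byte sectors, -l size=128m reserves a 128 MByte log, noatime suppresses access-time updates on every read and logbufs=8 gives XFS eight in-memory log buffers.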

Why do we test different block sizes?

It is important to reproduce scenarios of daily usage. Certain parameters need to be varied during the test to make a meaningful statement about the product. In our test this parameter is the block size: it defines the size in KBytes that is read from or written to the drive per transaction.
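Purely as an illustration of how such a run can look with iozone, not the authors' published command line (the record sizes, the 4 GByte file size and the test-file name are our assumptions):

iozone -e -i 0 -i 1 -i 2 -r 4k -r 1024k -s 4g -f /mnt/ssd/iozone.tmp
# -i 0/1/2: sequential write/rewrite, read/reread and random read/write
# -r: record (block) size, -s: file size (at least twice the 2 GByte of RAM), -e: include fsync in the timing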

With this method one can test the reading and writing of both small and big files. In a normal personal computer environment you usually don't find many files smaller than 16 KByte. The relative share of small files is much larger on a mail or database server. Tests with small block sizes are therefore of particular interest for database-based applications.

In bigger RAID arrays the hard disk cache is usually disabled and the RAID controller takes over the job of caching. It is exactly in such setups that drives have to be very fast when reading or writing small amounts of data; sequential throughput is not interesting in this case.

Furthermore, smaller block sizes are more interesting because with big block sizes the read and write performance of this SSD is constant. But now let's move on to the interesting part, the tests:

Database tests

Setup

With the script pgbench you can easily create a database with ten million entries. Afterwards these data are read, manipulated and written back. Admittedly this is a very basic test, but it underlines the differences between an SSD and a conventional hard drive.

The whole transaction looks as follows:

  • BEGIN;
  • UPDATE accounts SET abalance = abalance + :delta WHERE aid = :aid;
  • SELECT abalance FROM accounts WHERE aid = :aid;
  • UPDATE tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
  • UPDATE branches SET bbalance = bbalance + :delta WHERE bid = :bid;
  • INSERT INTO history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  • END;

The test is done with the PostgreSQL standard settings. To make sure the database is created on the SSD tablespace, we do the following:

CREATE TABLESPACE ssdbench LOCATION '/mnt/ssd/tablespace';
createdb -D ssdbench pgbench

Afterwards the database is filled:

pgbench -i -s 100 pgbench

With this we enter ten million tuples into the database. Now we run the script with 10,000 transactions per client over 100 connections. The point is to keep the pipeline filled with transactions, which is why one chooses such a large number of transactions:

pgbench -n -t 10000 -v -c 100 pgbench

Afterwards we get a result in transactions per second (tps):

transaction type: TPC-B (sort of)
scaling factor: 100
number of clients: 100
number of transactions per client: 10000
number of transactions actually processed: 1000000/1000000
tps = 1344.524649 (including connections establishing)
tps = 1344.802165 (excluding connections establishing)