Many environments run MySQL and PostgreSQL to support their systems. My team at Nutanix and I have received a lot of enquiries about how to set up these databases for best performance, and customers have also been using them to benchmark and baseline different systems. One of the challenges with these databases is that they give only limited control over where data files and transaction logs can be placed, which makes increasing IO parallelism a bit of a challenge. Your database is just an extension of your storage, and all storage devices, even virtual ones, have a limited queue depth to work with. Unlike Oracle, SQL Server, Sybase, or DB2, you can’t just create a whole bunch of mount points and spread your data files over them (which increases available queue depth and potential IO parallelism). But the solution to this problem is quite simple with Linux LVM (Logical Volume Manager). I’ll take you through some of the steps I took to set up a test VM for MySQL testing with HammerDB and for PostgreSQL testing with PGBench.
Firstly, what are HammerDB and PGBench?
HammerDB is an open source database testing tool that runs a workload styled along the lines of TPC-C. It doesn’t implement a full TPC-C test, so its results can’t be compared with published audited results. There is documentation on the HammerDB test here, and the binaries can be downloaded from SourceForge here. Although these tests don’t compare to published audited results, they can be used to compare two systems that have run the same test. The output is two values: TPM (Transactions Per Minute), which is database specific and can’t be compared between two different database types, and NOPM (New Orders Per Minute), which can be compared between different HammerDB tests and database types. NOPM is the more relevant metric to keep an eye on.
PGBench is a built-in benchmark that ships with PostgreSQL and simulates a TPC-B-style test, which is a database stress test. TPC-B has been declared obsolete by the TPC, and the TPC-B-style test within PostgreSQL can’t be used to compare against any audited results, but it can be used to compare like-for-like tests using the same tool. It should be used only to compare performance results between PostgreSQL databases.
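The PGBench results later in this post were produced with the parameters shown there (scaling factor 1500, 120 clients, 6 threads, 300 seconds). As a rough sketch, assuming the PostgreSQL 9.3 binaries live under /usr/pgsql-9.3/bin and a database named pgbench has already been created, the equivalent commands would look something like this:

```shell
# Populate the pgbench tables at scaling factor 1500 (run once).
/usr/pgsql-9.3/bin/pgbench -i -s 1500 pgbench

# Run the TPC-B style workload: 120 clients, 6 worker threads, 300 seconds.
/usr/pgsql-9.3/bin/pgbench -c 120 -j 6 -T 300 pgbench
```

The -j (threads) option spreads the client connections over multiple pgbench worker threads, which matters once client counts get into the hundreds.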
Setting up a Linux LV for use with PostgreSQL or MySQL (Example based on PostgreSQL):
For the tests I was doing I used a striped logical volume (8k stripe size) across 8 disks of 10GB each and copied the PostgreSQL data directory to the new location. The eight disks were distributed evenly across 4 virtual SCSI controllers (PVSCSI is recommended with VMware vSphere). The total volume size was about 80GB.
1. vgcreate vol0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
2. lvcreate -i8 -I8 -l 100%FREE -n lvol0 vol0
3. mkfs.ext4 -m 0 /dev/vol0/lvol0
4. mkdir /home/postgres
5. Add “/dev/vol0/lvol0 /home/postgres ext4 defaults,noatime,nodiratime 0 0” in /etc/fstab
6. mount -a
7. By default, PostgreSQL installs the data directory at /var/lib/pgsql/9.3/data. I copied the data directory to /home/postgres/data.
8. Now, when we start the PostgreSQL server, we should use the -D option to point to the new location of the data directory:
/usr/pgsql-9.3/bin/postgres -D /home/postgres/data
This should now use the database at /home/postgres and write it across all 8 disks. The vgdisplay and lvdisplay output should look like this:
[root@RT17-CentOS-1 9.3]# vgdisplay
--- Volume group ---
VG Name vol0
System ID
Format lvm2
Metadata Areas 8
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 8
Act PV 8
VG Size 79.97 GiB
PE Size 4.00 MiB
Total PE 20472
Alloc PE / Size 20224 / 79.00 GiB
Free PE / Size 248 / 992.00 MiB
VG UUID hmmbRZ-xoHF-69K6-7DkM-EIqo-jqoY-3X8qz8
[root@RT17-CentOS-1 9.3]# lvdisplay
--- Logical volume ---
LV Path /dev/vol0/lvol0
LV Name lvol0
VG Name vol0
LV UUID ZHija9-OXek-59Ud-Q4uJ-gThy-55Zf-CJRIaD
LV Write Access read/write
LV Creation host, time CentOS-1.local, 2014-10-14 10:34:17 -0400
LV Status available
# open 1
LV Size 79.00 GiB
Current LE 20224
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
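Before loading any data, it’s worth confirming that the stripe layout is what you expect and that IO really does fan out across all eight disks. A quick sketch, assuming the volume group and device names used above:

```shell
# Show the stripe count, stripe size, and backing devices for the LV.
lvs -o lv_name,stripes,stripe_size,devices vol0

# Watch per-disk IO while a test is running; all eight disks should be busy.
iostat -xm 5
```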
One additional step you could take is to create another logical volume and move the pg_xlog directory onto it; this can also improve performance for PostgreSQL. For MySQL the steps are similar, except you move the MySQL home directory to the LVM volume, which you might mount at /home/mysql (check /etc/my.cnf for the location details of the default install).
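A minimal sketch of that extra step, assuming four spare disks (/dev/sdj through /dev/sdm; device names will differ on your system) and that the PostgreSQL server has been stopped first:

```shell
# Create a second striped volume for the transaction log (pg_xlog).
vgcreate vol1 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
lvcreate -i4 -I8 -l 100%FREE -n lvol1 vol1
mkfs.ext4 -m 0 /dev/vol1/lvol1
mkdir /home/pg_xlog
mount /dev/vol1/lvol1 /home/pg_xlog

# Move the WAL files and symlink pg_xlog back into the data directory.
mv /home/postgres/data/pg_xlog/* /home/pg_xlog/
rmdir /home/postgres/data/pg_xlog
ln -s /home/pg_xlog /home/postgres/data/pg_xlog
chown -R postgres:postgres /home/pg_xlog
```

This keeps the sequential WAL writes on their own set of disks and queues, away from the more random data file IO.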
An example of the performance results based on PostgreSQL 9.3 from a single VM with 4 vCPUs and 32GB RAM, running on a Nutanix 3050 node, is below. The templates used for the MySQL HammerDB and PostgreSQL PGBench testing are available from Nutanix via your SE or Nutanix Support.
PostgreSQL Test Results with PGBench:
Running tests using: psql -d pgbench
Script tpc-b.sql executing for 120 concurrent users
transaction type: Custom query
scaling factor: 1500
query mode: simple
number of clients: 120
number of threads: 6
duration: 300 s
number of transactions actually processed: 1021500
tps = 3402.171378 (including connections establishing)
tps = 3403.517277 (excluding connections establishing)
An example of the TPM for a single MySQL database VM with 4 vCPUs and 32GB RAM, running on a Nutanix 3050 node, is below. NOPM is around 31K. Note that TPM from one database type can’t be compared to another database type, so the MySQL TPM below can’t be compared to the TPM of an Oracle database, but the NOPM can be. If you’re running a MySQL test with a similar configuration, though, then you can compare the TPM.
Final Word
Before migrating any system to a new platform you should record or baseline the existing performance and then validate the performance after the migration. Make sure to read and follow any relevant vendor best practices to get the best out of your platforms. Any improvement in performance means less hardware is needed overall to achieve acceptable performance, or that you can run a system for longer before an upgrade is required. Although the HammerDB test doesn’t fully stress the storage, the performance of your storage will make a difference to your test results, as will the setup of your OS and the CPU/memory configuration of your VMs and the platform overall. It’s a good test of the overall platform and all its sub-components.
—
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster +. Copyright © 2012 – 2014 – IT Solutions 2000 Ltd and Michael Webster +. All rights reserved. Not to be reproduced for commercial purposes without written permission.