HammerDB is a very popular benchmarking tool that supports multiple database engines, including Oracle, SQL Server and PostgreSQL. This is a brief article to bring your attention to some ways you can improve your results and get more valid benchmark data.
We do a lot of testing with HammerDB at Nutanix, running multiple database engines through it for every software release. One of the great things about running databases on Nutanix is that once you know how much performance you get from a given configuration of a node or database, you can scale it out linearly and get the same experience. Gary Little (Performance Engineering at Nutanix) demonstrated this some time ago in his article SuperScalin’: How I learned to stop worrying and love SQL Server on Nutanix – recommended reading. Over the years we’ve found a few things that can improve the consistency and reliability of HammerDB test results.
Sometimes SQL Server might only use a single NUMA node due to the way connections and transactions are assigned in the engine. Gary describes the problem and solution in his article SQL Server uses only one NUMA Node with HammerDB. The solution is a slight modification to how the HammerDB scripts execute.
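If you want to confirm whether this is happening on your system, the SQL Server DMVs make it easy to see which NUMA nodes the benchmark connections are landing on. Below is a minimal sketch, assuming Python with pyodbc and a connection string of your own (the driver name and server in CONN_STR are placeholders, not anything from Gary's article); sys.dm_exec_connections.node_affinity reports which node each connection is bound to.

```python
# Minimal sketch: show how SQL Server connections are spread across NUMA nodes.
# Assumes pyodbc is installed and CONN_STR points at the instance under test
# (both are assumptions for illustration, not part of the original article).
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqltest;Trusted_Connection=yes"
)

QUERY = """
SELECT c.node_affinity AS numa_node,
       COUNT(*)        AS connection_count
FROM   sys.dm_exec_connections AS c
GROUP  BY c.node_affinity
ORDER  BY c.node_affinity;
"""

with pyodbc.connect(CONN_STR) as conn:
    for numa_node, connection_count in conn.execute(QUERY):
        print(f"NUMA node {numa_node}: {connection_count} connections")
```

If all of the HammerDB virtual user connections report the same node during a run, you are likely hitting the behaviour described above.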
Another issue we’ve frequently run into when testing large SQL Server databases that need lots of cores and memory is that not all of the cores get used. Often you might find only 20 CPUs being used by the SQL Server engine, even though you are running Enterprise edition. This is because the ISO and installer being used are not the Core edition of SQL Server Enterprise; you need the Core edition to allow the database engine to use more than 20 CPUs.
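It's worth verifying both the edition string and the number of online schedulers before starting a long benchmark run. The sketch below takes the same hedged approach as the previous one (pyodbc and the connection string are assumptions): SERVERPROPERTY('Edition') should report the core-based licensing edition rather than plain "Enterprise Edition", and the visible online scheduler count should match the CPUs you expect the engine to use.

```python
# Sketch: confirm the SQL Server edition and how many schedulers are online.
# "Enterprise Edition: Core-based Licensing" is what you want to see; the
# plain Enterprise (Server+CAL) installer limits the engine to 20 cores.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqltest;Trusted_Connection=yes"
)

SCHEDULER_QUERY = """
SELECT SUM(CASE WHEN status = 'VISIBLE ONLINE' THEN 1 ELSE 0 END) AS online,
       COUNT(*)                                                   AS visible
FROM   sys.dm_os_schedulers
WHERE  status LIKE 'VISIBLE%' AND status NOT LIKE '%DAC%';
"""

with pyodbc.connect(CONN_STR) as conn:
    edition = conn.execute(
        "SELECT CAST(SERVERPROPERTY('Edition') AS nvarchar(128))"
    ).fetchval()
    online, visible = conn.execute(SCHEDULER_QUERY).fetchone()
    print(f"Edition: {edition}")
    print(f"Schedulers online: {online} of {visible} visible")
```

If the online scheduler count is stuck at 20 on a bigger host, check the installation media before blaming the platform.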
Another common issue, which we have discovered and Gary has documented in his article HammerDB: Avoiding Bottlenecks In Client, is the settings in the HammerDB client itself, specifically around how the client logs data. Check out Gary’s article for the settings you should uncheck before running a test.
Other basic issues we’ve found are the system under test not being sized properly for SQL Server, or the VMs running SQL Server not being aligned to the NUMA topology of the hardware being tested. An 8 vCPU VM isn’t optimal on a 12-core socket, whereas a 6 vCPU VM would be. We have found that right-sizing the VMs produces far better results: in this case, 8 vCPUs might produce fewer transactions per minute and fewer new orders per minute, with higher response times, than 6 vCPUs would. On an 18-core socket, 9 vCPUs or 6 vCPUs are also good options, as the sketch below illustrates. Some people find it difficult to use an odd number of vCPUs, but the scheduler handles it just fine.
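As a quick illustration of the sizing point above, here is a trivial Python helper (purely an example, not something from the original article) that lists the vCPU counts dividing evenly into a socket's physical core count, which is a simple way to keep each VM neatly within the NUMA boundaries of the host.

```python
# Sketch: list vCPU counts that fit evenly within a physical CPU socket so a
# VM's topology lines up with the hardware. The even-divisor rule follows the
# sizing examples in the text; the helper itself is just an illustration.
def aligned_vcpu_options(cores_per_socket: int) -> list[int]:
    """Return vCPU counts from 1..cores_per_socket that divide the socket evenly."""
    return [n for n in range(1, cores_per_socket + 1) if cores_per_socket % n == 0]

if __name__ == "__main__":
    for cores in (12, 18):
        print(f"{cores}-core socket: {aligned_vcpu_options(cores)}")
    # 12-core socket: [1, 2, 3, 4, 6, 12]  -> 6 vCPUs fit evenly; 8 do not
    # 18-core socket: [1, 2, 3, 6, 9, 18]  -> 6 or 9 vCPUs both fit evenly
```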
Final Word
Benchmarking SQL Server with HammerDB can help you determine what performance you might reasonably expect from a platform running different types of databases, and show how the platform scales, but it isn’t exactly the same as your real world workloads. The best way of determining how your workloads will perform is to test them on the platform with test cases you have valid comparisons for, under similar conditions. Always account for maintenance tasks, such as backups, stats updates and reindexing, and allow headroom for growth. I hope the above helps you, and I wish you all the best benchmarking with HammerDB.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – 2020 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.