Kevin Closson’s Silly Little Oracle Benchmark, aka SLOB, is a great free tool for testing the IO capability of an OLTP-type system using small 8KB IOs with varying IO patterns (update percentages). It drives IO through the database engine itself, which gives you an understanding of the possible IO capability of the platform and of what the database sees from the underlying infrastructure. By collecting AWR reports you can measure the IO throughput and latency of the database after each test and use that as a baseline for comparison when making changes. The purpose of using SLOB is to test the underlying infrastructure, not the database software itself. This article covers example SLOB and guest OS configurations used to test two different versions of Nutanix AOS software on an all flash cluster. The only change between the sets of tests was the Nutanix AOS software version, to demonstrate the difference in performance that can be achieved by a simple one click upgrade from AOS 4.7 to AOS 5.0. SLOB was chosen because it is an easy way to set up a repeatable and measurable test.
I first wrote about using SLOB in my article All Flash Performance on Web Scale Infrastructure. The tests there used a single 2 node Oracle RAC cluster to drive load, whereas the tests for this article use multiple Oracle Database VMs scaled out across multiple servers. The two sets of tests demonstrate the linear, very predictable performance scaling of the Nutanix platform: as you add more resources (nodes and VMs), you get more performance.
The tests used either 4 or 8 single instance Oracle database VMs. Each VM was installed with RHEL 7.2 x86_64 and configured with 12 vCPU and 32GB of memory.
Disk Configuration for Each VM
- 1 x 100GB vDisk for Linux Operating System
- 1 x 10GB vDisk for ASM diskgroup dedicated for ASM SPFILE (OCR)
- 16 x 125GB vDisks for ASM DATADG diskgroup dedicated for Oracle database datafiles and online redo logs
- Linux UDEV was used for disk persistence, i.e. no ASMLIB (see the example udev rule after this list)
- When using VMware, the virtual disks (VMDKs) should be split across 4 x PVSCSI controllers; Hyper-V should use SCSI controllers on Generation 2 VMs; AHV presents SCSI devices directly to the guest OS by default
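As a minimal sketch of the udev approach, a rule like the one below maps a vDisk to a stable device name with the ownership ASM expects. The device serial, symlink name, owner and group shown here are placeholders for illustration, not the exact rules used in these tests.

# /etc/udev/rules.d/99-oracle-asmdevices.rules (hypothetical serial and names)
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="36000c29abcdef0123456789abcdef01", SYMLINK+="oracleasm/data01", OWNER="grid", GROUP="asmadmin", MODE="0660"

After adding the rules, running udevadm control --reload-rules followed by udevadm trigger applies them without a reboot.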
Each VM was installed with Oracle Grid Infrastructure 12.1.0.2 and Oracle Database software 12.1.0.2. Below are the initialization parameters used for the SLOB database.
*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0
*.audit_file_dest='/u01/app/base/admin/SLOB1DB/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='+DATADG/slob1db/controlfile/current.256.915710191'
*.db_block_size=8192
*.db_create_file_dest='+DATADG'
*.db_domain=''
*.db_name='SLOB1DB'
*.diagnostic_dest='/u01/app/base'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=SLOB1DBXDB)'
*.filesystemio_options='setall'
*.db_files=2000
*.processes=8000
*.shared_pool_size=4G
*.db_cache_size=1536M
*.parallel_max_servers=0
*.log_buffer=134217728
*.pga_aggregate_target=10G
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.local_listener='LISTENER_SLOB1DB'
Here is the /etc/sysctl.conf that was used during the tests:
# oracle-rdbms-server-12cR1-preinstall setting for fs.file-max is 6815744
fs.file-max = 6815744
#
# oracle-rdbms-server-12cR1-preinstall setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128
#
# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmmni is 4096
kernel.shmmni = 4096
#
# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824
#
# oracle-rdbms-server-12cR1-preinstall setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104
#
# oracle-rdbms-server-12cR1-preinstall setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1
#
# oracle-rdbms-server-12cR1-preinstall setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144
#
# oracle-rdbms-server-12cR1-preinstall setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
#
# oracle-rdbms-server-12cR1-preinstall setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144
#
# oracle-rdbms-server-12cR1-preinstall setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576
#
# oracle-rdbms-server-12cR1-preinstall setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2
#
# oracle-rdbms-server-12cR1-preinstall setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2
#
# oracle-rdbms-server-12cR1-preinstall setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
#
# oracle-rdbms-server-12cR1-preinstall setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
vm.swappiness = 0
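To apply these kernel settings without rebooting, the file can simply be reloaded in place; this is standard sysctl usage rather than anything specific to these tests.

# Reload kernel parameters from /etc/sysctl.conf (run as root)
sysctl -p /etc/sysctl.conf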
SLOB Configuration:
The configuration parameters are standard as per the SLOB documentation and README files, but a few settings are modified depending on the type of test you want to run. The following settings are in slob.conf.
UPDATE_PCT=30
I used 0, 30, 50, or 100 during various tests, depending on the percentage of updates required. Note that an update percentage of 100 produces a roughly 50% random write workload. The graphs below are from tests with this set to 30.
RUN_TIME=7200
7200 seconds gives a 2 hour runtime, long enough to get a result under a sustained workload.
THREADS_PER_SCHEMA=1
I used 1 or 2. A value of 1 was used for most runs; I experimented with 2 at higher update percentages to drive more load. The graphs below are from tests with this set to 1.
None of the other settings need to be modified.
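Putting those together, a minimal slob.conf fragment for the 30% update runs behind the graphs below would look something like this; everything not shown was left at the value shipped with SLOB.

# slob.conf fragment used as the baseline for the graphs below
UPDATE_PCT=30          # 0, 50 or 100 for the other workload mixes
RUN_TIME=7200          # 2 hours of sustained load
THREADS_PER_SCHEMA=1   # raised to 2 only for the heavier update experiments
# all remaining slob.conf settings were left at their defaults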
To execute a test on each VM using 24 vUsers, run ./runit.sh 24 from the SLOB directory.
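Because the scale-out tests run the same workload on every database VM at the same time, it helps to kick them all off from one wrapper script. The sketch below is only an illustration of how that could be orchestrated; the hostnames, the oracle user and the SLOB install path are hypothetical, not taken from the actual test harness.

#!/bin/bash
# Launch the same SLOB run on every database VM in parallel (hypothetical hosts/paths)
VMS="oradb01 oradb02 oradb03 oradb04"   # extend to oradb08 for the 8 VM / 8 node test
VUSERS=24
for vm in $VMS; do
  ssh oracle@"$vm" "cd ~/SLOB && ./runit.sh $VUSERS" &
done
wait   # each run finishes after RUN_TIME (7200s); collect the AWR reports afterwards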
Now let’s look at some results, which I have graphed below. The first test is with 4 VMs on 4 nodes, followed by 8 VMs on 8 nodes, which shows the increase in performance when scaling out the configuration. All data reduction features were enabled during the tests. Data checksums are always enabled and can’t be disabled, unlike on some competing platforms. All the VMs and storage used in these tests take up just 4 rack units. The SSDs used in the all flash Nutanix nodes are standard Intel S3610 SATA SSDs; no NVMe or anything exotic was used. When NVMe platforms are available we will repeat the tests and share the results.
Nutanix AOS 4.7 Performance with Oracle SLOB:
Nutanix AOS 5.0 Performance with Oracle SLOB:
The only difference between the above tests was the version of Nutanix AOS software running on the cluster; no hypervisor upgrade was required to achieve the better performance. Average read and log file write latency were good and scaled fairly linearly. CPU utilization across the nodes was around 60% during the tests and more performance was available, so these tests do not represent peak performance by any means. Scaling up to 2 VMs per node almost doubled IO performance, at the cost of some additional latency as the workload increased. The hardware used during the tests had Intel Xeon v3 (Haswell) processors; the latest generation at the time of writing is v4 (Broadwell). If the same tests were repeated on Broadwell, approximately a 15% improvement in performance would be expected. Software defined storage benefits not only from software improvements but also from improvements in system and CPU performance, which means performance keeps improving at a much faster rate than on traditional infrastructure.
I also decided to capture screenshots similar to those in my previous article from Oracle Enterprise Manager Express while testing against a 4 node Oracle RAC cluster with SLOB, but this time running a 100% read test (UPDATE_PCT=0). Each RAC node VM was configured with 24 vCPU and 32GB RAM. The results were as follows:
Instead of using standard virtual disks for the Oracle RAC nodes in these tests, I used Nutanix Acropolis Block Services with the in-guest iSCSI initiator to connect to 40 iSCSI LUNs that are distributed and load balanced across the Nutanix cluster. This allows a small number of VMs, in this case 4 Oracle RAC nodes, to benefit from a larger number of storage controllers and therefore increased overall performance. There is a latency cost, however, due to the partial loss of data locality for transactions that need to go over the network. Acropolis Block Services allows either VMs or external physical operating systems (Oracle Linux, Oracle VM, Windows and other Linux variants) to benefit from the Nutanix environment.
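For reference, connecting a guest to Acropolis Block Services uses the standard Linux iSCSI tooling; a sketch of the in-guest steps is below. The data services IP address shown is a placeholder, and your target names, volume group and authentication settings will differ.

# Discover the targets presented by the cluster data services IP (placeholder address)
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260
# Log in to all discovered targets; the LUNs then appear as /dev/sd* devices
iscsiadm -m node --login
# Make the sessions persistent across reboots
iscsiadm -m node --op update -n node.startup -v automatic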
Final Word
As with a lot of benchmarks, these tests were conducted in a controlled environment that isn’t subject to the random noise of a production system, and while nothing else was consuming resources on the systems under test. As a result, your mileage in real world environments will differ. However, under the same conditions, with the same configuration, you should be able to reproduce the same or similar results. There are more factors involved in selecting a platform than just performance, but these results compare very favorably to other published HCI SLOB results, especially on latency, which is significantly lower. I partnered with Mellanox on the switching hardware for this environment; their switches produce predictable latency and performance across a variety of message sizes. You can read more about Mellanox switching solutions for Nutanix environments here. For general performance information about Nutanix please see Raising the Bar and Pushing the Envelope on Performance.
This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com. By Michael Webster. Copyright © 2012 – 2016 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.