EDB Tutorial: How To Run a Complex Postgres Benchmark Easily - Master TPC-C in 3 Short Steps

April 24, 2023
Elephant Tutorial

Benchmarking is one of the best ways to ensure the ongoing performance of your Postgres database, providing vital insights that prove incredibly useful as you grow and scale. Luckily, there is a range of tools that can help with effective and efficient benchmarking.

This blog post explores whether a complex benchmark like the TPC Benchmark™ C (TPC-C) can be made as easy to run as the much simpler TPC Benchmark™ B (TPC-B), with the help of AppImage packaging.

pgbench, a PostgreSQL-specific fair-use implementation of the retired TPC-B, is a tool provided with Postgres that people use because it's easy to get, set up, and run. Allow me to reproduce some existing information on how to use pgbench in 3 short and simple steps when PostgreSQL is already up and running:

 

  1. Create a database pgbench:
$ createdb pgbench

 

  2. Initialize (or load) the database with default sizing parameters:
$ pgbench -i pgbench
dropping old tables...
creating tables...
generating data (client-side)...
100000 of 100000 tuples (100%) done (elapsed 0.01 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 0.14 s (drop tables 0.00 s, create tables 0.00 s, client-side generate 0.07 s, vacuum 0.04 s, primary keys 0.03 s).

 

  3. Run a test with default parameters:
$ pgbench pgbench
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
latency average = 0.656 ms
tps = 1523.668127 (including connections establishing)
tps = 2282.622441 (excluding connections establishing)
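The defaults above are tiny (scale factor 1, one client, 10 transactions), which is fine for a smoke test. If you want something a bit more demanding, pgbench's standard options control the data size at initialization and the clients, threads, and duration at run time; the numbers below are only illustrative, not a recommendation:

$ pgbench -i -s 10 pgbench
$ pgbench -c 4 -j 2 -T 60 pgbench

The first command rebuilds the data at scale factor 10 (ten times the default row counts); the second runs 4 client sessions on 2 worker threads for 60 seconds instead of a fixed number of transactions.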

 

There are a number of fair-use TPC-C implementations out there, such as HammerDB, BenchBase, and DBT-2, to name just a few. Can any of these be just as simple to use?

I think so! An experiment with creating an AppImage might help us take a step in that direction. You do need a Linux-based system to run the AppImage on, but the database itself can be on another system, just like with pgbench.
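As a point of reference, pointing pgbench at a remote server only takes the usual libpq connection options; the host name and port below are made up for illustration:

$ createdb -h db.example.com -p 5432 pgbench
$ pgbench -h db.example.com -p 5432 -i pgbench
$ pgbench -h db.example.com -p 5432 pgbench

How the DBT-2 AppImage is told about a remote database is a detail to check in its documentation rather than assume; the steps below simply use the default local connection.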

Let's see how that works with this DBT-2 AppImage proof-of-concept.
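A quick aside for anyone who has not used an AppImage before: it is a single self-contained executable file, so the usual routine is to download it, mark it executable, and make it reachable under the name you want to type. The file name below is just a placeholder for whatever the actual download is called:

$ chmod +x dbt2-x86_64.AppImage
$ sudo ln -s "$PWD/dbt2-x86_64.AppImage" /usr/local/bin/dbt2

Any other way of putting it on your PATH under the name dbt2 works too; the only point is that the dbt2 command in the steps below resolves to the AppImage.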

 

  1. Build a database called dbt2:
$ dbt2 pgsql-build-db -w 1 dbt2
[... CREATE DATABASE, CREATE TABLE, COPY, CREATE INDEX, and CREATE FUNCTION command output omitted ...]

 

  2. Run a 120 second (2 minute) test using easy mode:
$ dbt2 easy -a pgsql -b dbt2 -w 1 -l 120
[... test execution status output omitted ...]

 

  3. Generate a summary of the results (careful: there may be other mix-*.log files in the current directory, depending on how many times dbt2 has been run and on how many processors are on the system; see the note after this step's output for one way to deal with that):
$ dbt2 post-process mix-556525.log
============ ===== ========= ========= =========== =========== =====
          ..    ..   Response Time (s)          ..          ..    ..
------------ ----- -------------------- ----------- ----------- -----
 Transaction     %   Average    90th %       Total   Rollbacks     %
============ ===== ========= ========= =========== =========== =====
    Delivery  3.94     0.001     0.002        4728           0  0.00
   New Order 45.20     0.001     0.002       54215         550  1.01
Order Status  4.03     0.000     0.000        4830           0  0.00
     Payment 42.77     0.000     0.000       51307           0  0.00
 Stock Level  4.06     0.001     0.001        4874           0  0.00
============ ===== ========= ========= =========== =========== =====
* Throughput: 27107.50 new-order transactions per minute(NOTPM)
* Duration: 2.0 minute(s)
* Unknown Errors: 0
* Ramp Up Time: 0.0 minute(s)
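Since each run leaves its own mix-*.log behind (and a single run can produce more than one, depending on how many processors are on the system), it helps to check the timestamps before post-processing. This is just ordinary shell convenience, not part of the tool:

$ ls -lt mix-*.log

And if the run you care about produced a single mix log, the newest one can be picked up automatically:

$ dbt2 post-process $(ls -t mix-*.log | head -n 1)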

Three steps seems to be on par with pgbench, but there is still some room for improvement around how many command-line parameters are really needed.

 

Want to try it? Download it here. The documentation is also being updated and is available online.
