I previously suggested that we might be able to get parallel sequential scan committed to PostgreSQL 9.5. That did not happen. However, I'm pleased to report that I've just committed the first version of parallel sequential scan to PostgreSQL's master branch, with a view toward having it included in the upcoming PostgreSQL 9.6 release.
Parallel query for PostgreSQL - for which this is the first step - has been a long-time dream of mine, and I have been working on it for several years: starting really in the PostgreSQL 9.4 release cycle, where I added dynamic background workers and dynamic shared memory; continuing through the PostgreSQL 9.5 release cycle, where I put in place a great deal of additional fundamental infrastructure for parallelism; and culminating in today's commits. I'd like to tell you a little bit about today's commits, and what comes next.
But first, I'd like to give credit where credit is due. First, Amit Kapila has been a tremendous help in completing this project. Both Amit and I wrote large amounts of code that ended up being part of this feature, and that code is spread across many commits over the last several years. Both of us also wrote large amounts of code that did not end up being part of what got committed. Second, I'd like to thank Noah Misch, who helped me very much in the early stages of this project, when I was trying to get my head around the problems that needed to be solved. Third, I'd like to thank the entire PostgreSQL community and in particular all of the people who helped review and test patches, suggested improvements, and in many other ways made this possible.
Just as importantly, however, I'd like to thank EnterpriseDB. Without the support of EnterpriseDB management, first Tom Kincaid and more recently Marc Linster, among others, it would not have been possible for me to devote the amount of my time and Amit's time to this project that was necessary to make it a success. Equally, without the support of my team at EnterpriseDB, who have patiently covered for me in many ways whenever I was too busy with this work to handle other issues, this project could not have gotten done. Thanks to all.
OK, time for a demo:
rhaas=# \timing
Timing is on.
rhaas=# select * from pgbench_accounts where filler like '%a%';
aid | bid | abalance | filler
-----+-----+----------+--------
(0 rows)
Time: 743.061 ms
rhaas=# set max_parallel_degree = 4;
SET
Time: 0.270 ms
rhaas=# select * from pgbench_accounts where filler like '%a%';
aid | bid | abalance | filler
-----+-----+----------+--------
(0 rows)
Time: 213.412 ms
Here's how the plan looks:
rhaas=# explain (costs off) select * from pgbench_accounts where filler like '%a%';
QUERY PLAN
---------------------------------------------
Gather
Number of Workers: 4
-> Parallel Seq Scan on pgbench_accounts
Filter: (filler ~~ '%a%'::text)
(4 rows)
The Gather node launches a number of workers, and those workers all execute the subplan in parallel. Because the subplan is a Parallel Seq Scan rather than an ordinary Seq Scan, the workers coordinate with each other so that each block in the relation is scanned just once. Each worker therefore produces a subset of the final result set, and the Gather node collects all of those results.
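The number of workers the Gather node launches is capped by max_parallel_degree, which I set above. As a quick sanity check - this is a sketch of the expected behavior at the time of this commit, and the exact output may differ on your system - setting it back to 0 disables parallelism entirely, and the planner falls back to an ordinary sequential scan:
rhaas=# set max_parallel_degree = 0;
SET
rhaas=# explain (costs off) select * from pgbench_accounts where filler like '%a%';
QUERY PLAN
---------------------------------------
Seq Scan on pgbench_accounts
Filter: (filler ~~ '%a%'::text)
(2 rows)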
One rather enormous limitation of the current feature is that we only generate Gather nodes immediately on top of Parallel Seq Scan nodes. This means that this feature doesn't currently work for inheritance hierarchies (which are used to implement partitioned tables) because there would be an Append node in between. Nor is it possible to push a join down into the workers at present. The executor infrastructure is capable of running plans of either type, but the planner is currently too stupid to generate them. This is something I'm hoping to fix before we run out of time in the 9.6 release cycle, but we'll see how that goes. With things as they are, about the only case that benefits from this feature is a sequential scan of a table that cannot be index-accelerated but can be made faster by having multiple workers test the filter condition in parallel. Pushing joins beneath the Gather node would make this much more widely applicable.
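For instance, a join involving the same table cannot yet be pushed beneath a Gather node. One plausible plan shape - this is hypothetical output, shown only to illustrate the limitation; the actual plan depends on the data and cost settings - is an ordinary join over plain sequential scans:
rhaas=# explain (costs off) select * from pgbench_accounts a join pgbench_branches b on a.bid = b.bid where a.filler like '%a%';
QUERY PLAN
-----------------------------------------------
Hash Join
Hash Cond: (a.bid = b.bid)
->  Seq Scan on pgbench_accounts a
Filter: (filler ~~ '%a%'::text)
->  Hash
->  Seq Scan on pgbench_branches b
(6 rows)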
Also, my experience so far is that adding a few workers tends to help a lot, but the benefits do not scale very well to a large number of workers. More investigation is needed to figure out why this is happening and how to improve on it. As you can see, even a few workers can improve performance quite a bit, so this isn't quite so critical to address as the previous limitation. However, it would be nice to improve it as much as we can; CPU counts are growing all the time!
Finally, I'd like to note that there are still a number of loose ends that need to be tied up before we can really call this feature, even in its basic form, done. There are, very likely, also bugs. Testing is very much appreciated, and please report issues you find to pgsql-hackers (at) postgresql.org. Thanks.
This post originally appeared on Robert Haas' personal blog.