Troubleshooting Stuck VACUUMs

March 01, 2011

Over the years, I've occasionally encountered situations where VACUUM fails to make progress, and not fully understood why that was happening.  Recently, I've come to a better understanding of how lock conflicts can result in VACUUM stalling out either at the beginning of the table, or part-way through.  (If you're not already familiar with the types of locks that PostgreSQL uses, you may find it helpful to read through my earlier blog post on locking in PostgreSQL before reading on.)


Troubleshooting

As I currently understand it, there are basically two ways that VACUUM can get stuck.  It can block while attempting to acquire a heavyweight lock on the table, or it can block while attempting to obtain a cleanup lock on some block within the table.  If it blocks attempting to obtain a heavyweight lock on the table, it will show up in pg_locks as waiting for a ShareUpdateExclusiveLock on the table, and pg_locks can also tell you who else has a lock on the table.  Typically, you'll find that someone else is performing a VACUUM, CLUSTER, or ALTER TABLE operation on the affected table, or that they've manually locked it using LOCK TABLE.  Once that operation completes (or you kill it), things should get back to normal.
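
For example, assuming the affected table is called my_table (a placeholder name), a query along these lines will show every lock held or awaited on it:

    SELECT pid, locktype, mode, granted
    FROM pg_locks
    WHERE relation = 'my_table'::regclass
    ORDER BY granted, pid;

Rows with granted = false are the waiters (the stuck VACUUM among them); rows with granted = true show who is standing in the way.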

If VACUUM blocks while attempting to obtain a cleanup lock on one of the table's blocks, you won't see an entry in pg_locks, but there are other telltale signs: the VACUUM process will do no I/O, consume no CPU time, and show up as sleeping if you check it using an operating-system-level tool such as top.  Unlike the first type of blocking, this can happen at any point during the VACUUM, not just at the beginning.  In this case, the activity that is blocking VACUUM is some other query that is accessing the requested block, preventing VACUUM from cleaning it up.  A particularly likely culprit is a query that has executed partway through and then stopped, such as one being executed using a cursor.
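
As a hypothetical illustration, a session like the following can hold a block pinned indefinitely (my_table is again a placeholder):

    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM my_table;
    FETCH 10 FROM c;
    -- The cursor is now resting on a heap page, keeping it pinned.  If the
    -- session simply stops here, a VACUUM that needs a cleanup lock on that
    -- page will wait until the cursor moves on or the transaction ends.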

Fortunately, this case is also fairly easy to troubleshoot.  pg_locks won't directly show you who is using the block in question, but because anyone who is accessing the table must take some kind of lock on it, you can still use pg_locks to find out which other backends might be causing the problem, and then look at pg_stat_activity, which shows what every backend in the system is doing, to narrow it down further.  Queries that start, run, and end within a reasonably short period of time shouldn't worry you, but any query that sticks around for a long time without finishing might be the culprit.
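
One way to do that, sketched here with my_table again standing in for the affected table, is to join the two views and ignore your own session:

    SELECT l.pid, a.usename, a.xact_start, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE l.relation = 'my_table'::regclass
      AND l.pid <> pg_backend_pid()
    ORDER BY a.xact_start;

Anything near the top of that list that has been sitting there for a long time deserves a closer look.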

Why It Matters

Of course, when VACUUM gets stuck on a particular table, the VACUUM of that table does not complete; and since only one process can VACUUM a table at a time, starting a second VACUUM on the same table won't help.  Since VACUUM is an important part of the routine maintenance that is essential for good performance, this is not a good thing.

A further problem is that there is a limit on the number of autovacuum workers that can exist at any given time.  If you run VACUUM manually, this limit does not apply, but most people rely on the autovacuum daemon to arrange for vacuuming as necessary.  When an autovacuum worker gets stuck, it means not only that the table the worker was trying to process doesn't get vacuumed, but also that that worker isn't available to vacuum anything else, either.  Pinning down an autovacuum worker can therefore leave the system unable to keep up with the overall amount of vacuuming required to keep it in good working order.
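
If you suspect this is happening, you can check how many workers are allowed and what the active ones are doing; autovacuum workers report their current task in pg_stat_activity:

    SHOW autovacuum_max_workers;   -- only this many workers can run at once

    SELECT procpid, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query LIKE 'autovacuum:%';

A worker whose query_start is many hours old may be stuck rather than merely busy with a large table.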


Although it requires a fairly unusual combination of circumstances, a stuck VACUUM can even lead to an undetected deadlock situation, as Tom Lane recently pointed out.  If a process that has a cursor open on a table attempts to take an exclusive lock on that table while it's being vacuumed, the exclusive lock request and the vacuum will both block until one of them is manually cancelled.
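
A minimal sketch of that sequence, with my_table as a placeholder (the VACUUM could just as easily be an autovacuum worker):

    -- Session 1:
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM my_table;
    FETCH 10 FROM c;         -- leaves a heap page pinned

    -- Session 2:
    VACUUM my_table;         -- stalls waiting for a cleanup lock on that page

    -- Session 1 again:
    LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;
    -- This waits for the VACUUM's lock on the table, while the VACUUM waits
    -- for session 1's buffer pin.  The pin wait isn't a heavyweight-lock wait,
    -- so the deadlock detector never sees the cycle.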

What Can We Do About It?

The best advice for users of existing versions of PostgreSQL is to avoid leaving transactions or cursors open for long periods of time.  This is good advice anyway, for other reasons: open transactions, particularly those which are running at the repeatable read or serializable isolation levels or which have open cursors, can cause a variety of problems for PostgreSQL.  Old row versions must be retained by the system in case the query needs them, so the performance consequences can be quite bad even apart from the possibility of blocking vacuum.  Databases are designed to handle transactions that begin, do some work, and end; PostgreSQL is not unique in having difficulty with transactions that drag on and on.
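
It can also help to look periodically for transactions that have been open a long time; for example (the one-hour threshold here is arbitrary):

    SELECT procpid, usename, xact_start, current_query
    FROM pg_stat_activity
    WHERE xact_start < now() - interval '1 hour'
    ORDER BY xact_start;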

In PostgreSQL 9.1, we have already made one improvement that should help reduce the scope of this problem.  When an autovacuum worker attempts to obtain a heavyweight lock on the table and finds that a conflicting lock is already present, it will just skip the table.  The next round of processing by the autovacuum daemon (normally, one minute later) will try again, and we'll keep trying until we manage to obtain the lock.  As an exception, however, a VACUUM that is required to prevent transaction ID wraparound will still wait for the lock.

It seems that we could improve the other scenario as well: if VACUUM is unable to obtain a cleanup lock on some particular block, it could just skip the block (except when transaction ID wraparound is an issue).  This would have the effect of possibly leaving behind a few dead rows in the affected block, but that seems like a small price to pay to avoid a stuck VACUUM process.  This has not been implemented yet, however.
