When your database gets corrupted, one of the most important things to do is figure out why that happened, so that you can try to ensure that it doesn't happen again. After all, there's little point in going to a lot of trouble to restore a corrupt database from backup, or in attempting to repair the damage, if it's just going to get corrupted again. However, there are times when root cause analysis must take a back seat to getting your database back online.
In general, if you have a working Hot Standby to which you can fail over, or if you have a recent backup you can recover, you shouldn't bother trying to recover the corrupted database. Just fail over or restore from backup. The reason for this is that it's very difficult to determine the nature and extent of the damage to your original database. You can run pg_dumpall (or, better, pg_dumpall -g plus pg_dump -Fc on each individual database) to verify that all of your data can be read, but how do you know that none of that data has been corrupted? Also, what if pg_dumpall itself fails? Plus, even if everything looks OK today, it's hard to be sure that there aren't lurking problems that will bite you in the future. For example, pg_dump will happily dump a table with a duplicate value in a supposedly-unique index; you won't necessarily know about the problem until you try to reload the dump. Even if you successfully reload the dump on another machine, that doesn't guarantee that there aren't subtler problems on the original machine, like an all-visible bit set on a data page containing a dead tuple, that may not cause surprising inconsistencies until some possibly quite distant point in the future.
But sometimes you have no choice. Perhaps you don't have a Hot Standby server (you really should!) or the corruption event affects both machines. Perhaps your data changes too quickly to consider restoring from a backup, or you do restore from backup but still want to extract what you can from the original, now corrupt master. In that case, it's a very good idea to use pg_dumpall and/or pg_dump to dump the corrupted database and restore it into a newly-initialized cluster on known-good hardware. The data you extracted from the old cluster may not be intact, but at least by dumping and restoring it you have a good chance of finding any obvious problems, like duplicated "unique" keys, and you also free yourself of any low-level corruption that doesn't show up clearly at the SQL level. You might do something like this:
pg_dumpall -g > pgglobals
for db in `psql -At -c 'select datname from pg_database where datallowconn'`; do pg_dump -Fc -f pgdb.$db $db; done
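If you go down that road, restoring into the new cluster might look roughly like this; the data directory is made up for the example, and you'll need to skip or special-case databases such as postgres that already exist in a freshly-initialized cluster:
initdb -D /path/to/new/cluster
pg_ctl -D /path/to/new/cluster -w start
psql -f pgglobals postgres
for f in pgdb.*; do db=${f#pgdb.}; createdb $db; pg_restore -d $db $f; done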
Now, on a really rotten day, you might find that pg_dump doesn't work. You might, for example, get an error like this:
pg_dump: schema with OID 559701082 does not exist
There are a few examples of such error messages in the pgsql-bugs archives, leading one poster to ask "Is there something like a system catalog integrity checker?". To meet that need, EnterpriseDB has just announced the availability of a tool called pg_catcheck, which is exactly that.
pg_catcheck checks all of your system catalogs for references to nonexistent objects. In the above case, the problem is a reference to a schema that does not exist. PostgreSQL uses numeric identifiers called OIDs to identify schemas and many other kinds of SQL objects. In the above example, a table, function, or some other object is identified as being located in the schema with OID 559701082, but in fact no such schema exists. You could issue queries against each system catalog individually, trying to figure out which of them contains a reference to that OID; and then you could issue further queries to figure out exactly which row of that table contains the bad reference. After fixing the issue, you might rerun pg_dump and find that there's another, similar problem; and you won't really know how many more things remain to be fixed until pg_dump finally works. Alternatively, you could run pg_catcheck, and it will tell you about all problems of this type throughout the system.
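For instance, tracking down the example error by hand might involve poking around with queries like these; the OID comes from the error message, and the list of catalogs worth checking is nowhere near complete:
psql -c 'select oid, nspname from pg_namespace where oid = 559701082'
psql -c 'select relname from pg_class where relnamespace = 559701082'
psql -c 'select proname from pg_proc where pronamespace = 559701082'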
(For those familiar with the structure of the PostgreSQL system catalogs, a few technical details: pg_catcheck verifies that every oid column that acts like a foreign key points to a matching row in the catalog it references. It also checks oidvector and oid[] columns, as well as references contained in pg_depend, pg_shdepend, pg_description, pg_shdescription, pg_seclabel, and pg_shseclabel using class OID/object OID notation. And, it catches a few other problems, too, like pg_class.relnatts failing to match the number of rows actually present in pg_attribute. Also, it does all of this using only sequential scans, so that it won't be fooled by corrupt system catalog indexes. You could check all of this by hand, but running pg_catcheck is a lot faster.)
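As a rough illustration of the last of those checks, a hand-rolled version of the relnatts cross-check might look something like the query below; pg_catcheck's actual implementation is different, and unlike pg_catcheck this version will happily use whatever (possibly corrupt) indexes the planner picks:
psql -c 'select c.oid, c.relname, c.relnatts, count(a.attnum) from pg_class c left join pg_attribute a on a.attrelid = c.oid and a.attnum > 0 group by c.oid, c.relname, c.relnatts having c.relnatts <> count(a.attnum)'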
pg_catcheck is not a general corruption detector; for that, you should use initdb -k to create a database cluster with checksums enabled. It also won't provide advice on how to recover from system catalog corruption; for that, you should consult an expert. But it's a useful aid in getting your database back to the point where it can be successfully dumped.
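Incidentally, checksums have to be turned on when the cluster is initialized, so if you're rebuilding onto a new, known-good cluster anyway, that's the time to do it; for example (data directory path, again, made up):
initdb -k -D /path/to/new/cluster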