In a meeting with a prospective client yesterday, the subject of an inefficient query that joined a large number of normalised tables came up. I don't know how many times I have seen this problem over the last few years - quite a lot at any rate. You particularly want to stay out of the zone where the genetic query optimiser (GEQO) kicks in, if possible. Over and over I have found that setting up a denormalised materialised view works really well for this.
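As a point of reference, the planner hands a query over to GEQO once the number of items in its FROM list reaches geqo_threshold (12 by default), so it is easy to check where that line falls on your installation:

show geqo;             -- on by default
show geqo_threshold;   -- 12 by default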
Postgres doesn't have built-in support for materialised views (yet). But they are pretty easy to set up. First you need your actual non-materialised view, which you set up in the usual way, plus a table that you populate from the view:
create view foo as select ....;
create table foo_mat_view as select * from foo;
Add indexes to taste.
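To make that concrete with an invented example, suppose foo joins an orders table to a customers table and you expect to filter on customer_id (all of these names are made up purely for illustration):

create view foo as
    select o.order_id, o.customer_id, c.name, o.total
    from orders o
    join customers c on c.customer_id = o.customer_id;

create table foo_mat_view as select * from foo;

create index foo_mat_view_customer_idx on foo_mat_view (customer_id);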
Now you need to decide how to keep the data current. One very simple way is just to refresh it lazily every so often (say from a cron job):
truncate foo_mat_view;
insert into foo_mat_view select * from foo;
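Since truncate is transactional in Postgres, it is worth wrapping the two statements in a transaction; that way readers block for the duration of the refresh instead of ever seeing an empty table:

begin;
truncate foo_mat_view;
insert into foo_mat_view select * from foo;
commit;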
If it needs to be kept up to date eagerly, then another way to go is to use triggers. Depending on how complex the view is, this can be tricky to work out, but in most cases I have found it can be done quite reasonably. I have seen a few cases where it led to an unacceptable performance hit, but in general triggers work quite well for this purpose.
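As a sketch, sticking with the invented orders/customers schema above, and assuming each row of the view can be keyed by customer_id, a per-row trigger on one of the underlying tables might look like this:

create or replace function foo_mat_view_sync() returns trigger
language plpgsql as $$
declare
    key integer;
begin
    -- Work out which customer's slice of the view is affected.
    if tg_op = 'DELETE' then
        key := old.customer_id;
    else
        key := new.customer_id;
    end if;
    -- Throw away that slice and re-derive it from the real view.
    delete from foo_mat_view where customer_id = key;
    insert into foo_mat_view select * from foo where customer_id = key;
    return null;   -- after trigger, so the return value is ignored
end;
$$;

create trigger foo_mat_view_sync
    after insert or update or delete on orders
    for each row execute procedure foo_mat_view_sync();

(An update that moved an order to a different customer would touch two keys, so a real version would repeat the delete/insert for old.customer_id in that case; you would also want similar triggers on the other underlying tables.)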
This is one of the most productive approaches I know of to performance problems of large joins. It's important to use EXPLAIN ANALYSE as an aid in deciding what goes in your materialised view. It won't work if you just collapse a bunch of joins that are cheap anyway. Choose an expensive set for it and you will find your rewritten queries fly.
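For instance, with the invented schema above, comparing the two plans side by side makes the win (or its absence) obvious:

explain analyse select * from foo where customer_id = 42;
explain analyse select * from foo_mat_view where customer_id = 42;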