@Pierre 303 already said it, but I'll say it again. DO use indexes on combinations of columns. A combined index on (a, b) is only slightly slower for queries on a than an index on a alone, and is massively better if your query combines both columns. Some databases can join indexes on a and b before hitting the table, but that is not nearly as good as having a combined index. When you create a combined index, put the column that is most likely to be searched on by itself first.
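For example (the table and column names here are made up for illustration), an index that puts the more commonly searched column first looks like:
CREATE INDEX idx_orders_customer_date ON orders (customer_id, order_date);
-- serves WHERE customer_id = ? and WHERE customer_id = ? AND order_date = ?,
-- but not a search on order_date alone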
If your database supports it, DO put indexes on functions or expressions that show up in your queries, rather than on the bare columns. (If you're calling a function on a column in the WHERE clause, a plain index on that column generally can't be used for that condition.)
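For example, in PostgreSQL or Oracle (the table and column here are hypothetical), an index on an expression lets a case-insensitive search use it:
CREATE INDEX idx_users_email_lower ON users (LOWER(email));
-- WHERE LOWER(email) = 'someone@example.com' can now use the index;
-- a plain index on email could not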
If you're using a database with true temporary tables that you can create and destroy on the fly (e.g. PostgreSQL or MySQL, but not Oracle), then DO create indexes on temporary tables.
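For instance in PostgreSQL (the names are made up), a scratch table that you're going to join against repeatedly is worth indexing:
CREATE TEMPORARY TABLE tmp_active_users AS
SELECT id FROM users WHERE last_login > now() - interval '30 days';
CREATE INDEX ON tmp_active_users (id);
ANALYZE tmp_active_users;
-- both the table and its index disappear at the end of the session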
If you're using a database that allows it (e.g. Oracle), DO lock in good query plans. Query optimizers change query plans over time, and usually the change is an improvement. But sometimes they make a plan dramatically worse. You generally won't notice the improvements, since those queries weren't bottlenecks to begin with, but a single bad plan can take down a busy site.
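In Oracle, for example, one way to do this is with SQL plan baselines; this is only a sketch, and the sql_id is a placeholder you would look up in V$SQL:
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  -- capture the currently cached plan for one statement as an accepted baseline
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'your_sql_id_here');
END;
/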
DON'T keep indexes on a table that you're about to do a large data load into. It is much, much faster to drop the indexes, load the data, then rebuild the indexes than it is to maintain them as you load the table.
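A typical pattern (the index and table names are hypothetical) looks like:
DROP INDEX idx_big_table_customer;
-- ... bulk load here, e.g. COPY in PostgreSQL or LOAD DATA INFILE in MySQL ...
CREATE INDEX idx_big_table_customer ON big_table (customer_id);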
DON'T use an index for queries that have to access more than a small fraction of a large table. (How small depends on hardware; 5% is a decent rule of thumb.) For example, if you have data with names and gender, names are a good candidate for indexing since any given name represents a small fraction of the total rows. It would not be helpful to index on gender since you'll still have to access 50% of the rows; you really want a full table scan instead. The reason is that an index winds up accessing a large file randomly, which means disk seeks, and disk seeks are slow. As a case in point, I recently managed to speed up an hour-long query that looked like:
SELECT small_table.id, SUM(big_table.some_value)
FROM small_table
JOIN big_table
ON big_table.small_table_id = small_table.id
GROUP BY small_table.id
to under 3 minutes by rewriting it as follows:
SELECT small_table.id, big_table_summary.summed_value
FROM small_table
JOIN (
    SELECT small_table_id, SUM(some_value) AS summed_value
    FROM big_table
    GROUP BY small_table_id
) big_table_summary
ON big_table_summary.small_table_id = small_table.id
which forced the database to understand that it shouldn't attempt to use the tempting index on big_table.small_table_id. (A good database, such as Oracle, should figure that out on its own. This query was running on MySQL.)
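If you want to check which plan you're actually getting, prepend EXPLAIN to either version (MySQL, PostgreSQL and others support it) and see whether the optimizer chose per-row index lookups or a full scan of big_table:
EXPLAIN
SELECT small_table.id, SUM(big_table.some_value)
FROM small_table
JOIN big_table ON big_table.small_table_id = small_table.id
GROUP BY small_table.id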
Update: Here is an explanation of the disk seek point that I made. An index gives you a quick lookup for where the data is in the table. This is usually a win since you only read the rows you need, but not always, particularly if you will eventually read a lot of data. Disks stream data well but are slow at random lookups: a random lookup to data on disk takes about 1/200th of a second (5 ms). The slow version of the query wound up doing something like 600,000 of those, which at 5 ms apiece is roughly 3,000 seconds, and it took close to an hour. (It did more lookups than that, but caching caught some of them.) By contrast the fast version knew it had to read everything and streamed data at something like 70 MB/second; at that rate an 11 GB table takes around 160 seconds, and it got through the table in under 3 minutes.