Many similarity measures are equivalent to, or closely related to, a dot product in an n-dimensional space, even when they are not explicitly calculated that way. In these cases, and possibly others, high values of a.b and b.c do tend to imply a high value of a.c, but the bound on this is not very good - not as good as I first thought.
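For instance, cosine similarity is exactly the dot product of the unit-normalized vectors. A minimal check (the two vectors here are arbitrary examples):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: dot product scaled down by both vector norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def unit(v):
    # Project v onto the unit sphere.
    return v / np.linalg.norm(v)

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 1.0, 0.0])

# Cosine similarity equals the plain dot product once both vectors are normalized.
same = abs(cosine(a, b) - float(np.dot(unit(a), unit(b)))) < 1e-12
```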
With only three vectors involved - a, b, and c - you can draw a 3-dimensional diagram regardless of the dimensionality of the underlying space, and I think the worst case is where all three vectors lie in a plane, with a above b and c below b. In that case, for all three being unit vectors with a.b = b.c = 0.9, a is about 26 degrees above b and c is about 26 degrees below it, and a.c = 0.62. In fact, for a.b = b.c = x in this configuration, a.c = 2x^2 - 1.
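That planar worst case is easy to verify numerically: put b along the x-axis and place a and c at the same angle on either side of it. A quick sketch:

```python
import math

x = 0.9                      # a.b = b.c = x, all unit vectors
theta = math.acos(x)         # angle between a and b (about 25.8 degrees)

# Planar worst case: b on the x-axis, a rotated up by theta, c down by theta.
a = (math.cos(theta),  math.sin(theta))
b = (1.0, 0.0)
c = (math.cos(theta), -math.sin(theta))

ac = a[0] * c[0] + a[1] * c[1]   # dot product a.c = cos(2 * theta)

# Matches the closed form a.c = 2x^2 - 1.
closed_form = 2 * x * x - 1
```

So two similarities of 0.9 only guarantee a similarity of 0.62 between the endpoints, which is why the transitivity bound is weaker than it first looks.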
In these circumstances, if I absolutely had to solve this particular problem, I would try backtracking searches to enumerate sets of nodes very close to a particular node. You could, for instance, start off with the two most similar nodes, and then run a search that, at each level, adds the not-yet-tried node closest to one of the original seed nodes. Or you could build a single-link clustering and check out all of its subtrees of the required size.
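A minimal sketch of the greedy core of that search, assuming a hypothetical precomputed symmetric similarity matrix `sim` (the function name and the example data are invented for illustration; a full backtracking version would explore alternative additions at each level rather than committing to the single best one):

```python
import numpy as np

def grow_from_seeds(sim, k):
    # Greedily grow a set of k nodes: seed with the two most similar
    # nodes, then repeatedly add the untried node closest to either seed.
    n = sim.shape[0]
    off_diag = sim.copy()
    np.fill_diagonal(off_diag, -np.inf)      # ignore self-similarity
    i, j = divmod(int(np.argmax(off_diag)), n)
    chosen = {i, j}
    while len(chosen) < k:
        best, best_sim = None, -np.inf
        for v in range(n):
            if v in chosen:
                continue
            s = max(sim[i, v], sim[j, v])    # closeness to either seed
            if s > best_sim:
                best, best_sim = v, s
        chosen.add(best)
    return chosen

# Toy example: nodes 0, 1, 2 are mutually similar, 3 and 4 are not.
sim = np.array([
    [1.00, 0.95, 0.90, 0.10, 0.20],
    [0.95, 1.00, 0.85, 0.15, 0.10],
    [0.90, 0.85, 1.00, 0.20, 0.10],
    [0.10, 0.15, 0.20, 1.00, 0.30],
    [0.20, 0.10, 0.10, 0.30, 1.00],
])
cluster = grow_from_seeds(sim, 3)
```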