It's a pleasure to answer!
When doing graph optimization problems by hand, there is no excuse for ignoring the obvious lower bound and continuing the branches all the way down. If I remember correctly, for bigger networks, branch and bound normally needs both a good heuristic/intuition to make good decisions very early and a good (often non-trivial) lower bound to discard bad decisions before branching almost all the way down.
Branch and bound is a general solution strategy for any kind of discrete problem, not only graph-based ones. It is based first of all on a branching rule and, if the branching rule is a good one, also on a bounding rule.
- The branching rule is basically a rule that tells you what to change in a situation you have reached in order to get a new situation. After some branching, situation after situation, you may reach a situation that is also a solution to the problem. One necessary property of the branching rule is that you can prove that branching eventually stops (it may restart somewhere else from another situation, on and on, but eventually it really stops). Another necessary property is that you can prove that, if you branch out everything, somewhere in your branchings you will have reached every possible situation. This ensures that, whatever the optimal solution is, your branching rule is capable of bringing you there.
- The bounding rule is a different thing, and it comes from what the branching rule allows. What do I mean? Sometimes your branching rule has the special property that, going from a certain situation, say S, to a branching of S, say b(S), the "value" of b(S) is always less than the "value" of S. By this I mean that you can prove, in general, that branching always makes your situation worse. If this happens, then you can start bounding things.
How does the whole thing happen? Well, you start from some situation and branch, branch, branch until you find a situation that is a solution to the problem. You record the "value" of this solution and call it COV (current optimal value). You stop branching from this solution, because you have proved that branching only makes the "value" worse. You now go to some other situation you haven't fully branched down from, and branch, branch, branch while checking the values of the situations you find. Every time you reach a value that is worse than COV, you stop branching there and start branching somewhere you haven't yet. If instead you find a situation that is also a solution, and its value is better than COV, then this becomes your new optimal candidate and you change COV to this new value.
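To make this concrete, here is a minimal sketch of the strategy on a toy 0/1 knapsack (my own made-up instance, maximizing value). The "value" of a situation is taken as the value collected so far plus everything that could still be collected; this can only go down as you branch, so as soon as it drops to COV or below, you stop branching there:

```python
# Hypothetical instance: item values, weights, and a capacity.
values = [10, 7, 5, 3]
weights = [4, 3, 2, 1]
capacity = 6

best_value = 0       # COV: value of the best solution found so far
best_choice = []     # which items that solution takes

def branch(i, value, weight, chosen):
    """Branch on item i: either take it or skip it."""
    global best_value, best_choice
    if weight > capacity:
        return                        # infeasible situation: stop here
    if i == len(values):
        if value > best_value:        # a full solution better than COV
            best_value, best_choice = value, chosen
        return
    # Bounding rule: even taking ALL remaining items cannot beat COV,
    # so there is no point branching further from this situation.
    if value + sum(values[i:]) <= best_value:
        return
    # Branching rule: two new situations, "take item i" and "skip item i".
    branch(i + 1, value + values[i], weight + weights[i], chosen + [i])
    branch(i + 1, value, weight, chosen)

branch(0, 0, 0, [])
print(best_value, best_choice)   # prints: 15 [0, 2]
```

The pruning line is the bounding rule, and the two recursive calls are the branching rule; the recursion also shows why branching provably stops (the item index only goes up).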
This strategy is the main one solvers use to do anything, together of course with smart heuristics and highly refined ingredients. But the base of it all is this.
I had one course in which we used a solver for this kind of approach. It was a very interesting experience, but I often found it a bit annoying that most problems were handled by "easy" modelling and then relying on the cleverness of the solver.
I see! Modelling can seem easy at first, but it's definitely not so trivial after the initial examples.
There are many problems for which several equivalent algebraic models are known, but only a few of these models are fast enough for a solver to actually manage to solve things.
Moreover, there are sometimes problems that are better described incompletely on purpose, perfecting the description only after some solving has already happened.
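A toy illustration of that idea (entirely made up, with a brute-force search standing in for a real solver): pick items for value, where many pairs of items conflict. Instead of stating every conflict up front, start with none, solve, add only the conflicts the current solution actually violates, and re-solve:

```python
from itertools import combinations

# Hypothetical data: item values and the FULL set of pairwise
# conflicts (pairs that cannot both be chosen).
values = {"a": 5, "b": 4, "c": 3, "d": 2}
all_conflicts = [("a", "b"), ("a", "c"), ("b", "d")]

def best_subset(active_conflicts):
    """Brute-force the best subset respecting only the conflicts
    added so far (a stand-in for calling a real solver)."""
    items = list(values)
    best, best_val = set(), 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            s = set(combo)
            if any(u in s and v in s for u, v in active_conflicts):
                continue
            val = sum(values[i] for i in combo)
            if val > best_val:
                best, best_val = s, val
    return best, best_val

# Start from an incomplete description: no conflicts at all.
active = []
while True:
    sol, val = best_subset(active)
    # Which conflicts does the current solution violate?
    violated = [(u, v) for u, v in all_conflicts
                if u in sol and v in sol and (u, v) not in active]
    if not violated:
        break                 # solution satisfies the full model: done
    active.extend(violated)   # add only the constraints we actually need

print(sorted(sol), val)       # prints: ['a', 'd'] 7
```

The point is that the solver may never need most of the constraints, so describing the problem lazily can be much cheaper than writing out the complete model from the start.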
Then there are problems for which changing the basis of the space (if you don't know what a basis is, you can think of it as the "alphabet" with which you write vectors in a vector space) changes the solving time by LOTS.
A lot depends on modelling things correctly (or incorrectly), and it's fun to study modelling in depth, or even specific models, as many simplifications and heuristics often arise there.