Am I understanding this correctly that dynamic programming == breaking a problem into smaller (recurring) sub-problems and using caching to improve performance?
That is conceptually how dynamic programming works, but in practice you usually build the cache from the bottom up rather than from the top down. It’s a bit like computing the Fibonacci sequence: you can do it top-down with a recursive function plus caching (memoization), but it is usually more efficient to build the table bottom-up instead (tabulation).
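To make the comparison concrete, here is a sketch of both styles in Python (the function names are my own):

```python
from functools import lru_cache

# Top-down: recursion with caching (memoization).
# Each sub-problem is computed on demand and stored by the cache.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: iterate from the base cases upward (tabulation).
# Since each value depends only on the previous two, the "cache"
# shrinks to two variables -- no recursion, no call-stack overhead.
def fib_bottom_up(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both compute the same values, but the bottom-up version avoids the recursion depth limit and the per-call overhead, which is why dynamic programming solutions are typically written that way.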
It’s slightly more nuanced than that, but you’ve got the basic idea.
The caching is effectively mandatory, since the sub-problems overlap: the same sub-problem comes up many times, and without the cache you would recompute it each time.