Using Randomness as a Tool

    There’s a concept in search algorithms that makes a good analogy for life. Imagine a large forested area with small hills and depressions in it. You have the task of finding the highest point in this area. One popular algorithm for finding a maximum like this is to look around from where you are, through the trees, and head uphill. If you repeat this enough times, you will end up in a spot with no higher points in view. This is a local maximum, and possibly the highest point in the area you are searching. In machine learning, this kind of search is called hill climbing or gradient ascent (you’ll more often hear “gradient descent,” which is the same idea pointed at a minimum instead of a maximum, but I digress).
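
To make the analogy concrete, here is a minimal Python sketch of that greedy uphill walk. The function names and the two-hill toy landscape are my own illustrations, not from any particular library:

import math

def hill_climb(f, x, step=0.1, max_iters=1000):
    # Greedy local search: step toward whichever neighbor improves f,
    # and stop as soon as neither neighbor is higher.
    for _ in range(max_iters):
        best = max(x - step, x + step, key=f)
        if f(best) <= f(x):
            return x  # no higher point in view: a local maximum
        x = best
    return x

# A toy landscape with two hills: a small one near x = 1 and a taller one near x = 4.
def landscape(x):
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

print(hill_climb(landscape, x=0.0))  # settles near x = 1 and never sees the taller hill

Starting at x = 0, the walk climbs the nearby small hill and stops there, even though a much higher peak sits a little further away.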
    One problem with this search method is that it only finds local extremes. If there were a higher point elsewhere in our imaginary park, you wouldn’t be guaranteed to find it — suppose the lay of the land required you to descend after your initial ascent before climbing again. You find the highest point nearby, but not necessarily the highest point in your entire search area — the absolute maximum.
    This analogy carries over to other optimization problems. Frequently the best way to optimize a process or decision is to start where you are, improve slightly, test the result, improve again, and so on. This leads to steady improvement, but it rules out improvements that require steps in the wrong direction in order to enable bigger steps in the right direction.
    So how do we avoid this stagnation and make sure that the way we are doing things is the best way and not just a local maximum?
    We can look to the hill-climbing search problem again for some guidance. One way of improving the outcome is to run the search several times, randomizing the starting location each time. If you randomize where you start and you keep finding your way back to the same spot as your maximum, you can be reasonably confident that you have found the absolute maximum.
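
Continuing the sketch from above (and reusing its hypothetical hill_climb and landscape functions), random restarts might look something like this:

import random

def random_restart_climb(f, low, high, restarts=20, step=0.1):
    # Run hill_climb from several random starting points and keep
    # the best local maximum found across all the runs.
    best_x = None
    for _ in range(restarts):
        x = hill_climb(f, random.uniform(low, high), step=step)
        if best_x is None or f(x) > f(best_x):
            best_x = x
    return best_x

print(random_restart_climb(landscape, low=-2.0, high=7.0))  # almost always lands near x = 4

With enough restarts, some of them begin inside the taller hill’s basin, so the final answer stops depending on where you happened to start.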
    If we apply this analogy to the outside world, we can create this randomness by finding alternative perspectives on a given problem. This can come from coworkers with diverse backgrounds, books, blog posts, consultants, friends, etc. A team of very similar people working in isolation on a problem is much more likely to get stuck in one way of thinking. Given the similar starting place, the best they come up with will likely be the same place — even in a world where they have rationally sought continuous improvement. Add even a single person with a different starting approach, and a team who values this person’s opinion, and the team will have an opportunity to seek another maximum for comparison, or, at the very least, to confirm that their original outcome was a good one.
The lessons here are:
  • Seek diverse backgrounds, thoughts and opinions from whatever sources you can and integrate them into your approach
  • If you’re feeling stuck, try something wildly different, then go back to iterative stepping (caveat: consider my 95/40 rule post)
  • Try new things sometimes; you never know when you might be inadvertently stuck in a local maximum
