NMJM #4 Optimization pt. 1
🧙‍♂️ Hello Fellow Magicians!
This week, let’s continue our exploration of ML Typologies by diving into Optimization. Specifically, I want to share how I make use of the optimization typology in practice. 🚀

Optimization ⚙️📈
Optimization problems are all about maximizing (or minimizing) some objective. That’s why I reach for the tools and approaches of optimization whenever someone is trying to find the "best" option for something. In my work, I’ve approached optimization in a few different ways:
Reinforcement Learning 🤖
Mathematical Optimization 🧮
Monte-Carlo Simulation 🎲
Each method has pros and cons, and weighing them helps determine the best approach for a given problem. Let’s break them down! 📊
But first - I’m excited to welcome our first sponsor of the newsletter - 1440! Here at NMJM we approach AI, ML, and Data Science with a clear lens, bringing no fluff, just our honest experiences. The 1440 newsletter approaches the news similarly - please check it out by following the “join for free today” link! It helps keep this newsletter free, too!
Seeking impartial news? Meet 1440.
Every day, 3.5 million readers turn to 1440 for their factual news. We sift through 100+ sources to bring you a complete summary of politics, global events, business, and culture, all in a brief 5-minute email. Enjoy an impartial news experience.
Reinforcement Learning (RL) 🤖💡
Pros ✅:
Explorative Learning: RL doesn’t require a vast, pre-existing dataset of examples to learn from. The system learns by trial and error as it interacts with the environment, making it well-suited for scenarios where you want to explore multiple paths or strategies.
No Complete System Knowledge Needed: As a developer, you don’t have to know every detail of the system you're optimizing. RL can work even if you lack deep insights into how each action affects the environment—perfect for complex, dynamic problems.
Cons ⚠️:
MDP Formulation Required: To use RL, your problem must be formulated as a Markov Decision Process (MDP). This can be tricky, especially if your state or action space is not well-defined or is too large.
Reward Function Challenges: Designing a good reward function is critical to RL success. If your reward doesn’t properly align with your desired outcome, the system may learn unexpected (or unwanted) behaviors.
Explore vs. Exploit Trade-Off: RL must balance exploring new actions (to find better strategies) against exploiting known strategies (to maximize reward). Striking that balance is hard, and getting it wrong can mean slow learning or suboptimal solutions - the sketch after this list shows the simplest version of the trade-off.
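To make the explore-vs-exploit idea concrete, here is a minimal sketch of tabular Q-learning with an epsilon-greedy policy on a toy two-state environment. Everything here - the states, actions, rewards, and hyperparameters - is invented for illustration; a real problem would need its own MDP formulation and a carefully designed reward function.

```python
import random

# Toy two-state environment, invented purely for illustration.
# From either state you can "stay" (small reward) or "move" to the other state;
# moving out of state 1 pays the biggest reward.
STATES = [0, 1]
ACTIONS = ["stay", "move"]

def step(state, action):
    """Return (next_state, reward) for the toy environment."""
    if action == "move":
        return 1 - state, (1.0 if state == 1 else 0.0)
    return state, 0.1

# Q-table: estimated long-run value of each (state, action) pair.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate

state = 0
for _ in range(5000):
    # Explore vs. exploit: random action with probability epsilon, greedy otherwise.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])

    next_state, reward = step(state, action)

    # Standard Q-learning update toward reward + discounted best future value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```

After enough iterations the Q-values should favor "move" in both states, which is the point: the agent discovers the better strategy by trial and error rather than from a labeled dataset.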
Mathematical Optimization 🧮📐
Mathematical optimization involves finding the best solution from a set of feasible solutions, given an explicit mathematical formulation of the objective and constraints. Commonly used methods include Linear Programming, Integer Programming, and Quadratic Programming.
Pros ✅:
Optimal Solutions: Mathematical optimization often guarantees optimality (or near-optimality) of solutions under given constraints, making it a great choice for problems where precision is critical.
Deterministic Outcomes: Solutions are derived based on exact algorithms or formulas, meaning that under the same conditions, the solution is always the same. This makes the process highly reliable and repeatable.
Highly Interpretable: Solutions and their constraints are easy to interpret and validate, which is particularly useful in fields like operations research and logistics.
Cons ⚠️:
Scalability Issues: As the problem size grows, solving the optimization model can become computationally intensive. For large, complex systems, the process can be slow or even infeasible.
Requires Well-Defined Constraints and Objectives: Mathematical optimization relies on clear constraints and objective functions. If the problem is not easily representable as equations or inequalities, this approach may struggle.
Limited Flexibility in Uncertainty Handling: If there is a high level of uncertainty or non-linearity in your problem, mathematical optimization may not be the best fit without additional techniques like stochastic optimization.
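To give a feel for what this looks like in practice, here’s a tiny linear program solved with SciPy’s linprog. The production-planning numbers are completely made up; the point is how the objective and constraints get written down explicitly and then handed to a solver that returns a provably optimal answer.

```python
from scipy.optimize import linprog

# Hypothetical production-planning problem (all numbers are made up):
#   maximize 40*x1 + 30*x2          (profit per unit of products 1 and 2)
#   subject to   x1 +   x2 <= 40    (labor hours available)
#              2*x1 +   x2 <= 60    (machine hours available)
#              x1, x2 >= 0
# linprog minimizes, so we negate the objective to maximize profit.
c = [-40, -30]
A_ub = [[1, 1], [2, 1]]
b_ub = [40, 60]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")

print("optimal production plan:", result.x)   # units of each product
print("maximum profit:", -result.fun)         # undo the sign flip
```

For this toy problem the solver should land on 20 units of each product for a profit of 1,400, and it will land there every single run - the determinism and interpretability mentioned above.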
Monte-Carlo Simulation 🎲🎲
Monte-Carlo Simulation involves using random sampling and statistical modeling to estimate the probabilities of different outcomes. It's often used for risk analysis, financial modeling, and systems with inherent randomness.
Pros ✅:
Great for Uncertainty: If you’re dealing with problems that involve uncertainty or randomness in outcomes, Monte-Carlo simulation is a natural fit since it models a wide range of possible scenarios.
Versatile and Flexible: Monte-Carlo is flexible across different types of problems, from predicting the weather to optimizing a stock portfolio, making it a robust option when your system has many unknowns.
Easy to Implement: The technique itself is fairly straightforward to implement and is often used when mathematical solutions are too complex or unavailable.
Cons ⚠️:
Computationally Expensive: Monte-Carlo simulations require a large number of iterations to ensure accuracy, which can lead to heavy computational loads. If you're simulating millions of scenarios, you’ll need significant computing power.
Approximate Solutions Only: Unlike mathematical optimization, Monte-Carlo provides estimates, not exact solutions. The accuracy depends on the number of simulations run, and this can sometimes lead to suboptimal or noisy solutions.
Sensitive to Input Variability: The results are highly dependent on the accuracy of input data and assumptions. Small errors in input assumptions can lead to big differences in the simulated outcomes.
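As a quick illustration, here’s a bare-bones Monte-Carlo simulation of a hypothetical portfolio’s annual return. The normal distribution and its parameters are assumptions I picked for the example, not a real model - the takeaway is that once you can sample outcomes, estimating probabilities and percentiles is just counting.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: annual return ~ Normal(mean 7%, volatility 15%).
# These parameters are assumptions for illustration only.
mean_return, volatility = 0.07, 0.15
n_simulations = 100_000

# Sample many possible yearly outcomes.
simulated = rng.normal(mean_return, volatility, n_simulations)

# Estimate quantities of interest directly from the samples.
prob_loss = np.mean(simulated < 0)            # chance of losing money
expected_return = simulated.mean()            # should be close to 7%
worst_5pct = np.percentile(simulated, 5)      # a crude 5% value-at-risk

print(f"P(loss)         ~ {prob_loss:.1%}")
print(f"Expected return ~ {expected_return:.1%}")
print(f"5th percentile  ~ {worst_5pct:.1%}")
```

Note that the answers are estimates: run more simulations and they tighten up, run fewer and they get noisier - exactly the trade-off described in the cons above.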
Each method has its strengths and challenges. Selecting the right one often depends on the nature of your problem, your system constraints, and the desired precision of your solution. 🚀📈
Next week, we’ll dig into the first of our optimization toolboxes and begin to explore the practicalities of developing an RL-based solution. 🤖🛠️
Have you used any of these optimization methods in your work? What challenges have you faced, or what successes have you found?