A flexible Monte Carlo simulator that uses algebraic expressions to define and connect simulations. A recent update has added the ability to use any function as the basis of a Monte Carlo simulation. All of the previous parameter types are supported by the new simulation class. Additional documentation will be added soon once this new feature is more fleshed out.
The original inspiration for this project came from the Math.NET Symbolics computer algebra library. This library provides a way to evaluate formulas written in a standard format. For example, you can provide a string such as "1/(1 + a)", specify that the value of "a" is 5, and evaluate the string to obtain a result of ~0.167.
The purpose of my library is to extend this functionality by allowing the value of "a" to be generated using a variety of methods. The goal is to provide an easy, flexible way to simulate different types of events using the same building blocks. As long as the outcome can be expressed with an algebraic expression, it can be evaluated by generating values for the underlying parameters. Furthermore, simulations can also be used as parameters for other simulations, essentially allowing you to construct a system of equations to define complex problems.
- Standard Simulation
- Dependent Simulation
- Sensitivity Simulation
- Exhaustive Sensitivity Simulation
- Qualitative Simulation
- Binomial Tree Simulation
- Binomial Option Pricing Simulation
This serves as the building block for most other types of simulations. To create a simulation, you must provide an expression and a collection of parameters. For example:
```csharp
string expression = "twentysided";

IParameter[] parameters = new IParameter[]
{
    new DistributionParameter("twentysided", new DiscreteUniform(1, 20))
};

Simulation simulation = new Simulation(expression, parameters);
```
The above is a simple example of how expressions and parameters interact to create a simulation. The expression tells the simulation that we wish to evaluate the result of one variable named "twentysided". For every variable in the expression, we must define how the value of that variable will be determined. In this case, we have defined the "twentysided" variable to be a discrete uniform distribution from 1 to 20 -- essentially, we are going to roll a twenty-sided die and return the value.
Simulations can become far more complicated. Let's take a look at something that is a bit more useful:
```csharp
string expression = "Rf + B * (Rm - Rf)";

IParameter[] parameters = new IParameter[]
{
    new DistributionParameter("B", new Normal(1, 0.1)),
    new ConstantParameter("Rf", 0.02),
    new DistributionParameter("Rm", new Normal(0.08, 0.035))
};

Simulation simulation = new Simulation(expression, parameters);
```
The expression here represents the Capital Asset Pricing Model (CAPM). The inputs are the risk-free rate (Rf), Beta (B), and the expected market return (Rm). For this simulation, we have decided to hold our risk-free rate constant at 2% and fluctuate our Beta and market return using normal distributions. Presumably, we would do something like this if we were not sure what the exact values should be, but wanted to get an idea of what the range of expected returns looks like for our parameter estimates. A simulation like this would be run multiple times to generate a result set containing summary statistics such as min, max, mean, median, standard deviation, etc.
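To make the mechanics concrete, here is a rough, language-agnostic sketch of what each iteration of this CAPM simulation does, written in plain Python. This is not the library's API, just an illustration of the sampling loop:

```python
import random
import statistics

random.seed(42)

def capm_draw():
    # Rf held constant; B and Rm drawn fresh from normal distributions each iteration
    Rf = 0.02
    B = random.gauss(1.0, 0.1)
    Rm = random.gauss(0.08, 0.035)
    return Rf + B * (Rm - Rf)

# run the simulation many times and summarize the result set
results = [capm_draw() for _ in range(100_000)]
print(statistics.mean(results))   # hovers near 0.08, the expected market return
print(statistics.stdev(results))
```

The summary statistics over `results` correspond to the min/max/mean/median figures the library reports.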
The types of parameters that can be used in a simulation will be detailed below. The test project also serves as a good example of how to create simulations and use the various parameters. However, I will highlight one more feature in this overview -- the ability to include simulations as parameters for other simulations. First, I will show how this is represented conceptually with expressions:
```
x + y + z

x = 10 * a + b
y = (c - d) / 5
z = ln(n^(1/2))
```
The ultimate expression we wish to evaluate is x + y + z. As shown by the full equations, x, y, and z are all stand-alone expressions that need to be evaluated in order to produce a value from our final expression. Instead of writing one complex expression, we can use multiple simulations to break it into pieces that can be evaluated independently. One thing that is not obvious from this example is that the return value produced by a simulation parameter does not have to be the number generated by evaluating the formula, but can be a summary statistic instead. In this case, we could have defined a and b in equation x to be distribution parameters, run simulation x 100 times per evaluation of the main expression, and taken the maximum value from each set of 100 inner simulations. Perhaps I will replace this with a less arbitrary example in the future, but the point I am trying to convey is that being able to use simulations inside of simulations is useful both for breaking complex problems into smaller pieces and for providing enhanced functionality that would be difficult to replicate otherwise. This enhanced functionality will be shown in more detail in the sensitivity simulation section.
A dependent simulation is a special type of simulation that occurs over n periods, where each period's value depends on the value of the previous period. The thought process behind this, and the example that I am about to show, is providing a way to generate a random walk, such as the daily price of a stock over the course of one year.
```csharp
string expression = "value * (1 + dailyChange)";
DistributionParameter normal = new DistributionParameter("dailyChange", new Normal(0.01, 0.0025));
DependentSimulation simulation = new DependentSimulation(100, expression, normal);
```
There are a few key things to note here. The expression passed to the simulation constructor must contain a variable named "value". Additionally, only one parameter can be provided in the constructor. This parameter is called the "change parameter" and should be used in the expression to determine how the value will change from period to period. The "value" variable will be replaced each period with the result of evaluating the expression for the previous period. I may consider allowing multiple change parameter inputs in the future, but keep in mind that the change parameter can be a simulation parameter with an unlimited number of other parameters, so the constraint is generally not too much of a limiting factor.
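Conceptually, each period feeds its result back in as "value". A rough Python sketch of the feedback loop (again, not the library's API):

```python
import random

random.seed(7)

def dependent_simulation(periods, start=1.0):
    """Random walk where each period's value builds on the previous period."""
    values = []
    value = start
    for _ in range(periods):
        daily_change = random.gauss(0.01, 0.0025)  # the "change parameter"
        value = value * (1 + daily_change)         # the expression, with "value" substituted
        values.append(value)
    return values

path = dependent_simulation(100)  # e.g. 100 trading days of a stock price
```

With a +1% average daily change, the path drifts upward over the 100 periods while still fluctuating randomly.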
This type of simulation is designed to assess the impact of changing one or more variables while holding the rest of the expression constant (constant as in the method in which the values are generated does not change, even though the actual values can still fluctuate). To best illustrate this, I will provide an alternative example of the CAPM expression discussed earlier.
```csharp
string expression = "Rf + B * (Rm - Rf)";

IParameter[] parameters = new IParameter[]
{
    new PrecomputedParameter("B", new FloatingPoint[] { 0.5, 1.0, 1.5 }),
    new DistributionParameter("Rm", new Normal(0.08, 0.035)),
    new PrecomputedParameter("Rf", new FloatingPoint[] { 0.015, 0.02, 0.025 })
};

Simulation simulation = new Simulation(expression, parameters);
SensitivitySimulation sensitivity = new SensitivitySimulation(simulation);
```
There are two major differences from above. First, the parameters for B and Rf are using a type of parameter called a PrecomputedParameter. In order to construct a sensitivity simulation, one or more PrecomputedParameters must be used. Second, the SensitivitySimulation is constructed by providing it with a simulation that was created with the PrecomputedParameters.
The simulation will be performed as follows: PrecomputedParameters will be divided into sets, corresponding to the order in which the values were provided. In this case, we would have three sets of parameters for B and Rf, (0.5, 0.015), (1.0, 0.02), and (1.5, 0.025). For each set of parameters, n number of simulations will be performed based on the expression provided, using the PrecomputedParameter values for every simulation and generating a new Rm value each time from the normal distribution. The results of the sensitivity simulation will be a collection of results from the underlying simulations and summary statistics identifying which combination of precomputed values produced the best and worst possible outcomes.
This is very similar to the sensitivity simulation described above, but with one caveat. In the prior example, sets of PrecomputedParameters were generated based on the order in which the values were provided. For the exhaustive variant of the simulation, all possible combinations of the PrecomputedParameters will be tested. This means that the above example would return nine result sets instead of three. Obviously, this can lead to a drastic increase in computation time, so this has been broken into a separate class so the user can choose when to use this version. Additionally, the exhaustive sensitivity simulation provides a multi-threaded version of the Simulate function to speed up computation when resources are available. Neither version of the sensitivity simulation can be used as a simulation parameter, although it seems like it would be feasible to implement if there was a need for this in the future.
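The difference between the two pairing strategies can be shown in a few lines of Python: the standard sensitivity simulation effectively zips the precomputed values together by position, while the exhaustive version takes their Cartesian product:

```python
from itertools import product

betas = [0.5, 1.0, 1.5]
risk_free_rates = [0.015, 0.02, 0.025]

# standard sensitivity: values paired by the order provided -> 3 sets
paired_sets = list(zip(betas, risk_free_rates))

# exhaustive sensitivity: every combination tested -> 9 sets
exhaustive_sets = list(product(betas, risk_free_rates))

print(len(paired_sets), len(exhaustive_sets))  # 3 9
```

With p precomputed parameters of n values each, the exhaustive version runs n^p parameter sets instead of n, which is where the extra computation time comes from.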
Qualitative simulations are designed to provide a non-numeric result to a quantitative simulation. A better way to phrase this might be to say that this type of simulation is meant to represent a decision or classification made based on the observation of the underlying data. To illustrate this, I will use what I hope is a more interesting example than some of the ones I have previously provided.
```csharp
QualitativeParameter param = new QualitativeParameter("arrowkey",
    new QualitativeOutcome("left", 0.25),
    new QualitativeOutcome("right", 0.25),
    new QualitativeOutcome("up", 0.25),
    new QualitativeOutcome("down", 0.25));

QualitativeSimulation simulation = new QualitativeSimulation(param);
QualitativeSimulationResults results = simulation.Simulate(_numberOfSimulations);
```
Imagine you are creating a computer game where the AI opponent will randomly move in one direction each turn. To represent this movement, I have defined a set of outcomes called "arrowkey" where the AI is equally likely to move in any direction. When the Simulate function is called, a result set will be returned that contains the next n moves that the AI will make. The outcomes can be defined as anything you would like, so long as the cumulative probability of all outcomes adds up to 100%.
While the above example serves as a decision, qualitative simulations can also produce classifications. For example:
```csharp
QualitativeConditionalParameter param = new QualitativeConditionalParameter(
    "upordown",
    new DistributionParameter("normal", new Normal(1, 0.1)),
    "flat",
    new QualitativeConditionalOutcome(ComparisonOperator.GreaterThan, 1, "up"),
    new QualitativeConditionalOutcome(ComparisonOperator.LessThan, 1, "down"));

QualitativeSimulation simulation = new QualitativeSimulation(param);
QualitativeSimulationResults results = simulation.Simulate(_numberOfSimulations);
```
The above simulation will generate n values from a normal distribution and classify each value as "up", "down", or "flat" depending on how the observed value compares to the conditional outcome value (1, in this case). The above simulation could be used to answer the question, "what percentage of the time do I expect the stock market to rise or fall on a given day?".
Similar to the first example, qualitative simulations can also be used to simulate the occurrence of discrete events without having to explicitly define the probability of each outcome.
```csharp
QualitativeRandomBagParameter bag = new QualitativeRandomBagParameter("bag", RandomBagReplacement.AfterEachPick);
bag.Add("rare item", 100);
bag.Add("uncommon item", 300);
bag.Add("common item", 600);

QualitativeSimulation simulation = new QualitativeSimulation(bag);
QualitativeSimulationResults results = simulation.Simulate(_numberOfSimulations);
```
The above example serves as a simplified loot table for killing an NPC in a video game. Upon dying, the NPC will drop one item. To determine what item the NPC drops, we have created a theoretical bag containing all possible items. We then added a specific number of each item to the bag, and upon dying, we will reach into the bag and pull out one of these items to give to the player. Imagery aside, what this actually does is generate a simulation identical to the first example in this section, but leaves the hard work of computing the probabilities to the computer. Additionally, it allows for shifting of probabilities over time (i.e. dependent events) if the "WhenEmpty" or "Never" replacement options are specified. Another simple example of how this could be used is to pick names out of a hat, where every name is added to the hat one time and the simulation results contain the order in which the names were chosen.
Although qualitative simulations can stand on their own, they can be used as parameters for standard simulations when wrapped inside a qualitative interpretation parameter. The gist of this is that we can provide an "interpretation dictionary" to decide how to transform a string back into a number used in the evaluation of our expression. A practical example might be creating a simulation where we will make a "go/no go" decision and then multiply the remainder of the expression by 1 for "go" or 0 for "no go" to compute only the outcomes where we chose to proceed.
This is a very specific type of simulation designed to produce a node tree encompassing n periods based on the volatility parameter and up probability provided. By itself, this simulation is not particularly useful, but it served as the foundation for me to create the option pricing version of the simulation.
As this is a specialized finance function, I will not spend too much time on the details. Essentially, this type of simulation is used to compute the price of a call or put option using the binomial options pricing model. I have provided a class for specifying the details of the option to be valued. I am reasonably confident in the basic implementation, but do not advise using this for the purpose of making actual investments. If that is your goal, I would advise hiring me full-time and I will be happy to provide a more robust version of this pricing model, among many other valuable contributions.
- Conditional Parameter
- Constant Parameter
- Dependent Simulation Parameter
- Discrete Parameter
- Distribution Parameter
- Distribution Function Parameter
- Precomputed Parameter
- Qualitative Interpretation Parameter
- Random Bag Parameter
- Simulation Parameter
A parameter whose value will take the value of the first ConditionalOutcome that evaluates as true, or the default value if none of the outcomes evaluate to true. For example, we could create a conditional parameter that assigns an interest rate based on a customer's credit score. Customers under 600 get one rate, under 700 a slightly better rate, over 700 the best rate, etc. The interest rate returned by the parameter would be fed into the evaluation of the equation to determine the expected value based on the customer's loan size. We could then use distribution parameters to generate a number of customers with varying credit scores and loan sizes in order to estimate the total profit generated from our loan business.
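The first-match-wins behavior can be sketched in a few lines of Python (the tiers and rates here are hypothetical, not values from the library):

```python
def interest_rate(credit_score):
    """Return the rate of the first condition that matches, else a default."""
    outcomes = [
        (lambda s: s < 600, 0.12),   # subprime rate
        (lambda s: s < 700, 0.08),   # mid-tier rate
        (lambda s: s >= 700, 0.05),  # best rate
    ]
    default_rate = 0.10
    for condition, rate in outcomes:
        if condition(credit_score):
            return rate  # first true condition wins; later ones are never checked
    return default_rate

print(interest_rate(650))  # 0.08
```

Because evaluation stops at the first match, the order in which the conditional outcomes are supplied matters.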
This is a parameter whose value never changes. While not particularly useful by itself (you can just add a constant to the simulation equation), it is used for optimization in certain situations. It also allows you to replace a hardcoded number with a more meaningful variable name.
A parameter whose values are determined by running an inner simulation the same number of times as the outer simulation. The total number of inner simulations will be equal to numberOfSimulations ^ (#innerSimulations - 1) if you return the results of the simulation. If you choose to return a summary statistic instead, the number of inner simulations will be equal to (numberOfSimulations * innerSimulationRunCount) ^ (#innerSimulations - 1).
A parameter where there are a predetermined number of possible outcomes whose probabilities sum to 100%. This would be used for something like flipping a coin or rolling a die, where there are a finite number of possibilities for which the probabilities are known. While rolling dice is probably the most obvious use, this parameter could also be used to simulate a decision. For example, you could assign 0 a probability of 70% and 1 a probability of 30% and then multiply an expected cash outflow by the discrete parameter to simulate a binary decision of whether or not to sell a piece of equipment.
The full list of available distributions can be found here. All continuous and discrete distributions are supported, but multivariate distributions may not work as expected. Each simulation will randomly choose one value from the specified distribution (with the likelihood of a value being chosen based on the distribution's properties).
A parameter whose value is determined by computing a CDF or PDF function using the location in the distribution generated by the location parameter. This is a slightly more advanced version of the above parameter. Instead of just generating a value from the distribution, it will instead return the CumulativeDistribution, Density, DensityLn, InverseCumulativeDistribution, Probability, or ProbabilityLn function corresponding to a simulated value. The simulated value is generated from another parameter (or simulation).
A parameter whose values have already been computed for every simulation. A simulation using a PrecomputedParameter must run exactly as many simulations as there are PrecomputedValues or it will result in an error.
This is similar to the constant parameter, but requires an exact number of values that do not have to be the same. This parameter is useful for two things. The first is when there is a set of known values for one of the variables in an equation, but the remainder of the variables need to be generated using one of the other parameters. The second is to perform sensitivity analysis where the precomputed parameter(s) will be held constant for a given number of simulations in order to generate a range of outcomes for every option included in the precomputed parameter. This process is described in a little more detail in the Sensitivity Simulation section.
A parameter whose value will be determined by looking up a qualitative value in its InterpretationDictionary. If there are no keys that match the qualitative outcome, the DefaultValue will be returned instead.
This parameter serves as a bridge between qualitative and quantitative simulations. For example, you could use a qualitative simulation to pick the name of an employee to hire and then use the interpretation parameter to look up the salary for the employee.
A parameter that conceptually resembles a bag containing objects that correspond to all possible outcomes. Each outcome's probability is represented by the proportionate number of objects in the bag. One way to conceptualize this is to imagine putting money in a hat. You put in 100 $1 bills, 50 $5 bills, 25 $10 bills, 10 $20 bills, and 1 $100 bill. This parameter will then pick one of those bills from the hat for each simulation that is performed. You could also use this parameter as a shortcut for creating a discrete parameter if you don't want to calculate the probabilities of each outcome yourself.
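The implied probability of each outcome is just its count divided by the total number of objects in the bag; a quick Python check of the money-in-a-hat example:

```python
from fractions import Fraction

bag = {"$1": 100, "$5": 50, "$10": 25, "$20": 10, "$100": 1}
total = sum(bag.values())  # 186 bills in the hat

# implied probability of drawing each bill (assuming replacement after each pick)
probabilities = {bill: Fraction(count, total) for bill, count in bag.items()}

print(probabilities["$100"])  # 1/186
```

This is exactly the calculation the random bag parameter spares you from doing by hand.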
A parameter whose values are determined by running an inner simulation the same number of times as the outer simulation. The total number of inner simulations will be equal to numberOfSimulations ^ (#innerSimulations - 1) if you return the results of the simulation. If you choose to return a summary statistic instead, the number of inner simulations will be equal to (numberOfSimulations * innerSimulationRunCount) ^ (#innerSimulations - 1).
Basically, this is a wrapper for running a simulation inside of another simulation. This can be used to simplify long equations by breaking them into smaller parts, or to simulate a decision making process which another simulation relies upon to determine how to proceed.
- Qualitative Parameter
- Qualitative Conditional Parameter
- Qualitative Random Bag Parameter
A parameter where there are a predetermined number of possible outcomes whose probabilities sum to 100%. For example, if you wanted to flip a coin, you would create two QualitativeOutcomes called "H" and "T" that both have a 50% chance of occurring. The qualitative parameter would then be used to simulate a coin flip each time a simulation is run. The result of the coin flip could be the desired result of the simulation, or it could be used as part of an algebraic equation or decision making process in a larger simulation.
A parameter whose value will take the value of the first QualitativeConditionalOutcome that evaluates as true, or the default value if none of the outcomes evaluate to true. This is probably best illustrated with an example, so let's pretend we are undecided about what restaurant to go to for dinner. We are going to roll a die to decide. If the die lands on 1 or 2, we go to McDonald's. 3, 4, or 5 and we go to Applebee's. 6 and we go to a fancy sushi restaurant.
The QualitativeConditionalOutcome is a singular interpretation of an outcome. We create an outcome that says if the roll is less than 3, we go to McDonald's. We create two more rules to satisfy the other situations mentioned above. The Qualitative Conditional Parameter will then look at every rule we have created and decide which one takes precedence in the situation. When used as part of a simulation, it will reperform the entire roll and evaluate process for each iteration in order to produce the desired number of restaurant choices.
A parameter that conceptually resembles a bag containing objects that correspond to all possible outcomes. Each outcome's probability is represented by the proportionate number of objects in the bag.
The slightly annoying thing about the Qualitative Parameter described above is that you must ensure you provide probabilities that sum to 100%. This parameter is essentially a shortcut for creating the first parameter. You simply decide how many of each item (outcome) should be included in the "bag" and the code does the hard work of calculating the probability of each item being selected. Thematically, each simulation is essentially picking an item at random from the hat. However, the one distinguishing factor from the basic Qualitative Parameter is that you can specify a replacement strategy of always, when empty, or never, which allows for the probabilities of each outcome to change on subsequent simulations, if desired.
Before moving on to probability, I thought it would be good to include an in-depth look at defining a complex simulation and making it easy to use via templates. The code below is taken from Simulations\Templates\Quantitative\Financial.
```csharp
public static Simulation BlackScholes(OptionContractType optionContractType,
    IParameter priceOfUnderlyingAsset,
    IParameter strikePrice,
    IParameter timeToMaturity,
    IParameter volatilityOfReturns,
    IParameter riskFreeRate)
{
    // rename parameters to work with shortened version of equation
    priceOfUnderlyingAsset.Name = "St";
    strikePrice.Name = "K";
    timeToMaturity.Name = "t";
    volatilityOfReturns.Name = "o";
    riskFreeRate.Name = "r";

    // create the pieces needed to form either expression
    string d1 = "(1 / o / sqrt(t) * (ln(St / K) + (r + o^2 / 2) * t))";
    string d2 = "((1 / o / sqrt(t) * (ln(St / K) + (r + o^2 / 2) * t)) - o * sqrt(t))";
    string PV_K = "(K * exp(-r * t))";
    string a1 = "0.254829592";
    string a2 = "-0.284496736";
    string a3 = "1.421413741";
    string a4 = "-1.453152027";
    string a5 = "1.061405429";
    string p = "0.3275911";
    string x1 = $"(abs({d1}) / sqrt(2))";
    string x2 = $"(abs({d2}) / sqrt(2))";
    string t1 = $"(1 / (1 + {p} * {x1}))";
    string t2 = $"(1 / (1 + {p} * {x2}))";
    string sign = optionContractType == OptionContractType.Call ? "+" : "-";
    string N1 = $"(0.5 * (1 {sign} (1 - ((((({a5}*{t1} + {a4})*{t1}) + {a3})*{t1} + {a2})*{t1} + {a1})*{t1}*exp(-{x1}*{x1}))))";
    string N2 = $"(0.5 * (1 {sign} (1 - ((((({a5}*{t2} + {a4})*{t2}) + {a3})*{t2} + {a2})*{t2} + {a1})*{t2}*exp(-{x2}*{x2}))))";

    // create simulation based on type of option
    if (optionContractType == OptionContractType.Call)
    {
        string expression = $"{N1} * St - {N2} * {PV_K}";
        return new Simulation(expression,
            priceOfUnderlyingAsset,
            strikePrice,
            timeToMaturity,
            volatilityOfReturns,
            riskFreeRate);
    }
    else
    {
        string expression = $"{N2} * {PV_K} - {N1} * St";
        return new Simulation(expression,
            priceOfUnderlyingAsset,
            strikePrice,
            timeToMaturity,
            volatilityOfReturns,
            riskFreeRate);
    }
}
```
Before diving into it, I would like to give credit to John D. Cook's Stack Overflow post for helping me figure out how to calculate the Cumulative Normal Distribution Function the hard way.
On a standalone basis, I will admit that the above simulation was more work than is necessary to compute option prices using the Black-Scholes model. In the same file, I have included a simpler function that computes the option price without creating a simulation. However, I believe the extra effort is worth it in order to take advantage of the features included with Simulations.
First, let's address the expression. This expression is easily the longest and most involved one I have written. Normally, I would have broken something like this into multiple simulations and linked them together, but that was not possible in this case because the same variables are being reused in multiple sub-expressions, so the values must stay consistent throughout each simulation. That being said, it is still possible to do something very similar by writing chunks of the longer expression and then concatenating the strings together.
Everything from "string a1" to "string N2" is a neat way to calculate the CDF of the normal distribution using basic mathematical operations. This was necessary because MathNet.Symbolics does not have a built-in way to do this in an expression. However, I do give it props for being able to handle abs, exp, ln, sqrt, etc., and keep up with the plethora of parentheses and operations. This simulation alone gives me great confidence in the ability of the library to handle complex algebraic expressions.
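For the curious, the constants a1 through p are from the classic Abramowitz and Stegun polynomial approximation of the error function, which is what makes the normal CDF expressible with only basic operations. The same computation can be checked in standalone Python against math.erf:

```python
import math

def norm_cdf_approx(d):
    """Polynomial approximation of the standard normal CDF, mirroring
    the a1..a5 and p constants baked into the expression above."""
    a1, a2, a3, a4, a5 = 0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429
    p = 0.3275911
    x = abs(d) / math.sqrt(2)
    t = 1 / (1 + p * x)
    erf = 1 - (((((a5 * t + a4) * t) + a3) * t + a2) * t + a1) * t * math.exp(-x * x)
    return 0.5 * (1 + erf) if d >= 0 else 0.5 * (1 - erf)

# exact value via the built-in error function, for comparison
exact = lambda d: 0.5 * (1 + math.erf(d / math.sqrt(2)))
print(abs(norm_cdf_approx(1.0) - exact(1.0)))  # on the order of 1e-7
```

The approximation is accurate to roughly seven decimal places, which is plenty for option pricing purposes. In the expression itself, the call/put `sign` plays the role of the `d >= 0` branch here.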
The beauty of the template is that all this code is shielded from the user, and they simply have to provide six parameters in order to generate a copy of this simulation. The advantage the simulation provides over the function is that it allows for scenario and sensitivity analysis using any of the parameters discussed prior to this section. For example, you can input an estimate or range of implied volatilities and calculate the option price in each scenario and therefore an average option price. You could also take advantage of sensitivity analysis by providing a set of precomputed strike prices. You could then calculate the price of the option at every strike price and compare that to the market prices of different options to identify anything that is under or overpriced.
The advantage of the simulation is two-fold. First, you do not need to know all of the inputs precisely in order to get an understanding of the value of an option. Second, you can quickly perform sensitivity analysis on any combination of parameters without having to deal with any complicated set-up. This goes back to the vision of the simulation library -- as long as a problem can be defined by an algebraic expression, then it can be turned into a simulation and analyzed with relative ease.
The probability library is a recent addition to address those situations where a more precise analysis of the expected outcome is needed. It is currently broken into Counting and Probability classes, which are detailed below. The ultimate vision for this project would be to add one more module called "Optimizations" that will work in sync with Simulations to determine how to achieve the optimal or desired outcome. Together, Probability, Simulations, and Optimizations should provide all the tools necessary to make good decisions in almost any situation.
- Factorial
- NumberOfSequences
- NumberOfCollections
- SplitCollections
This is a simple recursive function to calculate the factorial of any number (n!). It is not a standard function, but is needed for some of the other counting functions.
This function is used to calculate the number of sequences of k objects that can be chosen from a set of n objects. When calculating the number of sequences, order matters. There are two formulas used depending on the situation:
n^k
This is used when repetition is allowed. For example, when creating a four letter word where any given letter can appear multiple times.
n! / (n - k)!
This is used when repetition is not allowed. For example, when creating a four letter word where each letter can only appear once.
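Both counting rules fit in a few lines of Python, using the four-letter-word example with a 26-letter alphabet:

```python
from math import factorial

def number_of_sequences(n, k, repetition_allowed):
    """Count ordered sequences of k objects chosen from a set of n objects."""
    if repetition_allowed:
        return n ** k                              # n^k
    return factorial(n) // factorial(n - k)        # n! / (n - k)!

# four-letter "words" from a 26-letter alphabet
print(number_of_sequences(26, 4, True))   # 456976
print(number_of_sequences(26, 4, False))  # 358800
```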
This function is used to calculate the number of collections of k objects that can be chosen from a set of n objects. When calculating the number of collections, order does not matter. There are two formulas used depending on the situation:
(n + k - 1)! / (k! * (n - 1)!)
This is used when repetition is allowed. For example, when choosing four letters, without regard for order, where any given letter can be chosen multiple times.
n! / ((n - k)! * k!)
This is used when repetition is not allowed. For example, when choosing four letters, without regard for order, where any given letter can only be chosen once.
For those that are interested, the formula for repetition is actually the same formula used for no repetition, with slight modifications made to n and k:
n = n + k - 1
k = n - 1
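The collection formulas translate the same way, and the substitution noted above can be verified directly, since the repetition count for (n, k) equals the no-repetition count for (n + k - 1, k):

```python
from math import factorial

def number_of_collections(n, k, repetition_allowed):
    """Count unordered collections of k objects chosen from a set of n objects."""
    if repetition_allowed:
        return factorial(n + k - 1) // (factorial(k) * factorial(n - 1))
    return factorial(n) // (factorial(n - k) * factorial(k))

# choosing four letters from 26, order ignored
print(number_of_collections(26, 4, True))   # 23751
print(number_of_collections(26, 4, False))  # 14950

# repetition case == no-repetition formula applied to (n + k - 1, k)
assert number_of_collections(26, 4, True) == number_of_collections(26 + 4 - 1, 4, False)
```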
This function is used to split a set of items into smaller collections. For example, if you had a group of 10 friends that wanted to play basketball, you could use the following formula to count the ways to split them into two teams of 5:
10! / (5! * 5!)
While this is simple enough to remember, the function provides the convenience of being able to do this for an arbitrary number of subcollections with relative ease. The generic version of the formula looks like this:
n! / PRODUCT((sizeOfCollection_1)! ... (sizeOfCollection_n)!)
I added a Product function to make this easier in C#. The Linq Aggregate function serves a similar purpose, but does not work for longs. However, seeing as the numbers calculated by factorials can get very large very fast, I felt it was necessary to create a version of the function that could work with the long data type.
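Assuming SplitCollections computes this multinomial coefficient (the total factorial divided by the product of the group-size factorials), the computation looks like this in Python:

```python
from math import factorial

def split_collections(*group_sizes):
    """Multinomial count: ways to split sum(group_sizes) items into groups
    of the given sizes."""
    n = sum(group_sizes)
    denominator = 1
    for size in group_sizes:
        denominator *= factorial(size)  # the PRODUCT of the factorials
    return factorial(n) // denominator

print(split_collections(5, 5))  # 252 ways to split 10 friends into two teams of 5
```

The same function handles any number of subcollections, e.g. `split_collections(2, 2, 2)` for three pairs from six people.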
- BayesTheorem
- BernoulliTrials
- RandomWalk
- ExpectedValue
- WinningPercentage
- Variance
- StandardDeviation
- ZScore
- ZScoreToProbability
- BoundedZScoreProbability
- ProbabilityToZScore
The theorem is as follows:
P(A | B) * P(B) = P(B | A) * P(A)
P(A) = probability of event A
P(B) = probability of event B
P(A | B) = probability of event A given that B occurs
P(B | A) = probability of event B given that A occurs
This function allows you to provide any three of the above values and will calculate the remaining value.
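The solve-for-the-missing-value idea can be sketched in Python (the argument names here are mine, not the library's):

```python
def bayes_missing_value(p_a=None, p_b=None, p_a_given_b=None, p_b_given_a=None):
    """Solve P(A | B) * P(B) = P(B | A) * P(A) for whichever value is None."""
    if p_a_given_b is None:
        return p_b_given_a * p_a / p_b
    if p_b_given_a is None:
        return p_a_given_b * p_b / p_a
    if p_a is None:
        return p_a_given_b * p_b / p_b_given_a
    return p_b_given_a * p_a / p_a_given_b  # p_b is the missing value

# e.g. P(disease | positive) from a 1% base rate, 95% sensitivity,
# and a 5% overall positive-test rate
print(round(bayes_missing_value(p_a=0.01, p_b=0.05, p_b_given_a=0.95), 6))  # 0.19
```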
Calculates the probability of exactly k events occurring in n trials when each trial has exactly two outcomes. The code is a little hard to follow because I used recursion to provide a few different variations on the calculation, but the basic formula is:
(n choose k) * p^k * (1 - p)^(n - k)
While the above formula can only be used to calculate the probability of an exact number of occurrences, it is possible to iterate through all possible k values until either 0 or n is reached in order to calculate the probability of fewer than or more than a certain number of outcomes occurring. This is what the BernoulliTrialOption enum and recursion are used for.
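The exact-k formula and the iterate-through-k idea look like this in Python (iteratively here, rather than via recursion as in the library):

```python
from math import comb

def bernoulli_trials(n, k, p):
    """Probability of exactly k successes in n independent two-outcome trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def at_least(n, k, p):
    """Sum the exact probabilities from k up to n: P(successes >= k)."""
    return sum(bernoulli_trials(n, i, p) for i in range(k, n + 1))

print(bernoulli_trials(10, 5, 0.5))  # 0.24609375 (exactly 5 heads in 10 flips)
print(at_least(10, 8, 0.5))          # 0.0546875 (8 or more heads in 10 flips)
```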
Calculates the probability of reaching the ending value on a random walk assuming the change in value is 1 each time. The change in value of 1 is a simplifying assumption, but not a limiting one, as 1 can represent any value so long as the starting and ending values are adjusted accordingly. The formula used here is:
s = (1 - probabilityOfWinning) / probabilityOfWinning
probability = (s^startingValue - 1) / (s^endingValue - 1)
The implication is that the further you have to go, the less likely you are to get there. Or said another way, the more trials that are performed, the more likely the outcomes will be distributed in line with the expected probability.
A more illustrative example would be playing blackjack at a casino. If you start with $1000 and plan to stop playing if you reach $0 or $2000, you are actually better off betting $1000 on your first turn. The more rounds you play (when you make smaller bets), the greater the likelihood of you reaching $0 before you reach $2000.
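The formula and the blackjack intuition can be checked numerically in Python (note the formula requires probabilityOfWinning != 0.5, since s = 1 in a fair game):

```python
def random_walk_probability(p_win, start, end):
    """Probability of reaching `end` before 0, starting at `start`,
    with unit steps and per-step win probability p_win (p_win != 0.5)."""
    s = (1 - p_win) / p_win
    return (s**start - 1) / (s**end - 1)

# blackjack-style odds with a small house edge (p roughly 0.49):
one_big_bet = random_walk_probability(0.49, 1, 2)        # bet $1000 once
many_small_bets = random_walk_probability(0.49, 10, 20)  # bet $100 at a time

print(one_big_bet, many_small_bets)  # the single large bet wins more often
```

With the unit step representing $1000, one bet doubles your money with probability 0.49; rescaling the unit to $100 (start 10, end 20) drops the probability to roughly 0.40, confirming that more rounds let the house edge grind you down.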
Calculates the expected value of a set of outcomes based on each outcome's probability and payoff.
SUM(probability * payoff)
Calculates the chances of winning based on the possible scenarios and their specific win rates.
SUM(probabilityOfScenarioOccurring * probabilityOfWinningInThatScenario)
These are the traditional calculations with constructors that allow for a game or set of outcomes to be analyzed. Essentially, they handle calculating the expected value before calculating the variance or standard deviation. These functions are in turn leveraged by the Game object to quickly compute the probability of a given outcome occurring over any number of iterations.
(targetValue - expectedValue) / standardDeviation
This uses the cumulative distribution function of the normal distribution to convert a z-score into a probability.
This function provides a quick way to calculate the probability of the outcome falling in or outside of a range of z-scores.
This uses the inverse cumulative distribution function of the normal distribution to convert a probability into a z-score.
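All four conversions map onto the standard normal CDF and its inverse; in Python, statistics.NormalDist provides both, so the whole group can be sketched as:

```python
from statistics import NormalDist

standard_normal = NormalDist(0, 1)

def z_score(target, expected, std_dev):
    return (target - expected) / std_dev

def z_to_probability(z):
    return standard_normal.cdf(z)                 # cumulative distribution function

def probability_to_z(probability):
    return standard_normal.inv_cdf(probability)   # inverse CDF

def bounded_probability(z_low, z_high):
    """Probability of the outcome falling between two z-scores."""
    return standard_normal.cdf(z_high) - standard_normal.cdf(z_low)

print(round(z_to_probability(1.96), 4))  # 0.975
```

The complement of `bounded_probability` gives the chance of falling outside the range, which is the other case the library's bounded function covers.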
I have intentionally chosen not to specify a license. If you wish to make use of this code, please reach out to me at tim.sullivan25@outlook.com to discuss the purpose of your project.