Experimental Probability: An Essential Tool for Decision-Making!

We inherently use probability in our daily lives, often without realizing it; from weather forecasts and stock analysis to predicting election results or the next soccer champion, probability theory surrounds us all. Experimental probability uses the relative frequency approach to estimate the chance of an event occurring based on past records. For example, before a vehicle insurance company issues your car policy, it will study your history of settled claims, evaluate the kind of driver you are and so on, to work out the probability of you being involved in an accident. This course shows you how they use probability to calculate those chances.

Experimental probability is used in most real-life situations where the probabilities cannot be determined theoretically. Most of the time we need past records, results from trials or experiments, or statistical data to derive the probability of an event. In such cases, experimental or empirical probability comes into play. We will discuss the various concepts, terms and rules of experimental probability to truly understand its importance in our lives. If you are a manager, stock trader, economist, statistician or businessman, you will find the subject of probability not only interesting but also useful for solving many of your decision-making problems.

What is Experimental Probability?

Simply put, experimental probability is the approach of finding the probability of an event based on the relative frequency of its occurrence in the past. Say, for example, you roll a die 100 times and the number 3 comes up in 18 of those rolls. Then we would say that the experimental probability of a 3 coming up in a roll of the die, based on this experiment, is 18/100, i.e. 0.18 or 18%. If you want to learn the concepts of experimental probability theory, skip right ahead to this course.
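
If you like to see ideas in code, here is a minimal Python sketch of the die example (the function name, default trial count and use of Python's random module are illustrative choices, not part of the original experiment): it rolls a simulated die 100 times and reports the relative frequency of a 3 next to the theoretical 1/6.

```python
# Minimal sketch: estimate the experimental probability of rolling a 3.
import random

def experimental_probability(trials: int = 100, target: int = 3) -> float:
    """Estimate P(target) as the relative frequency over `trials` die rolls."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == target)
    return hits / trials

if __name__ == "__main__":
    estimate = experimental_probability()
    print(f"Experimental probability of rolling a 3: {estimate:.2f}")
    print(f"Theoretical probability of rolling a 3: {1/6:.2f}")
```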

While theoretical probability, or the classical approach, is based on mathematical formulae, experimental probability is calculated by performing the event in question as an experiment. One has to conduct a large number of trials, analyse the results, and study the data collected, by means of tables and charts, to arrive at the probability.

Example

Let’s take a simple case of tossing a coin and rolling a die simultaneously. We need to find the joint probability of getting heads on the coin toss and getting a 6 on the roll of the die.

So, with the theoretical approach, P(A and B) = P(heads) × P(6 on the die) = 1/2 × 1/6 = 1/12, i.e. roughly 0.083 or 8.3%.

However, the experimental probability approach would be to toss the coin several times, roll the die an equal number of times and record the data in tabular form. Normally, a minimum of 25 trials is conducted for experimental probability. Once you have recorded the results, you need to see how many times the event of ‘heads up’ AND ‘a 6 on the die’ happened together. See the data below for easy understanding. Here C = toss of the coin, D = roll of the die and E = the event of getting heads and a 6 on the die together. The number of trials is 25:

C = { T, H, T, T, H, T, T, T, T, H, T, T, H, T, H, H, T, T, T, H, H, T, T, T, H }
D = { 4, 4, 2, 1, 5, 6, 3, 2, 1, 6, 5, 6, 4, 3, 1, 2, 5, 2, 1, 6, 3, 1, 4, 3, 5 }
E = No for every trial except trials 10 and 20, where heads and a 6 occurred together (Yes)

You can clearly see the results of the experiment. The event that heads came up on the coin AND a 6 on the die happened together TWICE. Therefore the experimental probability is P(A and B) = 2 out of 25 trials, i.e. 2/25 = 0.08 or 8%, which is close to the theoretical probability of the same event.

While, in this experiment, the result comes out very close to that of the theoretical approach, most often the two are not equal and may only be roughly comparable. This is acceptable, because not all experiments yield the same results; they depend on how well the trials are conducted and on the accuracy of the calculations. This is a simple problem chosen to explain the concept of experimental probability; in real-life situations, data is collected from a huge number of samples to arrive at conclusions. You may want to build a foundation to better understand probability by taking this course.
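
For readers who prefer code, here is a minimal Python sketch of the coin-and-die experiment (the function name and the 25-trial default are illustrative assumptions): it simulates paired trials and compares the observed relative frequency of ‘heads and a 6’ with the theoretical 1/12.

```python
# Minimal sketch: simulate paired coin-toss and die-roll trials.
import random

def joint_experiment(trials: int = 25) -> float:
    """Relative frequency of 'heads AND a 6' over `trials` paired trials."""
    together = 0
    for _ in range(trials):
        coin = random.choice(["H", "T"])  # one coin toss
        die = random.randint(1, 6)        # one die roll
        if coin == "H" and die == 6:
            together += 1
    return together / trials

if __name__ == "__main__":
    print(f"Experimental P(heads and 6) over 25 trials: {joint_experiment():.3f}")
    print(f"Theoretical P(heads and 6): {1/12:.3f}")
```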

What is Relative Frequency?

The relative frequency is nothing but the ratio of the number of times a desired result occurs during the course of an experiment to the total number of trials conducted.

In our example above, you can see from the data that the event E occurred 2 times and the trials were conducted 25 times in total. So, the relative frequency of event E = 2/25 = 0.08 or 8%. In other words, relative frequency is nothing but the experimental probability approach itself.

Relative frequency is not a theoretical quantity but an experimental one, based on the results of the experiment. Therefore, it is possible to get a different relative frequency every time the experiment is conducted.

For the experimental probability to be closer to the theoretical probability, it is imperative to conduct a large number of trials in statistical experiments. The more trials run in an experiment, the closer the observed relative frequency gets to the calculated theoretical probability of the event. For detailed knowledge of relative frequency and probability methods, take this workshop on Probability and Statistics.
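
As a rough illustration of this point, the following Python sketch (the trial counts chosen are arbitrary, purely for demonstration) estimates the relative frequency of ‘heads and a 6’ at increasing numbers of trials, so you can watch it drift toward the theoretical 1/12.

```python
# Minimal sketch: relative frequency tends toward the theoretical value
# as the number of trials grows.
import random

def relative_frequency(trials: int) -> float:
    """Observed relative frequency of 'heads AND a 6' over `trials` trials."""
    hits = sum(
        1
        for _ in range(trials)
        if random.choice("HT") == "H" and random.randint(1, 6) == 6
    )
    return hits / trials

if __name__ == "__main__":
    for n in (25, 250, 2_500, 25_000):
        print(f"{n:>6} trials: relative frequency = {relative_frequency(n):.4f}")
    print(f"Theoretical probability       = {1/12:.4f}")
```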

Why Experimental Probability?

While some events carry enough information to calculate theoretical probabilities, very often statisticians and analysts need to conduct experimental trials to deduce the probabilities of real-life events, due to a lack of information. For example, in a basketball tournament, you don’t know how a particular team will perform or how many points they will score against their opponents. In such a case, you have to study the historical records of the players and teams and their past performances, and thereby determine the relative frequency of a positive outcome. Theoretical probability cannot be determined in such kinds of scenarios, and this is where the importance of conducting statistical experiments and the approach of experimental probability comes in.

Elements of Experimental Probability

  • Random Variables

The values of possible outcomes in an experiment vary and are determined purely by chance. Such variables are called random variables. In our coin-tossing example, the number of times heads came up and the number of times tails came up are values determined by the trials themselves. Similarly, if we let X denote the number of times a 6 comes up in the rolls of the die, then X could take any value from 0 to 25, because we rolled the die 25 times. Since the value of X is determined by chance, it is called a random variable.
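
A small Python sketch may make this concrete (the function name and the five repetitions are illustrative only): each run of the 25-roll experiment produces a new, chance-determined value of the random variable X.

```python
# Minimal sketch: X = number of 6s in 25 die rolls, observed repeatedly.
import random

def count_sixes(rolls: int = 25) -> int:
    """One observation of the random variable X = number of 6s in `rolls` rolls."""
    return sum(1 for _ in range(rolls) if random.randint(1, 6) == 6)

if __name__ == "__main__":
    # Each repetition of the experiment yields a (possibly different) value of X.
    observations = [count_sixes() for _ in range(5)]
    print("Five observed values of X:", observations)
```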

Random variables, and the probabilities of the values these variables can take, are of utmost interest to researchers and practitioners of experimental probability theory. Based on the random variables and the values they can take within the sample space of the experiment, the probabilities of the different values are determined through various calculations. Learn more about random variables, probability distributions and other theories in our advanced course on probability distributions.

There are two kinds of random variables: discrete and continuous. If the variable’s probabilities can be listed value by value in a probability distribution, it is a discrete random variable; whereas if it is described by a probability density function, it is a continuous random variable. Let’s look at what a probability distribution is to understand these concepts better.
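
To illustrate the distinction, here is a minimal sketch using only Python’s standard library (the binomial and normal distributions are chosen here purely as familiar examples, not taken from the article): the discrete variable is described by a probability mass function, the continuous one by a probability density function.

```python
# Minimal sketch: a discrete random variable has a probability mass function,
# a continuous random variable has a probability density function.
from math import comb, exp, pi, sqrt

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for a discrete random variable X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density f(x) for a continuous random variable X ~ Normal(mu, sigma)."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

if __name__ == "__main__":
    # Discrete: probability of exactly four 6s in 25 die rolls.
    print(f"P(X = 4), X ~ Binomial(25, 1/6): {binomial_pmf(4, 25, 1/6):.4f}")
    # Continuous: density (not a probability) of a standard normal at x = 0.
    print(f"f(0) for a standard normal: {normal_pdf(0.0):.4f}")
```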

  • Probability Distribution

As you know, a random variable takes different values with different probabilities during the experiment, within the specified sample space. If we can pair each value of the random variable with its probability, then we can denote the relation between the two as a probability function f(x), where x is the value of the random variable.

A probability distribution is nothing but a representation of the values of the random variable along with their probabilities, and it is classified as either a discrete or a continuous distribution. The preparation of a probability distribution depends on the nature of the random variable and the data available from the trials.

There are three kinds of approaches to constructing a probability distribution:

  1. Equally likely method – by using theoretical probability and reasoning
  2. Relative frequency method – by using experimental probability based on historical records of the experiment outcomes
  3. Judgmental assessment method – based on subjective probability provided by decision makers, analysts etc. This is useful when both theoretical and experimental probability methods fail due to lack of information as well as historical data.

Based on the type of probability function f(x), a distribution can also be presented either in ordinary or in cumulative form. For more insights into the different elements of Statistical Methods, you can refer to this course.
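
As a concrete illustration of the relative frequency method listed above, here is a minimal Python sketch (the function name and the 1,000-roll count are illustrative assumptions): it builds a discrete probability distribution f(x) for a die from experimental outcomes. Note how the relative frequencies sum to 1, just as a probability distribution should.

```python
# Minimal sketch: build a discrete probability distribution f(x) from
# experiment outcomes using the relative frequency method.
import random
from collections import Counter

def empirical_distribution(trials: int = 1_000) -> dict[int, float]:
    """Map each die face x to its relative frequency f(x) over `trials` rolls."""
    counts = Counter(random.randint(1, 6) for _ in range(trials))
    return {face: counts[face] / trials for face in range(1, 7)}

if __name__ == "__main__":
    f = empirical_distribution()
    for face, prob in f.items():
        print(f"f({face}) = {prob:.3f}")
    print("Sum of f(x):", round(sum(f.values()), 3))  # sums to 1.0
```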

Probability theories are the crux of statistical analysis and are used in a wide range of applications in research, market and business analysis, medicine, insurance, finance, risk management and more. If you are a businessman or analyst, you will surely find experimental probability and other methods essential to solving the most complex decision-making problems!