Random Variables & Probability Distributions
Why Variation Appears
- Almost every physical or engineering experiment shows random variation in its outcomes.
- Measurements rarely repeat exactly because of hidden environmental factors, instrument noise, or truly random processes.
- Such experiments are called random experiments; classic examples include rolling dice or flipping coins.
Probability vs Statistics
- Probability starts with a known model of the process and deduces the likelihood of future events.
- Developed historically to analyse games of chance and to manage risk in engineering design.
- Statistics begins with observed data and infers an underlying probability model, then uses that model to make decisions or predictions.
- Bayesian statistics refines an initial (prior) probability with new evidence to obtain an updated (posterior) distribution.
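As a minimal sketch of the prior-to-posterior update (not from the notes), the classic conjugate example: a Beta prior on a coin's heads probability, updated with hypothetical flip counts.

```python
# Minimal Bayesian update: Beta prior on a coin's heads probability,
# updated with observed flips (Beta-Binomial conjugacy).
# The prior parameters and data below are illustrative, not from the notes.

alpha, beta = 1, 1        # Beta(1, 1) prior: uniform over [0, 1]
heads, tails = 7, 3       # hypothetical evidence: 7 heads in 10 flips

# Conjugate update: posterior is Beta(alpha + heads, beta + tails)
alpha_post, beta_post = alpha + heads, beta + tails

# Posterior mean as a point estimate of the heads probability
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Posterior: Beta({alpha_post}, {beta_post}), mean = {posterior_mean:.3f}")
```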
Random Variables
- A random variable assigns a real-number value to each outcome of a random experiment.
- Discrete: takes countable values (e.g., the number of heads in repeated coin flips).
- Continuous: takes any value in an interval (e.g., temperature).
- Each possible value comes with a probability; together they form a probability distribution.
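A minimal illustration of a discrete random variable and its distribution, using a fair six-sided die (the die example is assumed here):

```python
# A discrete random variable: the value shown by a fair six-sided die.
# Its probability distribution assigns 1/6 to each outcome.
from fractions import Fraction

pmf = {face: Fraction(1, 6) for face in range(1, 7)}

# The probabilities of all possible values must sum to 1.
assert sum(pmf.values()) == 1
print(pmf[3])   # P(X = 3) = 1/6
```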
Uncertainty in Experiments
- Random errors arise from unpredictable fluctuations (noise, drift); systematic errors shift all readings the same way.
- Always report measurements as value $\pm$ uncertainty and propagate uncertainties when combining data.
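A sketch of the standard quadrature rule for combining independent uncertainties in a sum; the readings and uncertainties below are hypothetical:

```python
# Propagating uncertainty through a sum of two independent measurements.
# For independent errors, uncertainties add in quadrature:
#   sigma_total = sqrt(sigma_a**2 + sigma_b**2)
# The readings below are hypothetical.
import math

a, sigma_a = 12.3, 0.2    # first measurement: value +/- uncertainty
b, sigma_b = 4.56, 0.05   # second measurement

total = a + b
sigma_total = math.sqrt(sigma_a**2 + sigma_b**2)
print(f"{total:.2f} +/- {sigma_total:.2f}")   # report as value +/- uncertainty
```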
Probability Basics
- Axioms: non-negativity ($P(A) \ge 0$), total probability ($P(\Omega) = 1$), and additivity for mutually exclusive events ($P(A \cup B) = P(A) + P(B)$).
- Independence means $P(A \cap B) = P(A)\,P(B)$; misunderstanding independence leads to the gambler’s fallacy.
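A quick empirical check of the product rule for independent events, simulating two fair coin flips (the simulation setup is illustrative):

```python
# Empirical check of independence: for two fair coin flips,
# P(A and B) should be close to P(A) * P(B) = 0.25.
import random

random.seed(0)
n = 100_000
count_a = count_b = count_ab = 0
for _ in range(n):
    a = random.random() < 0.5   # first flip is heads
    b = random.random() < 0.5   # second flip is heads
    count_a += a
    count_b += b
    count_ab += a and b

# For independent events, the joint frequency matches the product.
print(count_ab / n, (count_a / n) * (count_b / n))
```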
Bernoulli’s Law of Large Numbers
- In $n$ independent, identical trials, the relative frequency of an event converges to its true probability as $n \to \infty$.
- Demonstrates why long-run averages stabilise and validates frequentist probability.
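A short simulation of Bernoulli's law: the running relative frequency of heads in fair-coin flips settles near 0.5 as $n$ grows (the checkpoints are arbitrary):

```python
# Law of large numbers in simulation: the relative frequency of heads
# in fair-coin flips approaches the true probability 0.5 as n grows.
import random

random.seed(1)
heads = 0
trials = 0
for checkpoint in (10, 100, 1_000, 10_000, 100_000):
    while trials < checkpoint:
        heads += random.random() < 0.5
        trials += 1
    print(f"n = {trials:>6}: relative frequency = {heads / trials:.4f}")
```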
Probability Mass & Density Functions
- Probability Mass Function (PMF): $p_X(x) = P(X = x)$ for discrete $X$.
- Probability Density Function (PDF): satisfies $f_X(x) \ge 0$ and $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$ for continuous $X$.
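A sketch verifying the PDF properties numerically, assuming an exponential density with rate $\lambda = 1.5$ (a simple midpoint Riemann sum stands in for exact integration):

```python
# A continuous distribution via its PDF: the exponential with rate lam.
# Numerically check that the density integrates to ~1, and compute
# P(1 <= X <= 2) by integrating over the interval (midpoint Riemann sum).
import math

lam = 1.5
pdf = lambda x: lam * math.exp(-lam * x)

def integrate(f, a, b, steps=100_000):
    dx = (b - a) / steps
    return sum(f(a + (i + 0.5) * dx) for i in range(steps)) * dx

print(integrate(pdf, 0, 50))   # ~1.0: total probability
print(integrate(pdf, 1, 2))    # P(1 <= X <= 2)
```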
Cumulative Distribution Function
- CDF $F_X(x) = P(X \le x)$ is non-decreasing, right-continuous, and satisfies $\lim_{x \to -\infty} F_X(x) = 0$, $\lim_{x \to \infty} F_X(x) = 1$.
- For a discrete variable, $F_X$ jumps at each possible value; for a continuous variable, $F_X$ is smooth, and $f_X(x) = F_X'(x)$.
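Continuing the exponential example from above (still an assumption, not from the notes): interval probabilities via CDF differences, plus a numerical check that differentiating the CDF recovers the PDF:

```python
# CDF of the exponential: F(x) = 1 - exp(-lam * x) for x >= 0.
# Interval probabilities come from CDF differences:
#   P(a <= X <= b) = F(b) - F(a)
import math

lam = 1.5
cdf = lambda x: 1 - math.exp(-lam * x) if x >= 0 else 0.0

print(cdf(2) - cdf(1))   # P(1 <= X <= 2), matches the PDF integral

# Numerically differentiating F recovers the density f(x) = lam * exp(-lam * x)
h, x = 1e-6, 1.0
print((cdf(x + h) - cdf(x - h)) / (2 * h), lam * math.exp(-lam * x))
```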
Formula Sheet & When to Use
- Relative-frequency estimate of $p$: $\hat p_n = \dfrac{\#\text{ successes}}{n}$
- Apply in Monte Carlo or large-sample experiments (Bernoulli’s law).
- PMF (discrete): $p_X(x) = P(X = x)$
- Use to list probabilities for each outcome of finite/countable $X$.
- PDF (continuous): $f_X(x) \ge 0$, $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$
- Integrate to find probabilities on intervals: $P(a \le X \le b) = \int_a^b f_X(x)\,dx$.
- CDF (general): $F_X(x) = P(X \le x)$
- Differentiate to get $f_X$ (if continuous) or difference to get $p_X$ (if discrete).
- Bernoulli law bound: $|\hat p_n - p| < \epsilon$ with high confidence for sufficiently large $n$.
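Putting the sheet to work: a hypothetical Monte Carlo estimate of $p = P(\text{die shows }6)$ by relative frequency, with the absolute error shrinking as $n$ grows, as the Bernoulli bound predicts:

```python
# Relative-frequency estimate p_hat of p = P(die shows 6) = 1/6,
# with |p_hat - p| shrinking as n grows (Bernoulli's law in action).
import random

random.seed(2)
p_true = 1 / 6
for n in (100, 10_000, 1_000_000):
    successes = sum(random.randint(1, 6) == 6 for _ in range(n))
    p_hat = successes / n
    print(f"n = {n:>7}: p_hat = {p_hat:.4f}, |error| = {abs(p_hat - p_true):.4f}")
```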
Tip: Use box plots and histograms to visualise empirical distributions before fitting theoretical models.
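A minimal plotting sketch along those lines, assuming matplotlib is available and using simulated Gaussian "measurements" as placeholder data:

```python
# Quick look at an empirical distribution before fitting a model:
# histogram plus box plot of simulated measurement data (hypothetical).
import random
import matplotlib.pyplot as plt

random.seed(3)
data = [random.gauss(20.0, 0.5) for _ in range(500)]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(data, bins=30)
ax1.set_title("Histogram")
ax2.boxplot(data)
ax2.set_title("Box plot")
plt.show()
```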