Saral Shiksha Yojna

Behavioral Research: Statistical Methods

CG3.402
Vinoo Alluri · Monsoon 2025-26 · 4 credits

Memory Triggers

Tiny cues. They reconstruct big topics.

NOIR
Nominal → Ordinal → Interval → Ratio: each level carries more information than the last.
LINeM
Linearity, Independence, Normality, Equal variance, no Multicollinearity (OLS assumptions).
Posterior ∝ Prior × Likelihood
Bayes update mantra. P(D) just normalises.
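A minimal sketch of the update, on a hypothetical coin problem (fair vs. biased), showing that dividing by P(D) only normalises:

```python
import numpy as np
from scipy.stats import binom

# Hypothetical: is a coin fair (p = 0.5) or biased (p = 0.8)?
prior = np.array([0.5, 0.5])                      # P(fair), P(biased)
# Likelihood of observing 8 heads in 10 flips under each hypothesis
likelihood = np.array([binom.pmf(8, 10, 0.5),
                       binom.pmf(8, 10, 0.8)])
unnormalised = prior * likelihood                 # posterior ∝ prior × likelihood
posterior = unnormalised / unnormalised.sum()     # dividing by P(D) just normalises
```

After 8 heads in 10 flips, the posterior shifts sharply toward the biased hypothesis.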
BF₀₁ > 10
Strong evidence FOR the null — something p cannot deliver.
Power = 1 − β
Factors: n, effect size, α, variance, design.
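As a sketch of how these factors trade off, statsmodels can solve for any one of them given the others (the numbers below are illustrative, not from the course):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Power of a two-sample t-test: d = 0.5, n = 64 per group, alpha = 0.05
power = analysis.power(effect_size=0.5, nobs1=64, alpha=0.05)
# Or solve for the per-group n needed to reach 80% power
n_needed = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
```

With a medium effect (d = 0.5), about 64 participants per group give roughly 80% power.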
d = 0.2 / 0.5 / 0.8
Cohen's d benchmarks: small / medium / large.
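A pooled-SD version of d in a few lines (the helper name and data are mine, for illustration):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d using the pooled, Bessel-corrected standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

d = cohens_d([2, 4, 6, 8], [1, 3, 5, 7])   # small-to-medium by the benchmarks
```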
η² = 0.01 / 0.06 / 0.14
ANOVA effect size benchmarks: small / medium / large.
F = MS_between / MS_within
ANOVA core. F ≈ 1 → consistent with the null; F ≫ 1 → evidence of a group effect.
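With three hypothetical groups of equal spread but shifted means, scipy computes the same ratio:

```python
from scipy import stats

# Hypothetical groups: same within-group spread, different means
g1, g2, g3 = [4, 5, 6], [7, 8, 9], [10, 11, 12]
f_stat, p_val = stats.f_oneway(g1, g2, g3)   # F = MS_between / MS_within
```

Here MS_between = 27 and MS_within = 1, so F = 27 with a small p-value.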
df = (r−1)(c−1)
χ² independence.
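A quick check of the df rule on a hypothetical 2×2 table:

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[10, 20],      # hypothetical 2x2 contingency counts
                  [30, 40]])
chi2, p, dof, expected = chi2_contingency(table)
# dof = (2-1)(2-1) = 1
```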
VIF > 5–10
Severe multicollinearity — drop / combine / ridge.
R² = r²
Holds for SIMPLE (one-predictor) regression only.
OR = exp(β)
Logistic regression coefficient interpretation.
SEM = σ/√n
Standard error of the mean shrinks as √n.
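The formula by hand versus scipy, on made-up data:

```python
import numpy as np
from scipy import stats

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
sem_manual = np.std(data, ddof=1) / np.sqrt(len(data))
sem_scipy = stats.sem(data)   # same formula: sample SD over sqrt(n)
```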
1.96 / 2.58
Normal z-critical for 95% / 99%.
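Where those constants come from, via the inverse normal CDF:

```python
from scipy.stats import norm

z95 = norm.ppf(0.975)   # two-sided 95% critical value, approx. 1.96
z99 = norm.ppf(0.995)   # two-sided 99% critical value, approx. 2.58
```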
FWER vs FDR
FWER = P(at least one false positive). FDR = expected proportion of false positives among rejections.
Bonferroni vs Holm vs BH
FWER simple / FWER stepwise / FDR stepwise.
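All three corrections are one call away in statsmodels; on the hypothetical p-values below, BH (FDR control) rejects more than the FWER methods:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
rej_bonf = multipletests(pvals, alpha=0.05, method='bonferroni')[0]
rej_holm = multipletests(pvals, alpha=0.05, method='holm')[0]
rej_bh = multipletests(pvals, alpha=0.05, method='fdr_bh')[0]
# BH rejects at least as many tests as Holm, which rejects at least as many as Bonferroni
```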
Sensitivity vs PPV
P(+|D) vs P(D|+): related by Bayes' rule, but not equal.
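A worked example with hypothetical numbers: even a highly sensitive test has a low PPV when the condition is rare.

```python
# Hypothetical screening test for a rare condition
sens, spec, prev = 0.99, 0.95, 0.01
# Bayes: PPV = P(D|+) = sens*prev / (sens*prev + (1 - spec)*(1 - prev))
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
# Sensitivity is 0.99, yet PPV is only about 0.17 at 1% prevalence
```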
Bessel's correction
Divide by n − 1 in sample variance. One DoF spent on x̄.
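The correction is just numpy's `ddof` argument:

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
biased = np.var(data)             # divides by n
unbiased = np.var(data, ddof=1)   # divides by n - 1 (Bessel's correction)
```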
Sphericity violation → GG
Greenhouse-Geisser correction multiplies df by ε.
Always plot raw data
Anscombe's quartet manifesto.
PCA vs FA
PCA: variance, no error term. FA: latent constructs, error term.
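A sketch of the contrast on simulated data, where one latent construct drives five observed variables (all names and numbers here are illustrative): PCA maximises explained variance, while FactorAnalysis additionally estimates a per-variable error (uniqueness) term.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Simulated: one latent construct driving five observed variables, plus noise
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 1))
X = latent @ rng.normal(size=(1, 5)) + 0.3 * rng.normal(size=(300, 5))

pca = PCA(n_components=2).fit(X)            # components maximise variance
fa = FactorAnalysis(n_components=2).fit(X)  # shared variance + per-variable error
```

PCA's first component absorbs most of the variance; FA exposes the error term via `fa.noise_variance_`, one entry per observed variable.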
EFA → CFA
Discover structure, then test on held-out data.