In January 2026, a competitive age-group program with 34 swimmers aged 11–17 ran a structured 30-day reaction time experiment using SwimBip. This is what happened.
The Setup
The coach, a former NCAA Division I swimmer, introduced 15 minutes of reaction time work at the beginning of three practices per week. All sessions used SwimBip's reaction training mode with the interval set to 5 seconds.
Baseline reaction times were recorded in week 1 using the pool's electronic timing system. Final measurements were taken in week 4 at a sanctioned intrasquad meet.
The Results
Improvement was not uniform across the group:
- Top quartile improvement: 0.13–0.18 seconds (swimmers with the worst baselines)
- Middle quartile improvement: 0.06–0.09 seconds
- Bottom quartile improvement: 0.01–0.03 seconds (already near-elite baselines)
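The quartile breakdown above can be reproduced with a short sketch: sort swimmers by baseline (slowest first), split into quartiles, and average the improvement in each group. The function name and data below are illustrative, not from the study.

```python
def quartile_improvements(baselines, finals):
    """Mean improvement (baseline - final) per baseline quartile,
    slowest baselines first. Remainder swimmers (n % 4) are dropped
    for simplicity."""
    pairs = sorted(zip(baselines, finals), key=lambda p: p[0], reverse=True)
    q = len(pairs) // 4
    groups = [pairs[i * q:(i + 1) * q] for i in range(4)]
    return [round(sum(b - f for b, f in g) / len(g), 3) for g in groups]

# Invented example: 8 swimmers, slowest baselines improve the most
baselines = [0.90, 0.88, 0.80, 0.78, 0.72, 0.70, 0.66, 0.64]
finals    = [0.75, 0.73, 0.72, 0.70, 0.67, 0.65, 0.64, 0.62]
print(quartile_improvements(baselines, finals))  # [0.15, 0.08, 0.05, 0.02]
```

The same pattern as the study's data: the top quartile (worst baselines) shows the largest gains, the bottom quartile the smallest.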
Unexpected Finding: Consistency
Beyond average improvement, the standard deviation of reaction times dropped sharply, from 0.089 seconds in week 1 to 0.041 seconds in week 4. In practice, this means swimmers became not just faster but more predictably fast. Their good starts and their bad starts converged toward the same number.
For competition, consistency is arguably more valuable than raw speed. A swimmer who always starts at 0.66 seconds is better off than one who averages the same 0.66 but alternates between 0.58 and 0.74 seconds.
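The arithmetic behind that comparison is easy to check: the two swimmers have the same mean start time, but very different spread. A minimal sketch with invented start times:

```python
import statistics

# Hypothetical start times over ten races (invented for illustration)
consistent = [0.66] * 10        # always 0.66 s
erratic = [0.58, 0.74] * 5      # alternates 0.58 s / 0.74 s

mean_c = round(statistics.mean(consistent), 2)   # 0.66
mean_e = round(statistics.mean(erratic), 2)      # 0.66 -- same average
spread_c = round(statistics.pstdev(consistent), 3)  # 0.0
spread_e = round(statistics.pstdev(erratic), 3)     # 0.08 -- the difference
print(mean_c, mean_e, spread_c, spread_e)
```

The standard deviation, not the mean, is what separates the two, which is why the drop from 0.089 to 0.041 seconds matters.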
Coach's Commentary
"The thing that surprised me most was how quickly it became self-reinforcing. By week 3, swimmers were asking to do more reaction work. When you can show kids a number that improves week over week, they get hooked on the data."
What We'd Do Differently
- Start with individual baseline sessions rather than team measurement to reduce social anxiety around early numbers
- Add a dry-land component in weeks 1–2 before moving to pool starts
- Build in a mid-point measurement at week 2 for motivation maintenance