accepted

ShotMap marker size scales by area, not linear radius

Marker radius is computed from xG via `r = √(minArea + t × (maxArea − minArea))`, where `t` is the normalised xG, rather than linearly. A 0.5 xG shot renders at roughly 71% of the radius of a 1.0 xG shot, not 50%, which preserves perceptual equivalence: human vision reads circle area, not radius.

ShotMap · visual-encoding · default-behaviour · data-integrity

Context

Early ShotMap implementations scaled radius linearly against xG (r = 0.6 + xg × 2.2). That violates the Tufte convention: human vision reads area, not radius, so a 0.5 xG circle carried only about 37% of a 1.0 xG circle's area rather than the 50% the encoding implied. Consumers comparing shots across a map were implicitly mis-reading the encoding, especially in editorial contexts where xG is the primary claim.
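To make the distortion concrete, the legacy formula can be evaluated at 0.5 and 1.0 xG (a purely illustrative sketch; the function name is not part of the library):

```python
# Legacy linear-radius encoding from early ShotMap builds: r = 0.6 + xg * 2.2.
def linear_radius(xg: float) -> float:
    return 0.6 + xg * 2.2

half, full = linear_radius(0.5), linear_radius(1.0)  # 1.7 and 2.8
# Perceived size tracks area (~r^2): the 0.5 xG marker occupies
# (1.7 / 2.8)^2, about 37% of the 1.0 xG marker's area, not the 50%
# a reader would infer from the xG values themselves.
area_ratio = (half / full) ** 2
```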

Decision

Radius is derived from a √-transformed interpolant between preset bounds: r = √(minArea + normalisedXg × (maxArea − minArea)). Opta preset uses a 0.7–2.1 radius band; StatsBomb uses 0.9–2.5. Legend samples and scale-bar entries use the same transform as the plot markers.
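A minimal sketch of the transform in Python (names are assumed, not the library's API; the preset radius bands above are squared into area bounds so that interpolation happens in area space):

```python
import math

# Assumed preset radius bands from the decision above.
PRESETS = {
    "opta": (0.7, 2.1),
    "statsbomb": (0.9, 2.5),
}

def marker_radius(xg: float, preset: str = "opta") -> float:
    """Map a normalised xG value (0..1) to a marker radius.

    Area, not radius, is interpolated linearly, so perceived size
    (circle area) tracks xG.
    """
    r_min, r_max = PRESETS[preset]
    min_area, max_area = r_min ** 2, r_max ** 2
    t = max(0.0, min(1.0, xg))  # clamp to the encoding's domain
    return math.sqrt(min_area + t * (max_area - min_area))
```

Because legend samples use the same function, a legend entry labelled "0.5 xG" lands exactly at the area midpoint of the band, matching the plot markers.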

Consequences

  • Golden SVG fixtures regenerated when the transform changed; diff is visible on every low-xG shot.
  • Consumers who want a different encoding pin `markers.size` to a constant or supply their own size callback; the default transform still applies when neither is set.
  • Null / missing xG falls back to the neutral mid-range radius rather than collapsing to minArea, so sparse data still renders markers at a readable size.
  • Presets own their bounds; consumers combining Opta and StatsBomb shots in one chart must pick one preset or override `markers.size` manually.
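The fallback and override rules above can be sketched as follows (hypothetical names; `size` stands in for a consumer-supplied `markers.size` constant or callback, and the Opta band is assumed from the decision):

```python
import math

OPTA_BAND = (0.7, 2.1)  # assumed Opta radius band from the decision

def safe_radius(xg, band=OPTA_BAND, size=None):
    """Resolve a marker radius, illustrating the fallback rules.

    `size` models a consumer override: a constant or a callable
    taking xG. Null/missing xG falls back to the neutral mid-range
    radius (area midpoint) instead of collapsing to the minimum.
    """
    if size is not None:
        return size(xg) if callable(size) else size
    r_min, r_max = band
    min_area, max_area = r_min ** 2, r_max ** 2
    t = 0.5 if xg is None else max(0.0, min(1.0, xg))
    return math.sqrt(min_area + t * (max_area - min_area))
```

Falling back to the area midpoint, rather than `minArea`, keeps sparse or partially-tagged data legible without implying a low-quality chance.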