- 2016 Update: Weighted adjusted times are still used to provide an additional data point for all teams, but rankings now show unadjusted race times so users can see realistic times for each racecourse.
The purpose of this methodology is to allow teams to be ranked across different venues. Because of the points system involved, teams that are consistent across multiple regattas will rank higher than teams that race few times a season.
As the MOFOS have pointed out in their ranking methodology for regattas in Eastern Canada, many factors affect a team’s time from one regatta to the next: wind conditions, currents, water depth, slight differences in course length, or the brand of boats used. In their ranking model, the MOFOS use teams in common to determine how times at two different regattas compare to each other. Applying their methodology in the US raised some problems, including regattas where too few teams overlap, and teams that change their names between regattas and so cannot be automatically linked. How the methodologies differ is covered in more detail below.
This ranking uses average race times of two regattas to adjust individual race times for a one-to-one comparison, using Mercer as the venue of comparison. For example, if the average time at Mercer was 2:41 and at Philadelphia International it was 2:51, we adjust all Philly times by a factor of 0.94 (Mercer average / Philly average). We can then compare Mercer and adjusted Philly times directly. Of course some regattas are more competitive than others, and that will affect the regatta’s average time. This is taken into account by assigning weights to all regattas before calculating the final adjusted rankings.
First we choose a base regatta (Mercer, NJ). An average is then calculated for each team at Mercer. For example Penn Dragon Boat averaged 2:16.5 at Mercer (raced twice – 2:16, 2:17).
The same is done for a target regatta, for example Philadelphia International (Philly). Here Penn Dragon Boat averaged 2:11.3. We can’t yet use 2:11.3 in their ranking because we need to adjust their Philly time to make it comparable to their Mercer time.
We started off with a straightforward adjustment: average the times of all teams that attended Mercer, average the times of all teams that attended Philadelphia International, and calculate the implied adjustment like so:
Average Mercer time = 2:37.4
Average Philly time = 2:46
Adjustment factor ( Mercer / Philly ) = 0.9483
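As a sketch, the straight-average adjustment can be computed like this (the `to_seconds` helper is ours; the averages come from the example above, and the text’s 0.9483 presumably comes from unrounded averages):

```python
def to_seconds(t: str) -> float:
    """Convert an 'm:ss.s' race time to seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + float(seconds)

mercer_avg = to_seconds("2:37.4")   # 157.4 s
philly_avg = to_seconds("2:46")     # 166.0 s

# Factor that converts Philly times onto the Mercer course.
adjustment = mercer_avg / philly_avg
print(round(adjustment, 4))  # ~0.9482 from these rounded averages
```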
It’s been pointed out that one of the issues with using the average time of all teams that raced at a regatta is the regatta’s spread of talent. Race averages tend to be skewed by the slower teams, which are typically much farther from the regatta mean than the fast teams are. While that’s usually not a problem, it can be an issue when comparing regattas with different distributions of talent.
We can see that the team spread at the Long Beach Spring Race is more even than at Mercer. If we were to use the total race average for Long Beach Spring, we’d be over-adjusting.
To minimize the effect caused by that possible difference, we take the average time of the top 25% of teams who attended and calculate the adjustment like so:
Average of top 25% Mercer time = 2:18.1
Average of top 25% Philly time = 2:25.4
Adjustment factor ( Mercer / Philly ) = 0.9499
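A minimal sketch of the top-25% version (`top_quartile_mean` and the four-team demo field are illustrative, not real data; the text gives the resulting top-25% averages directly):

```python
def to_seconds(t: str) -> float:
    """Convert an 'm:ss.s' race time to seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + float(seconds)

def top_quartile_mean(times_sec):
    """Mean time of the fastest 25% of a field (at least one team)."""
    fastest = sorted(times_sec)
    k = max(1, len(fastest) // 4)
    return sum(fastest[:k]) / k

# Illustrative four-team field: the fastest quarter is just the fastest team.
demo = top_quartile_mean([131.3, 138.0, 145.0, 150.2])  # 131.3

mercer_top = to_seconds("2:18.1")   # 138.1 s
philly_top = to_seconds("2:25.4")   # 145.4 s
factor = mercer_top / philly_top    # ~0.9498 (the text reports 0.9499)
```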
Where before Penn Dragon Boat’s adjusted Philly time would have been 2:11.3 × 0.9483 = 2:04.5, it is now 2:11.3 × 0.9499 = 2:04.8.
We perform the same calculation for all teams that attended Philadelphia International.
This adjustment is performed for all target regattas. For teams that attended more than one venue we average all their available times to create a preliminary ranking time. Since all regattas are adjusted to Mercer, we can think of this ranking time as how that team would perform at Mercer.
When using a simple average to calculate a team’s rank time, a less competitive regatta will affect their score the same as a more competitive regatta. Less competitive regattas might also have a slower average time, which would over-adjust the times in that regatta. Every regatta therefore is assigned a weight, so that a less competitive regatta will affect a team’s ranking by less than a competitive regatta.
The weight is calculated in the same way as in the MOFOS methodology with one caveat – weights are not clamped:
“The attendance of any of the top 50 teams award competitive points to a regatta. Team #1 gives 50^2 points. Team #2 gives 49^2 points, Team #3 gives 48^2 points, etc. The sum of competitive points for a regatta is then divided by the [sum of competitive points] for [Mercer]. This number is shown next to the regatta name in the rankings.”
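The quoted rule can be sketched as follows; `competitive_points` is our name for it, and the attendance lists are hypothetical:

```python
def competitive_points(attending_ranks):
    """Sum of competitive points from top-50 teams attending a regatta.

    attending_ranks: overall ranks (1..50) of the top-50 teams present.
    Team #1 contributes 50**2 points, team #2 contributes 49**2, and so on.
    """
    return sum((51 - r) ** 2 for r in attending_ranks if 1 <= r <= 50)

# Hypothetical attendance for illustration.
mercer_ranks = list(range(1, 21))   # suppose teams ranked 1-20 attended Mercer
dc_ranks = [3, 8, 12, 25, 40]       # a smaller DC field

# A regatta's weight is its points divided by Mercer's points (not clamped).
weight_dc = competitive_points(dc_ranks) / competitive_points(mercer_ranks)
```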
Instead of using a straight average, a team that has attended Mercer which has a weight of 1.00 and Washington DC which has a weight of 0.62 will have a final ranking time of:
(1/1.62*Mercer_time) + (.62/1.62*WashDC_time)
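That weighted average generalizes to any number of regattas; a sketch with hypothetical adjusted times (in seconds):

```python
def weighted_ranking_time(times_and_weights):
    """Weighted average of a team's adjusted times (seconds), using regatta weights."""
    total_weight = sum(w for _, w in times_and_weights)
    return sum(t * w for t, w in times_and_weights) / total_weight

# Hypothetical adjusted times: 2:16.0 at Mercer (weight 1.00), 2:20.0 at DC (weight 0.62).
rank_time = weighted_ranking_time([(136.0, 1.00), (140.0, 0.62)])
print(round(rank_time, 1))  # 137.5 s, i.e. about 2:17.5
```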
Why use race average as the conversion factor and not a regression-dictated adjustment?
The objective of a ranking system is to be able to compare teams across different venues and racing conditions. To do that, you start off using one regatta as your ‘base’, to which you compare all other venues. We next want to take all the teams at another regatta and say “what would have happened if on that very same day they competed at our ‘base’ venue”, thus eliminating any systemic differences between the two regattas.
While we began by using teams that competed at both the “base” and target regattas to create a regression model, problems arose. A cubic polynomial regression was the line of best fit across multiple “base” and target comparisons, but it could produce some wild results. The MOFOS use a quadratic polynomial regression in their model, but with few common teams between many regattas that method could determine that a 3:01.5 team at Boston (the average 2014 Boston time) was a 3:54 team at Mercer, which is too large a jump. Linear regressions would also occasionally slow down fast teams while speeding up slow teams, which is counterintuitive if you consider adjustments from the angle of a team’s fitness (teams that fatigue less quickly should, over distance, slow down less than teams that fatigue more quickly).
Using a single adjustment factor for each venue effectively reduces systemic factors (e.g. the course length differs slightly by venue, some teams get slow lanes) but doesn’t adjust for non-systemic factors like having a better steersperson or stacking your boat with ringers. It does allow for comparisons between regattas that have few or no overlapping teams, and it preserves the order of teams within a regatta (i.e. the 25th and 26th fastest teams at a regatta won’t switch ranks after their times are adjusted, unless they participated in other regattas that indicate they should switch ranks).
Despite the time adjustments and regatta weighting we still only have an approximation for comparing teams across races. Teams should be rewarded not only for a good race time, but also for placing highly in a regatta even if difficult race conditions throw off the time adjustment (looking at you Flushing, NY). To get around those issues a points system has been introduced.
For each race attended, a team receives 0 <= x <= 1 points for their performance. So for a 30-team regatta, the fastest time gets 30/30 points, the second fastest gets 29/30 points, and so on. A team’s overall position by weighted adjusted time (WAT) also receives 0 <= x <= 1 points. This ensures every team has at least two data points we can use. Final rankings are calculated via cumulative points.
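The per-race points rule can be sketched like so (the function name and the second regatta’s field size are ours, for illustration):

```python
def race_points(position, field_size):
    """Points for finishing at `position` (1 = fastest) in a field of `field_size` teams."""
    return (field_size - position + 1) / field_size

# 30-team regatta from the example above.
print(race_points(1, 30))  # fastest team: 1.0
print(race_points(2, 30))  # second fastest: 29/30

# A season total is cumulative: per-race points plus points for overall WAT position.
season_total = race_points(2, 30) + race_points(5, 48)  # hypothetical season
```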
This methodology produces a reasonable method of ranking teams across venues and over time.
TL;DR: you should aim to place highly at regattas with lots of teams, and race more than once a season.
|Data and information used for these dragon boat rankings are collected from the USDBF, ERDBA, PDBA, SRDBA, and ADBA websites, plus regatta websites from around the United States|