Friday, August 7, 2015

Pre-Season OR 6A Girls Rankings

Continuing the look at my Oregon Rankings, here are the 6A Girls Pre-Season Rankings. Unlike with Washington, I can’t include more than one or two incoming freshmen because it’s much less clear where those athletes are headed, and for the girls rankings that is a big deal: freshmen making an impact on a girls varsity team is quite common, unlike on the boys side, where strong teams generally don’t rely on freshmen to score points.

With that limitation in mind, take these lists with an even bigger grain of salt than usual.

 

6A Girls Top 10 Teams:

1. Jesuit (#4 in 2014 with 112 points)
2. Sunset (#2 in 2014 with 66 points)
3. St. Mary’s Academy (#3 in 2014 with 98 points)
4. South Eugene (#1 in 2014 with 66 points)
5. Grant (#5 in 2014 with 169 points)
6. Aloha (#6 in 2014 with 184 points)
7. Sheldon (#7 in 2014 with 248 points)
8. Clackamas (#9 in 2014 with 300 points)
9. Southridge (DNQ in 2014)
10. Franklin (#15 in 2014 with 340 points)

 

6A Girls Top 15 Individuals:

1. Ella Donaghu, Grant
2. Nicole Griffiths, Sunset
3. Brooke Chuhlantseff, West Salem
4. Lacey Conner, St. Mary’s Academy
5. Audrey Huelskamp, Sunset
6. Alyssa Foote, Aloha
7. Mahala Norris, Roseburg
8. Kennedy Allen, David Douglas
9. Kelly Makin, Sunset
10. Aimee Piercy, Jesuit
11. Kim Sanders, Jesuit
12. Sydney Brieher, Wilson
13. Rennie Kendrick, Grant
14. Nicole McCullough, Jesuit
15. Victoria Haynes, South Medford

 

8 comments:

  1. Agree about the grain of salt. Not sure I get the logic on moving Jesuit up to the top.

    Jesuit loses their top runner and 2 of their top 5 from a team that placed 4th at state; their fastest returning runner ran 19:41 and was 26th at state.

    South Eugene loses their top 2 from a state championship team, but returns 3 runners who placed ahead of Jesuit's fastest returning runner at state (and a 4th who was only a second behind).

    Sunset returns their top 6 runners from state, where they placed 2nd (tie-breaker). 3 of those were in the top 11.

    True, 9th graders will have an impact this year. There are at least 2-3 who will place in the top 15. But if you aren't factoring that in, Jesuit probably doesn't belong above South and Sunset. Unless, of course, you are using the athletic.net hypothetical meet? In which case you need to adjust for the fact that some Jesuit PRs are from a time trial on the track; actual race performances are a better predictor.

    1. tl;dr: summary at the end, but track marks and program history play a role in my pre-season rankings

      I'm not using the Athletic.net hypothetical meet. XC courses vary, and some meets listed as 5k on athletic.net probably aren't actually 5k (e.g. Sandelie seems like a bit too fast a course to be a full 5k). My pre-season rankings, as always, are based primarily on the best performance at 1500m+ on the track OR on a very few select XC meets (State, NXR and NXN in the past, though for a few reasons this year it's only State and their State qualifying meet).

      Sunset vs. Jesuit is very much like the Tahoma vs. North Central situation for the Washington Boys All-Class rankings: it's an issue of teams being similar enough, but one having a MUCH stronger history of success over the last 9 years. If we were only looking at returners, and disregarding any program history, Sunset would be ahead of Jesuit. The 22-team 6A scoring has Sunset over Jesuit 57-86. However, Jesuit has been the best program by a large margin in Oregon over the last 9 years: as I posted a few days ago, their average All-State Power Merge score over that time period has been 99.33 points, versus Sunset's 220.22 or South Eugene's 172.67. That is a HUGE difference. Granted, the last few years haven't been Jesuit's best, but that history remains (they still have the same coach).

      As for returners, it's true that Jesuit was certainly behind at State last year compared to Sunset and South Eugene's returners. It's also true that Sunset had clearly the better returning group at the Metro League Championships last year. However, Jesuit seemed to run MUCH better in track than they did in XC (10:19.06, 10:23.20, 10:27.42, 4:51.88 and 10:50.11). Those marks are clearly better than their 20:00ish times at State or 19:00-19:35 times at the Metro League Champs. They are also much stronger than either South Eugene or Sunset's marks this spring (SE's top two runners ran 10:50ish and 4:50's; Sunset had a strong quartet, but their #5 returner ran 11:13/5:04).

      So, the short answer to your question: Jesuit's stronger history, plus looking very strong in track, is why they supplanted Sunset as #1. Going into the season, barring any advantage with newcomers, South Eugene has some work to do to catch either program.

    2. Thanks for the detailed reply. Very interesting to read the rationale. Curious whether you have back-tested the model? I am skeptical of the "strong history" component, especially if it stretches back 9 years. It would be useful if we had no data about returning runners, but in this case we have pretty good data on returners, and that is much more relevant than the performance of a class of runners who are now in college (or potentially grad school).

      To take Sunset and Jesuit as an example, I think the two factors you mention, improvement in track times + program history, have been true for Jesuit the last few years as well? And this has likely resulted in projecting them higher than they actually finished? Might be fun to look at a projection without history factored in vs. one with history, then back-test in the various classifications. Of course, as you mention, 9th graders are likely to play a role, and that is difficult to capture in any model. Thanks for putting together the rankings, very fun to look through!

    3. I haven't back-tested the model. However, I have tried to include it (on the national scale) whenever I could in the past, and I don't think it necessarily steered me wrong at that level.

      Keep in mind that, particularly on the girls side, Program History captures, at some level, factors like how often a team has had impact newcomers, as well as a program's track record of helping runners improve from year to year. Because it looks at a substantial history, it isn't about how just one group of athletes elevated the program: to be consistently good, you need both a constant influx of talent and a proven ability to develop that talent. It is also by NO means the biggest factor in the rankings. The weighting is 3x Power Merge score, 2x Program History, 1x adjusted Mass Merge score, 1x Score vs. Historical All-State Field. In other words, about 70% of a team's ranking is based on returners and about 30% on program history.
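The 3x/2x/1x/1x weighting just described can be sketched as a simple weighted average. This is a hypothetical reconstruction, not the author's actual code; the component names are placeholders, and lower scores are better, as in XC scoring:

```python
def composite_score(power_merge, program_history, mass_merge, historical_field,
                    weights=(3, 2, 1, 1)):
    """Weighted average of the four ranking components (lower is better)."""
    components = (power_merge, program_history, mass_merge, historical_field)
    return sum(w * c for w, c in zip(weights, components)) / sum(weights)
```

With these weights, Program History carries 2/7 (about 29%) of the composite, and the three returner-based components carry the remaining 5/7 (about 71%), which matches the "about 70% returners, about 30% history" split described above.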

      As for track vs. XC results, yes, you are right that there are sometimes programs that are better at developing track kids than XC kids, and some kids are better at track than XC. However, both sports are similar enough that success in one will usually carry over to success in the other. Just because the Jesuit kids didn't run well at the Metro League XC Championships or State XC Championships, compared to their track times this spring, doesn't mean they WON'T run well at those same meets this fall.

      As far as Jesuit's track results in 2015 vs. 2014 or 2013, here is a look at their non-senior 3k times:

      2015: 10:19.06, 10:23.20, 10:27.42, 10:50.11, 10:54.92
      2014: 10:16.84, 10:46.06, 10:54.25, 10:56.22, 10:58.98
      2013: 10:22.64, 10:39.38, 10:43.88, 10:44.99, 11:02.70

      As you can see, the 2015 results at 3k suggest this year will be much stronger for Jesuit than the past two: three returners under 10:30, whereas each of the previous years had only one; five returners on par with 2014's #3; and while 2013 had a stronger #4, its #5 was worse by a larger margin.

    4. Addendum: I just checked how much extra weight I would have to put on the factor most favoring Sunset (the 22-team power merge score) for them to pass Jesuit for #1. You would have to weight the Power Merge at 7x for Sunset to sneak ahead (87.4 to 88.7), and at no weighting would Sunset be significantly ahead of Jesuit.

      In short: Going into the season, I would say Jesuit vs. Sunset should be the team battle to watch at the 6A Girls meet, unless there is an unusual difference between incoming talent, and at this time I would give Jesuit the slight edge.

    5. Thanks for the detailed reply. I hesitate to dwell on the narrow issue of Jesuit. They are a great program with a strong history, and they appear to have many talented runners. But I think it is a useful case to illustrate a larger point, which is that the model may be a bit skewed toward the past performance of programs vs. the expected performance of individuals. I, for one, would be curious whether projected results would more closely match actual results if the program history component were eliminated or minimized (30% is pretty large). Hard to know without looking back at prior years.

      It is true that the 2015 Jesuit track times are stronger than Jesuit's 2013-14 times, but the relevant comparisons are Sunset, South Eugene, St. Mary's, et al. in 2015. Making that comparison, and using track times to update XC times, seems perfectly legitimate and will yield a projection based on the most relevant recent history. This would be a projection of expected individual performances of returning runners, from which one can produce a projection of expected team scores. If I understand your earlier comment, in that test Sunset would defeat Jesuit 57-86, but adding the program history would put Jesuit ahead.

      So the question is: does adding a 9-year history variable improve the predictive power of the model? I suspect not, but really the only way to know is to test it against past results (run the current model against ones that weight history at 0, 10, and 20% and see which is the best fit).
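The weight sweep proposed here is easy to sketch. Below is a hypothetical back-test harness, assuming each past team is summarized by its four component scores plus its actual finishing place (the function name, data shape, and example usage are all invented for illustration):

```python
def rank_error(history_weight, teams):
    """teams: list of (power, history, mass, field, actual_place) tuples.
    Rank teams by the 3x/history/1x/1x weighted composite (lower is better)
    and return the mean absolute error between predicted and actual place."""
    total_weight = 3 + history_weight + 1 + 1
    scored = sorted(
        ((3 * power + history_weight * hist + mass + field) / total_weight, actual)
        for power, hist, mass, field, actual in teams
    )
    # predicted place is the team's position in the sorted composite order
    return sum(abs(pred - actual)
               for pred, (_, actual) in enumerate(scored, start=1)) / len(scored)
```

One would then run something like `min((0, 1, 2), key=lambda w: rank_error(w, past_teams))` over several past seasons to see which history weighting best fits the actual results.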

      I get what you are saying about history as a proxy for newcomers + improvement. But it seems likely this would have led to less accurate predictions the last several years on the girls side. Might be better to just say those are things we can't know. Or maybe publish two versions of the projections? More to talk about then :)

      With a 9 year history variable, we are likely to see the same thing happen with Central on the boys side in OR. The class that just graduated (one of the strongest CC classes ever) will influence this model's projections for the next 6-7 years. No doubt, this is one of the strongest programs and coaches in the state, and they still have great runners. But they could overperform relative to reasonable expectations for the next 5-6 years, and still 'underperform' relative to the model...

      Thanks again for the replies and for the projections. Should be a fun year to watch.

    6. I think it helps when I include it in my national rankings, and I think it would on the regional level as well. When dealing with a state as small as Oregon, you might be right that it doesn't help as much - it might have been better if I narrowed the analysis to the top 10-15 teams instead of 22, which would minimize the differences. However, I think it needs to be factored in to some degree, because it does help factor in things I can't really include otherwise.

      I compared Jesuit's track times to Sunset's et al. in my first response. I mentioned Jesuit's times vs. previous years because it is relevant to how Jesuit tended to lack success in XC the last few years while still having good track times. I looked back even further, and IMO this year's returning group is better than any team Jesuit has had since their last state champion team. The point being: Jesuit has stronger returning track times than other programs, and stronger returners this year than they've had in a long time. That’s definitely not something that should be forgotten when comparing all these runners going into this fall.

      But beyond that, and returning to the point about program history: would you not agree that Jesuit has been catching up to Sunset since mid-season last year? Sunset has had the best three athletes (Griffiths, A. Huelskamp, Makin) that entire time, but Sunset's I. Huelskamp has gone from beating all the Jesuit runners to being mid-pack at best. Same with Corkran.

      Scorewise, if you treated track like an XC race and compared the times across the different distances, I would say the last 8 meetings (sans State track, where Sunset only had 3 athletes and Jesuit 4) went something like this: 17-41, 16-43, 21-36, 18-39, 26-29, 21-34, 29-26, 26-29.

      Point being: Jesuit's athletes seem to have improved more than Sunset's over the last year, Jesuit found some promising young newcomers during track, and we still don't know which freshmen will show up and have an impact in the next couple of months. Those are all edges that looking only at returning runners does not take into account, but that could play a role in determining which team ends up being better this November. If that has been a trend in the past, it is something that factoring in program history accounts for.
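Treating a merged set of performances as a head-to-head race, as in the hypothetical scores above, amounts to standard XC dual-meet scoring: each team's top 5 finishers score the sum of their places, with at most 7 runners per team counting toward places. A minimal sketch (the input format and team labels are assumed, not taken from the post):

```python
from collections import defaultdict

def dual_meet_scores(results, scorers=5, max_counted=7):
    """results: iterable of (time_seconds, team) pairs for two teams.
    Returns {team: score}, where score is the sum of the finishing places
    of the team's top `scorers` runners; runners past each team's first
    `max_counted` are dropped before places are assigned. Lower wins."""
    kept, counts = [], defaultdict(int)
    for time, team in sorted(results):          # finish order by time
        if counts[team] < max_counted:
            counts[team] += 1
            kept.append(team)
    places = defaultdict(list)
    for place, team in enumerate(kept, start=1):
        places[team].append(place)
    return {team: sum(p[:scorers]) for team, p in places.items()}
```

For example, if one team's five runners all out-run the other's, the result is the classic 15-40 shutout.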

      The program history data goes back 9 years because that was when the OSAA added the extra classifications. Keep in mind, this is a long-range model, which means any single class will have less influence than the entirety of the other classes combined. There are pros and cons to longer and shorter windows, but keeping the window to years with the same general qualifying structure (four races across six classifications, with no years of three races and four classes) is probably most important.

      tl;dr - How strong a program is can influence the success of its teams in the future. That factor may be more accurate on larger scales (bigger states, the regional level, or the national level) because of greater differences between programs, but the idea that some programs have done a better job of things like landing impact newcomers or developing their athletes is still very relevant.

      Jesuit is not ranked #1 only because they have a great history as a program, nor because they had a great track season. They are ranked #1 because their athletes have run well enough to be at least the #2 team going into the season, and the history of their program suggests they might be even better due to improvement and newcomers. If it were Sunset that had won 4 titles in 9 years and Jesuit that had won 1, the pendulum would have swung the other way, but that isn't the case.

      I agree, it should be a fun year to watch.

  2. Good point, the program history variable may be more helpful at some levels than others, and in more populous states. I do understand the rationale; I'm just skeptical that it is helpful in all cases.

    I agree that the Jesuit runners improved a lot, though not enough to provide evidence of ascending to #1. It is very tough to beat a team whose top 3 finish ahead of your first runner. There would need to be a larger gap between each team's 4-5 than seems possible in this case. Based on your individual rankings (Sunset 2-5-9, Jesuit 10-11-14), the math gets tough for Jesuit. Of course, they could have 2-3 incoming 9th graders who scramble things (as could South Eugene, St. Mary's, or Grant); I just think trying to include that sort of variable in a model ends up being a guessing game.

    In any case, it should be a good battle on the girls side. Some very strong runners this year.
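The claim that it is very tough to beat a team whose top 3 finish ahead of your first runner can actually be checked exhaustively for the two-team case: under standard 5-to-score, 7-to-displace rules, a 1-2-3 sweep guarantees a dual-meet win (a multi-team championship meet like State is looser). A brute-force sketch of that check:

```python
from itertools import combinations

def top3_sweep_always_wins(per_team=7, scorers=5):
    """If team A takes places 1, 2 and 3 in a two-team race, enumerate
    every way team B can fill the remaining places and check whether A
    always ends up with the lower (winning) score."""
    all_places = set(range(1, 2 * per_team + 1))
    open_places = sorted(all_places - {1, 2, 3})
    for b_places in combinations(open_places, per_team):
        a_places = sorted(all_places - set(b_places))   # includes 1, 2, 3
        if sum(a_places[:scorers]) >= sum(sorted(b_places)[:scorers]):
            return False                                # A tied or lost
    return True
```

The sweeping team's worst case is 29-30 (places 1-2-3 plus 11-12 against 4 through 8), so even then the sweep wins, which is why, without a top-3 edge, the 4-5 gap matters so much.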
