Monday, March 24, 2014

Closing time

You don't have to go home, but you can't stay here.

This marks the official end of Bracketball's analysis of the 2013-14 season.  In reality, it ended at 6PM EST on Selection Sunday, but I had a group of reaction posts I wanted to write about the process.  I've gotten all of them up now.  Of course, I might think of something else I want to say, but for now, I'm done with the 2013-14 season.

Sometime in the summer I'll play around with a first preseason bracketology for next season, and I'll start talking about various scheduling quirks that pop up.  But otherwise, the blog'll go dark for a while.

Sunday, March 23, 2014

Conference hierarchy, one more look

Remember the preseason when I posted my conference hierarchy?  Let me post my preseason rankings, and see how I did.

Tier 1:  Royalty
1) Big 10
2) ACC

The B1G ended up 2, but the ACC was 5th.  ACC had plenty of power at the top, and 9 of 15 teams were in the top 100.  The issue was that BC and VT bottomed out below 200.  Once again, the bottom hurts the rest of the conference.

Tier 2:  Power conferences
3) Pac-12
4) SEC
5) Big East
6) Big 12

Well, I missed on the Big 12.  #1 overall.  Only 10 teams large, everyone but TCU wound up in the top 125, which is the recipe.  Nailed the Pac-12, and the BE finished 4th.  So that leaves the SEC, which finished 7th.  And not just 7th, but 7th by a lot.  There's a clear separation between the top 6 and the SEC; they were a tier down from the other three conferences in this list.

Tier 3:  Purgatory, part 1
7) AAC

The AAC ended up 8th, about level with the SEC.  The bottom half was completely awful.

Tier 4:  Purgatory, part 2
8) MWC
9) A-10

The A-10 had a big season with 6 teams inside the RPI top 50.  Another 3 teams inside the top 100 was just as key, eliminating deadweight potential.  This got the A-10 to 6th overall.  The MWC, on the other hand, regressed badly to 10th.

Tier 5:  Don't call us mid-major
10) MVC
11) WCC

In a strange dynamic, the WCC beat the MWC, getting to 9th behind 4 top 70 teams.  They had better top-to-bottom depth than the MWC.  MVC had a really down year and still held 11th.  I think we can safely say that there is an 11-conference breakaway in college basketball when it comes to relevance.

I won't reprint the rest of the tiers, but things that stood out:
1) The MAC and CUSA were 12th and 13th, pretty clearly separating from the rest of the pack.  They're a tier unto themselves now.
2) The next grouping of conferences:  Horizon, CAA, MAAC, Summit.  Summit was the outlier because NDSU overperformed.
3) Football money does not matter for the Sun Belt.  Finished 19th.
4) The WAC way overperformed in 21st place, because of New Mexico St.
5) I had the OVC 15th entering the year.  Finished 24th.  Oops.

So with all this in mind, here are my new tiers for future years:

Tier 1:  Big 10, ACC - not tempted enough to change this, yet
Tier 2:  Big 12, Pac-12, Big East, SEC - the order within this tier changes, but not the grouping
Tier 3:  A-10, MWC, AAC - this group is clearly below Tier 2.  AAC is about to get smacked with the realignment stick again, so they belong here
Tier 4:  WCC, MVC, CUSA - I'm going to give CUSA the bump up to this tier.  It's debatable.  I think we'll see a 12-conference breakaway now
Tier 5:  MAC, MAAC, Horizon, CAA - Conference numbers 13-16 here.  Just good enough to be ignored by the selection committee every year.
Tier 6:  Sun Belt, Summit, Ivy - Ivy is carving out a niche in the middle of the tiers here.  These are conferences #17-19.
Tier 7:  OVC, Patriot, Big West, A-Sun - Conferences #20-23.  These are the ones who can realistically hope to win a game in March every year, and who won't bottom out.  The common trait?  Top-tier teams who can remain constant threats (Murray St, Belmont, Boston, UCSB, LBSU, Mercer, FGCU).
Tier 8:  Big Sky, NEC, S'land, Big South, A-East, WAC, SoCon, MEAC - I see these guys being interchangeable going forward.

Ok, one more RPI note

I always think it's fascinating to see the highest RPIs left out of the tournament.  So here they are:

33 Southern Miss
38 Toledo
49 Missouri
50 Minnesota
53 SMU
54 Florida St
57 Belmont
58 Green Bay
59 Iona
61 St Mary's

Actually a pretty good mix of high-major and mid-major here.  No trend can be discerned from this.

Worst RPIs to get an at-large bid:

56 Iowa
55 North Carolina St
51 Kansas St
48 Nebraska
47 Xavier

Again, no real outlier.  This is a pretty quiet year.  Normally when I do these, there's more volatility and clearer biases.

Friday, March 21, 2014

A note on RPI

Some final thoughts on how RPI is used.

The selection committee has done a good job of ignoring individual teams' RPI in the selection process.  However, the RPI still matters when it comes to assembling lists of records vs. RPI Top 50, Top 100, and so forth.  If a metric is not good enough to be used for an individual team, but is good enough to be used to group said teams, isn't that contradictory? 

This is why I like to look at average RPI win and average RPI loss a bit more.  This helps balance out distortions in the bucketed numbers.  Beating a team twice with an RPI of 51 is fundamentally different from beating a team twice with an RPI of 100, but average RPI win is the only stat that surfaces that difference without digging through the actual list of results.
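A toy comparison shows why the average surfaces what the bucket counts hide.  The resumes and RPI ranks below are invented for illustration:

```python
# Two invented resumes: both show zero top-50 wins and two top-100
# wins, so the committee's bucketed lists see them as identical.
# Averaging the RPI ranks of the wins separates them immediately.

def bucket_wins(win_ranks, cutoff):
    """Count wins over opponents ranked inside the cutoff."""
    return sum(1 for rank in win_ranks if rank <= cutoff)

def avg_rpi_win(win_ranks):
    """Average RPI rank of the opponents beaten."""
    return sum(win_ranks) / len(win_ranks)

team_x = [51, 51]    # beat the RPI-51 team twice
team_y = [100, 100]  # beat the RPI-100 team twice

# Identical under the bucket counts...
assert bucket_wins(team_x, 50) == bucket_wins(team_y, 50) == 0
assert bucket_wins(team_x, 100) == bucket_wins(team_y, 100) == 2
# ...but clearly different under the average.
print(avg_rpi_win(team_x), avg_rpi_win(team_y))  # 51.0 100.0
```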

Which brings me to a fundamental issue with RPI and "bad losses" and "good wins". 

Let's take 4 teams.  Let's say Team A is undefeated, 1.000 winning percentage (30-0) and team B is a good .800 team (24-6).  Team C is a bad .300 team (9-21) and Team D is a really bad .050 team (1-19).  Records are uneven, but whatever, this is an illustrative example.

According to the way RPI is calculated, the gap between Teams A and B is essentially the same as the gap between C and D.  However, the difference between beating Team A and beating Team B is big.  Beating either would be a signature win, but one is more signature than the other.

Now look at what happens with a win against Team C or D.  In either case, the public perception of the team doesn't change.  They beat a bad team.  However, the RPI sees a difference in beating the two teams, the same difference it would see between Teams A and B.

Let's say Team C has a 225 RPI and Team D has a 350 RPI (reasonable).  From public perception, the difference in wins is negligible, and a loss against C is just as harmful as a loss against D.  But according to the RPI's perception, the difference between C and D is large.

And therein lies the problem.  The public perception says any win over a team outside the top 150 is mostly useless in evaluation.  However, from the RPI formula's point of view, there's a big difference between a win over a RPI 175 team and a RPI 325 team.  This results in distorted RPIs that punish teams far too much for playing bad teams and don't reward teams enough for playing great ones.

What the RPI needs is a weighted adjustment.  There shouldn't be a big difference between playing a RPI 225 and a RPI 350 team.  We should scale down the effect really bad teams have on RPIs compared to the merely below-average.  Similarly, we should be able to scale up wins against terrific teams.  Right now teams benefit more from avoiding bad teams than scheduling good teams.  We need to emphasize scheduling great opponents, while de-emphasizing the need to purge every single cupcake from the schedule.
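Here's a minimal sketch of one such weighting.  This is my own invented compression scheme, not any official formula, and it only handles the downside half (a symmetric boost for elite opponents could be added the same way).  The floor of 150 and the 4:1 ratio are arbitrary illustrative choices:

```python
# Invented opponent-rank compression, illustrating the weighted
# adjustment argued for above. Constants are illustrative guesses.

def effective_rank(rank, floor=150):
    """Map a raw RPI rank to an 'effective' rank for scheduling credit.
    Ranks worse than the floor are compressed 4:1 toward it, so the
    very worst teams stop dragging schedules down so hard."""
    if rank > floor:
        return floor + (rank - floor) / 4
    return rank

# The gap between the RPI-225 and RPI-350 cupcakes shrinks
# from 125 raw places to about 31 effective places...
print(effective_rank(225), effective_rank(350))  # 168.75 200.0
# ...while distinctions among good teams are untouched.
print(effective_rank(30), effective_rank(60))    # 30 60
```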

This is something I hope someone takes a look at.  What happens if you replace every horrible team on SMU's schedule with, say, the RPI 225 team?  Take the 6 or so horrible teams, replace them with merely bad teams, give SMU easy wins in all of them...what happens to their SoS and RPI?  Do they make it in the tourney?  Perhaps.  And yet, SMU would have ended up with the exact same on-court results against either schedule.
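The experiment is easy to set up mechanically.  The published RPI formula weights are 0.25 for a team's own winning percentage, 0.50 for opponents' winning percentage, and 0.25 for opponents' opponents'.  The toy version below uses invented winning percentages and ignores the real formula's home/away weighting and its exclusion of games against the team itself; it just swaps the cupcakes and holds the team's own results fixed:

```python
# Toy schedule-swap: same on-court results, higher RPI, purely
# because the cupcakes were replaced with merely-bad teams.
# All winning percentages are invented; OOWP is held at a flat .500.

def rpi(wp, owp, oowp):
    # Standard RPI weights: 25% own WP, 50% opponents', 25% theirs
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

WP = 27 / 30  # the team goes 27-3 against either schedule

before = [0.700] * 24 + [0.100] * 6  # six horrible opponents
after = [0.700] * 24 + [0.350] * 6   # swapped for merely-bad ones

owp_before = sum(before) / len(before)
owp_after = sum(after) / len(after)

print(round(rpi(WP, owp_before, 0.500), 4))  # about 0.64
print(round(rpi(WP, owp_after, 0.500), 4))   # about 0.665
```

Same 27-3, same wins over the good teams, and the RPI jumps anyway just from upgrading the bottom of the schedule.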

Given the expansion of D-1 in recent years, it's worth exploring ways to minimize penalties for playing the worst of the worst.  Non-con scheduling should be about finding the best games, not avoiding the worst games.  The emphasis point needs to change.

Thursday, March 20, 2014

A note on geography

So Wichita State got a loaded regional.  Here's why.

The committee likes to put teams into the bracket in such a way that travel is minimized for the teams and their fans.  So let's look down the seed lines and see how they did that.

Villanova, as the top 2 seed, got to be placed in the east.  Michigan was next, and obviously got placed in the midwest.

On the 3 line, Duke was the top 3 seed.  Geographically, their preferred region is the midwest, not east (least travel).

On the 4 line, Louisville was the top 4 seed.  Geographically, their preferred region is the midwest.

On the 8 line, Kentucky was the top 8 seed.  Geographically, their preferred region is the midwest.

So now we see the problem.  Wichita gets the toughest 8 seed, followed by the toughest 4, followed by the toughest 3 or the 2nd toughest 2.  That's imbalanced, period.  Wichita got a tougher draw than others.  Because of geography.  Because the committee tries so hard to keep everyone close to home.

The NCAA needs to stop this.  I get the concept behind their geographical-based methods.  But a fair bracket should be of primary importance.  Geography should be secondary to a fair and balanced bracket.  The NCAA needs to revisit their policy and introduce rules that force them to create more balanced regionals.

This problem also shows up in other ways.  The AAC has 4 teams in the tournament, but 3 wound up in the same regional.  Cincy, Memphis, and UConn are all in the east regional.  Naturally, they all went there because they're eastern teams.  I'd rather see the NCAA reintroduce the rule that forces the top teams from each conference into different regionals.  It's not fair to a conference to be loaded up into a single regional.

The NCAA is almost there in terms of a pure, fair bracket.  They just need to de-emphasize geography just a little bit.

One more thing:  if you're the top team on a seed line (say, Kentucky on the 8), you get geographic priority over the other 8s.  You could therefore argue a team would rather be the first team on a seed line than the last team on the line above.  You'd rather be the first 7 seed than the last 6 seed:  the first 7 gets geographic priority, while the last 6 gets the last available spot.  That needs to be fixed.

Tuesday, March 18, 2014

NIT/CBI/CIT projection analysis

Since I'm one of the few people stupid enough to offer projections for the other 3 tournaments, it's fair to look and see how I did.


Here's what I had:
Last 4 in:  Richmond, Indiana, Marquette, Maryland
Last 4 out:  LSU, St Mary's, Washington, Middle Tennessee
Off the board:  Indiana St, San Francisco, UTEP, Boise St, Ole Miss, Ohio, Cleveland St, UNLV

So I missed 4 teams in total - the entirety of my last 4 in.  I missed LSU, St Mary's, Indiana St, and San Francisco on the other end.

I can't complain too much about LSU - they were the one power conference team sitting at .500 in their conference, so I can see the logic.  I actually like, once again, the NIT leaning towards mid-majors and taking top teams from the MVC and WCC.  I didn't think they'd do it this year, especially over an Indiana team with some signature wins.  Richmond, I think, probably should've made it.

San Francisco and Indiana St were 2nd in their conferences, but their overall resume paled in comparison to the rest on the bubble.  I'm ok with them in the NIT, but not thrilled.

As far as seeding:
- I had Toledo as a 4; the NIT had them as a 6 and just in.  Toledo was never getting left out with their RPI, but I found it interesting how badly they underseeded them.
- Georgetown and Green Bay - I had both as 2 seeds, the NCAA had them last 4 out...and 4 seeds, both, from the NIT.  Huh?
- On the flip side, I had Georgia as a 4 and the NIT had them as a 2.  Guess conference record really matters here.

Among the 28 teams I projected in, I got 10 seeds right and another 10 within 1 line.  That actually sucks, and proves the NIT selection committee is senile and/or unpredictable.

My last projections from March 10 had these CBI teams:
Buffalo, LaSalle, Wake Forest, Vanderbilt
Houston, Texas A&M, Northern Iowa, Cleveland St
Oregon St, UNLV, New Mexico St, Tulsa
Miami, Ole Miss, Manhattan, Towson

Only 2 of those teams wound up in the CBI.  Oregon St and Texas A&M.  LOL.

Now, to be fair, since those projections, 3 of the teams (NMSU, Tulsa, Manhattan) played their way into a slightly more important tournament.  Further, Cleveland St and Towson went to the CIT instead.

That leaves 9 teams (Buffalo, LaSalle, Wake, Vandy, Houston, UNI, UNLV, Miami, Ole Miss).  We already know a few teams publicly turned down bids, and I'm willing to bet all 9 actually turned down bids.

My last CIT projections from March 10...I won't reprint them all for the sake of brevity.  3 of the 32 teams I had (Louisiana-Lafayette, American, Milwaukee) wound up in the better tournament.  6 of the 32 teams (Wyoming, Illinois St, South Dakota St, Fresno St, UTEP, Hampton) I projected to the CIT wound up in the CBI.  Amusing sidebar:  I projected Fresno at UTEP in the CIT, and the matchup happened in the CBI instead.  So these 9 teams really don't count as misses.

Of the 23 remaining teams, I got 13 right in my projections.  I consider it highly likely many of those 10 turned down bids.

So the general lesson is that of the 45 teams I projected to be worthy of the CBI/CIT, almost all the power conference teams turned it down, and about half of the mid-majors either turned down bids or I mis-projected them.

My takeaway:  CBI projections are pointless, or if you do them, just take out most of the power conference teams.  CIT projections are actually do-able, with perhaps a bit more research on which schools typically turn down bids.

Monday, March 17, 2014

Final analysis

Let's get analysis from a man who basically had average performances this year.  I'm pretty much at the mean for all brackets, in just about every possible way to judge.  It's too bad I can't retroactively put the last 6 years on this blog; I think this was my worst year in a while.  Where did it all go wrong?

1) The 1 line.  I'm surprised I'm the only one who went out on a limb and took Wisconsin.  Clearly, the committee was full of crap when it said conference record doesn't matter.  They say conference affiliation is inconsequential, yet they used arbitrary conference championships to justify Virginia, along with Michigan and Villanova, as contenders on the 1 line.

Michigan vs. Wisconsin - UM won the Big 10 by 3 games, on an imbalanced schedule.  They split their 2 games this season.  Wisconsin had the better SoS (#2 overall, #10 non-con; Michigan had a #83 non-con), better average win (95 vs. 109), and more top 100 wins.  Michigan had more top 50 wins in its favor.  Michigan had the worst loss (N-Charlotte).  It's very close, but Wisconsin has the merit.

Virginia vs. Wisconsin - Wisky has the SoS checkmark, although UVA's (#28 overall, #35 non-con) is close.  A 130 average win is substantially worse, though.  Only 4-4 against the top 50, 6 wins over tourney teams against Wisky's 8.  Virginia can toe the line with Wisky in some categories, but the only thing Virginia has that Wisconsin doesn't is dual ACC titles against a badly imbalanced ACC schedule and a Pitt-aided conference tourney run.  Wisky should get the checkmark here.  AND WISCONSIN BEAT VIRGINIA ON THE ROAD.

Villanova vs. Wisconsin - Nova does get the checkmark with bad loss avoidance, but again, an average win of 137 against Wisky's 95 looks pale.  SoS 34, non-con SoS 56 are solid but don't compare to Wisky.  Villanova has one win (N-Kansas) over a single digit seed.  Remember, the N-Iowa win evaporated.  I can't make a case here.

Iowa St vs. Wisconsin - ISU's SoS is 11, average win of 107, 9 top 50 wins, 15 top 100 wins.  All compare well.  This might be the one team with the best case to overtake Wisky.

So there.  That's my logic.  If you don't like it, deal with it.

2) The selection committee hates the American.  I was too high on Louisville by 1 line, Cincy by 1 line, UConn by 2 lines.  I thought they would apply the eye test a bit harder in each case.  Mostly, I'm ok with seeding them down, but I give credit to the committee for following through.

3) The A-10 got overvalued a bit.  They got carried away with the computers, in the same vein that the Mountain West did last year.  St Louis was clearly a case where they let the computer numbers guide them.  If you look, they dominated head-to-head results against the top 6 of the A-10.  That boosted them significantly.  They probably should have looked harder at the decent but not great non-con SoS and results.  They let the A-10 cannibalize itself, and rewarded them for it.  This is kind of true across the board.  However, if you're going to over-reward everyone, at least UMass and their 7 top 50 wins and 13 top 100 wins got a 6 seed.  That's fair.  And 13 road wins too!

4) North Carolina St is an awful selection.  The signature win is Syracuse on a neutral, plus @Pitt and @Tennessee.  Ok, fair.  Road/neutral wins.  But they're also 3-9 against the top 50, 6-11 against the top 100, and had a marginal non-con SoS (109).  Come on.  Other bubble teams had signature wins equal to N-Syracuse (Green Bay had Virginia, Cal had Arizona, SMU had Cincy, Nebraska had Wisky, etc.), and were stronger in other aspects.

5) Green Bay probably deserved another look from me, although I wouldn't have put them in over BYU.  Did you realize they had the #52 non-con, putting Wisky, Virginia, and the Great Alaska Shootout on there?  At least they efforted.

6) SMU, schedule better.

7) The committee has a geography fetish.  I'll save that for the next post.

8) The committee just seems to randomly put together the bottom fourth of the bracket.  SFA on the 12 line?  Western Michigan on the 14 line?  WMU has 8 top 100 wins, you know.  SFA played 1 top 100 team.  At some point you have to punish EVERY team that plays a non-con in the 300s.  Next year, I have to remind myself to seed those lines based on RPI, because trying to analyze them actually took me further from the committee's results and killed my score.