Illinois #23 in 2/3 AP Poll

Status
Not open for further replies.
#54      
Beating OSU on national TV, in THE prime time slot of Sunday, trumped a mid-week fumble that was off the radar. Tomi coming back, out of nowhere, was an enormous game changer. We looked different.

Can't compare our situation Sunday at 11 AM to now. A LOT changed, most notably our roster.
 
#55      
I see like 3? Texas A&M, Kentucky, Oregon... one of which isn't in the top 25.
Fair, I thought there were more. A lot of teams I was thinking of were in the 9-11 Q1 games category. My bad, I'll take the L on that.

I should have just said what I really meant which is that I'm not sure "# of Q1 games" is that meaningful a statistic. Alabama and Iowa St. are 6-3 vs. Q1. Purdue is 6-5. So all three of those teams have fewer Q1 games, but the same number of Q1 wins. Are those resumes worse than ours, or better?

The goal going forward should be to do better than .500 vs. Q1. If the team plays like they did against OSU, I think that should not be a problem, and the rankings will take care of themselves.
 
#56      
Yeah, my point was that having a higher number of Q1 games should correlate to more losses, to kind of "explain" how we are in the top 25 even with 7 losses.

I.e., there are probably teams behind us with 5-6 losses that have played significantly fewer Q1 games, or vice versa.

KU, for example, is ahead of us with 6 losses and a 4-5 Q1 record. If they played 3 more Q1 games instead of the Q2/Q3/Q4 ones, they might have 7 or even 8 Ls.

Same with a team like UConn, who has 6 Ls but has only played 7 Q1s. Replace 5 of their Q2-Q4 games with Q1 games and, based on their Q1 record this year, they join us in the 7+ L club.

It wasn't a "we are underranked or our resume is automatically better" argument, more of a "yeah, we have 7 Ls, but we've had more tough games than many, and that's why we're still ranked."

Agree on teams like Purdue/Bama and their resumes, and that we should strive to be 66% or better in Q1 games.
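To put rough numbers on the "more Q1 games means more expected losses" point, here's a back-of-the-envelope sketch in Python. The win rates are made-up assumptions for illustration, not actual team data:

```python
# Rough sketch: how swapping easier games for Q1 games raises expected losses.
# The win probabilities below are illustrative assumptions, not real team data.

def expected_losses(q1_games, q1_win_rate, other_games, other_win_rate):
    """Expected total losses given a Q1/other game split and per-tier win rates."""
    return q1_games * (1 - q1_win_rate) + other_games * (1 - other_win_rate)

# A hypothetical KU-like team: 4-5 in Q1 (~0.44 win rate), strong elsewhere.
base = expected_losses(q1_games=9, q1_win_rate=4/9,
                       other_games=14, other_win_rate=0.9)

# Same team quality, but 3 of the easier games become Q1 games.
swapped = expected_losses(q1_games=12, q1_win_rate=4/9,
                          other_games=11, other_win_rate=0.9)

print(round(base, 1), round(swapped, 1))  # 6.4 vs 7.8 expected losses
```

Same underlying team, about 1.4 more expected losses just from the tougher schedule mix, which is the "Q1 volume explains the loss count" argument in miniature.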
 
#59      
How would predictive metrics not be based on results anyhow? The only data available, period, is data from the games that have been played.

Which result-based metrics are 'lousy'?

EDIT: Found my answer by doing my own research. Those are both result-based and predictive, as they use past game results, strength of schedule, and game location, among other factors, to generate rankings/ratings designed to predict future performance.

So those ratings I listed are all absolutely 100 percent result-based (and also predictive, in the sense that you can use the ratings to predict future outcomes).

So on the NCAA tournament team sheets, predictive metrics are BPI, KenPom, Torvik and results-based are WAB and SOR.

Miya also just put out a results-based Resume Quality where we land 39th.


Warren Nolan's site shows the breakdown between the groups: https://www.warrennolan.com/basketball/2024/net-teamsheets-plus

You'll notice we're much lower in the results-based ones than the predictive ones.
 
#60      

I need to dig into this further because you can't have predictive analysis without direct results-based data. Otherwise, what is it based on?
 
#63      
I mean, I was one of the most pessimistic fans out there after the Nebraska debacle, but that is because I follow this team closely, desperately want them to win each game and am close enough to see troubling patterns develop. However, if you just look at it objectively...

1. These voters thought we were the #18 team in the nation at the beginning of the week.
2. We lost a Quad 1 game in OT at Nebraska, still without our starting center.
3. We came home and won a Quad 1 game vs. Ohio State, now with our full roster.

The vast majority would PROBABLY say OSU is better than Nebraska, so dropping us five spots for going 1-1 in those two games isn't as overly forgiving as it might appear. I predict that if we go 2-0 this week, we will be back closer to #18 or even higher (depending on other results) when UCLA and MSU come to town. However, if we don't win twice this week, we will probably cement ourselves as undeserving of a top 25 ranking for too many voters ... can't go 1-1 every week.
But if we can decisively win both, the BIG is gonna *gulp*
 
#64      
The placement of some of these teams above us is baffling. Don't understand the Wiscy love. Kansas didn't drop nearly as much as they should've and I think it's just because it's Kansas. Mississippi State has been awful recently.

Not arguing that we really should be much higher (Michigan and Ole Miss should be above us IMO), but man the AP voters are inconsistent.
The AP, in math terms, is a trailing average. It slowly catches up to the NET and (to a lesser extent) KenPom by year's end. It especially starts moving closer now, deeper into February. It's been like that for a while now.
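That "trailing average" behavior can be sketched as a toy simulation, where the poll moves a fixed fraction of the way toward a computer rating each week. The 0.3 weight and the starting ranks are made-up numbers for illustration, not how the AP actually works:

```python
# Toy sketch of the "trailing average" idea: a poll rank that each week
# moves a fraction of the way toward a computer rating (e.g., NET).
# The 0.3 weight and the ranks are illustrative assumptions only.

def poll_update(poll_rank, net_rank, weight=0.3):
    """Move the poll partway toward the computer rating each week."""
    return poll_rank + weight * (net_rank - poll_rank)

poll, net = 30.0, 15.0   # poll starts well behind the computer rating
for week in range(8):    # eight weeks of updates
    poll = poll_update(poll, net)

print(round(poll, 1))  # 15.9 -- by season's end the poll has nearly caught up
```

The gap shrinks geometrically each week, which matches the "slowly catches up, especially later in the season" observation.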
 
#67      
I seem to remember some rare instances where a Big Ten team made the tournament while under .500 in conference play.
 
#68      
They'll never be consistent or reliable. You have journalists voting who haven't even seen most of the teams they vote for play.
 
#69      
They're quite literally grouped as "Result-Based Metrics" and "Predictive Metrics" on the Team Sheet.

Result-Based Metrics:
KPI: 24
SOR: 37
WAB: 28

Predictive Metrics:
BPI: 10
POM: 13
T-Rank: 7

To the extent that, yes, something happens on the court to feed the "predictive" metrics, you're right. But the difference is that they are more agnostic to actual wins and losses (a 1-point win or a 1-point loss doesn't move the needle), while the result-based metrics focus heavily on "who did you beat or lose to, and how good or bad are the teams you beat or lost to?"

Your result-based metrics are your "resume". The predictive metrics are more so "how good do we think you are?"
 
#70      

I understand it fully. My point is that you cannot arrive at any predictive conclusions without using data from actual game results.
 
#72      
So, you just don't like how these categories are named?

I mean, I guess? Until someone explains how you generate numbers that accurately predict future performance without taking any result-based data whatsoever into account.
 
#74      
Has anybody claimed that?

That's the general sentiment I gather from the conversation that was had.

Editing to add that I think the nuance may be that our upcoming SOS is not as tough as the SOS of the games we've already played and that is the reason for the differences we see.

Lastly, are player rotation minutes considered? (meaning key players missing games).
 
#75      
Pretty sure nobody has suggested that predictive metrics don't use data.

To clear it up, they do, but that data is geared towards efficiency, not wins/losses. Offensive efficiency. Defensive efficiency. They don't care about wins and losses at all. If hypothetically team A wins every single game by 1 point and team B with the same schedule loses every single game by 1 point, a predictive metric would rate them as being pretty close to each other, whereas a results based metric would have them on completely opposite ends of the spectrum, because results based metrics value wins and losses.
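That win-by-1 / lose-by-1 hypothetical is easy to sketch in code. This is a toy illustration, not any real metric's formula: the "predictive" rating here is just average scoring margin, and the "results-based" one is just win percentage:

```python
# Toy illustration of the predictive vs. results-based split.
# Not any real metric's formula -- just the intuition from the hypothetical.

games_a = [(71, 70)] * 20   # Team A: wins every game by 1 point
games_b = [(70, 71)] * 20   # Team B, same schedule: loses every game by 1 point

def margin_rating(games):
    """'Predictive'-style: cares about margin/efficiency, not W/L."""
    return sum(pf - pa for pf, pa in games) / len(games)

def win_pct(games):
    """'Results-based'-style: cares only about who won."""
    return sum(1 for pf, pa in games if pf > pa) / len(games)

print(margin_rating(games_a), margin_rating(games_b))  # 1.0 vs -1.0: nearly identical
print(win_pct(games_a), win_pct(games_b))              # 1.0 vs 0.0: opposite ends
```

The margin-style rating sees two almost-equal teams; the win-percentage-style rating puts them on opposite ends of the spectrum, which is exactly the distinction between the two metric groups on the team sheet.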
 