So one important thing to note here is that the move to NET has created a significant shift in what is statistically optimal when it comes to scheduling. While I unfortunately don't have inside knowledge of the NET iterative formulas, just based on empirical data from the past few years, it appears Brad has the correct approach, at least statistically speaking. So what is my reasoning for this? Well, I'll try to be as brief as I can.
Years ago, when SOS and RPI were the primary tools of the selection committee, the really bad non-con teams on your schedule were millstones around your neck. (This makes sense: almost all the teams you play in conference are above-.500 teams whose schedules are mostly against other .500+ teams.) Because each game carried equal weight, the worst teams you played served as outliers that absolutely tanked your SOS and RPI, such that playing 300+ ranked teams was a no-win situation.
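To see the outlier effect concretely, here is a toy sketch. This is not the actual RPI formula, just an equal-weight average of opponent winning percentages (one component of SOS-style metrics); all the win percentages are made up for illustration.

```python
# Illustrative only: how an equal-weight average lets one very bad
# opponent drag down a strength-of-schedule number far more than a
# merely mediocre opponent would.

def avg_opp_win_pct(opp_win_pcts):
    """Equal-weight average, in the spirit of old SOS-style metrics."""
    return sum(opp_win_pcts) / len(opp_win_pcts)

# A schedule of mostly solid opponents...
base = [0.70, 0.65, 0.60, 0.55, 0.55, 0.50]

# ...plus one 300+-ranked team (say a .150 winning pct)
with_cupcake = base + [0.150]
# ...versus one ~250th-ranked team instead (say a .400 winning pct)
with_mid = base + [0.400]

print(round(avg_opp_win_pct(base), 3))          # 0.592
print(round(avg_opp_win_pct(with_cupcake), 3))  # 0.529
print(round(avg_opp_win_pct(with_mid), 3))      # 0.564
```

One .150 team drops the average far more than a .400 team does, even though you'd comfortably beat both, which is exactly why the 300+ game was a no-win proposition under these metrics.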
On a somewhat similar note, when it came to efficiency-based iterative systems like KenPom (the one I am most intimately familiar with), Ken had to decide how to deal with blowout games and runaway scores. One choice was to use unaltered efficiency stats (think margin of victory for the purposes of this explanation, as it's a bit easier to wrap one's mind around); however, that would make blowouts outliers for efficiency, and those games would actually carry higher relative weight than every other game on the schedule. The other choice was to limit and damp efficiency for high-margin games. He ultimately decided to damp, such that winning by more than about 30 points starts hitting extreme diminishing returns: winning by 70 produces almost exactly the same efficiency as winning by 40. Why? Because once a team is up by 30+, score effects take over: coaches empty benches, play style changes, etc. As a result, playing 300+ ranked teams again offered basically no net benefit. You were already expected to beat them by 30 points, and your credit was capped around a 30-point win, so playing them couldn't improve your efficiency ratings, but it could certainly nuke them if you only won by, say, 10-15. In fact, this was a known "issue," and it's why teams did whatever they could to avoid scheduling these games: probability-wise, a team ranked in the 225-275 range offers just about the same likelihood of a blowout without the negative effects.
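The diminishing-returns idea can be sketched like this. To be clear, this is not KenPom's actual damping function (which isn't published in detail); the 30-point knee and the logarithmic tail are assumptions chosen only to illustrate the shape described above.

```python
import math

# Assumed margin where diminishing returns kick in (not KenPom's real value)
CAP = 30

def effective_margin(mov):
    """Full credit up to CAP, then heavily damped credit beyond it.

    The log tail means each extra point past the cap is worth very
    little, so a 70-point win earns about the same as a 40-point win.
    """
    if mov <= CAP:
        return float(mov)
    return CAP + 2 * math.log1p(mov - CAP)

print(round(effective_margin(10), 1))  # 10.0
print(round(effective_margin(40), 1))  # 34.8
print(round(effective_margin(70), 1))  # 37.4
```

Note how the 70-point win and the 40-point win land within a few points of each other, while everything under 30 counts at face value, so the downside of a narrow win is undamped.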
Ok, so what's different with NET? They've never released the actual formula (don't get me started), which in my opinion is ridiculously non-transparent for the NCAA-approved metric the postseason is supposed to ride on. But from empirical evidence, one thing seems very clear: those scoring limiters are either absent or severely weakened, meaning margin of victory plays a much, much larger role in the ranking than it used to.
Hence, in NET, clownstomping the worst team in college basketball by 60 points is worth way more than beating a 250th-ranked team by 20. So while the old strategy was to avoid 300+ ranked teams like the plague, NET actually encourages playing them and running up the score, and then filling the rest of your non-con with top-25 teams to bolster opponent efficiency numbers without having to worry about a non-Q1 loss.
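Putting the two eras side by side, a hard cap makes the comparison stark. Again, this is a toy sketch: the hard 30-point cap stands in for the old damped systems, and the uncapped case stands in for what the empirical evidence suggests about NET; none of the numbers are from either real formula.

```python
# Toy comparison of margin credit under a capped (old-style) metric
# versus an uncapped (NET-style, per the empirical evidence) metric.

CAP = 30  # assumed cap for the old-style case

def margin_credit(mov, capped=True):
    """Margin-of-victory credit a single game earns."""
    return min(mov, CAP) if capped else mov

# Old era: vs a 350th-ranked team you're EXPECTED to win by 30+,
# so the best case earns no more than simply meeting expectations...
meets_expectation = margin_credit(32)   # 30
blowout_win = margin_credit(60)         # 30 -- no upside at all
sluggish_win = margin_credit(12)        # 12 -- big downside

# NET era: with no cap, running up the score pays off directly.
net_blowout = margin_credit(60, capped=False)   # 60
net_modest = margin_credit(20, capped=False)    # 20
```

Under the cap, the 60-point clownstomp and a routine 32-point win earn identical credit while a sluggish night craters you; remove the cap and the blowout is suddenly worth three times the 20-point win over a better team.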
TLDR: Brad was well advised on how to schedule in the NET era, and it is a departure from what we knew as fact even a few years ago. Also, NET is complete and utter trash, and the reason they don't release the formulas is so they can keep adjusting them behind the scenes, and so the stats community doesn't laugh it out of existence.