Every time I write about the DL – now DLS – method, it gets complicated. Soon we’re caught in a maze of curves and equations and the interested reader gives up.
“Too technical!” he declares, and then utters the inevitable invective against DLS.
But sometimes DLS does indeed surprise and bewilder. When Australia scored just 158/4 in 17 overs, why should India have to score 174 in the same 17 overs?
So, you explain: Australia thought they had 20 overs to bat … they were pacing their innings accordingly … the unexpected denial of the last 3 overs was definitely a loss of opportunity … so they deserved to be compensated …
Sure, sure … but by how much? Well, that’s where the idea of ‘resources’ kicks in. To score runs you need balls (overs) and wickets. Yes, not just balls or just wickets, but both. So ‘resource’ combines both.
How do you measure this ‘resource’? Well, there are models used to determine resource, there are equations, and there is also some undeniable genius.
So, what happened at Brisbane? Well, when Australia’s innings ended unexpectedly, they had only used up 80% of their available resource (this 80% is a guess; I don’t know how to access DLS resource tables).
But India too had only 17 overs to bat, so they weren’t getting 100% resources either. Correct, but they had all 10 batsmen and, therefore, the opportunity to use 90% of their resource (again, a guess). In other words, 10% more. To neutralize this advantage, they needed to score those 16 more runs.
But why 16? Why not 10? India would have won if it had been 10 runs. How did DLS get 16?
Aha, that’s determined by the resource model. Either DLS scaled up Australia’s 158 by 110% to get 174 (which is 16 more), or they assumed an average score of 160 and decided that Australia’s 10% disadvantage translated to 16 runs, and so India had to score 16 more.
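Both of my guesses are easy to check with simple arithmetic. To be clear, the numbers here (the 110% scaling and the assumed average score of 160) are my own speculation, not the actual DLS computation, which lives inside the program; this is just a sketch of the two hypotheses:

```python
import math

AUS_SCORE = 158  # Australia's 158/4 in 17 overs at Brisbane

# Hypothesis 1: scale the first-innings score by the (guessed) resource
# ratio of 110%, then round up to the next whole run for the target.
target_1 = math.ceil(AUS_SCORE * 1.10)  # 158 * 1.10 = 173.8 -> 174

# Hypothesis 2: assume an average score of 160, so Australia's 10%
# resource disadvantage translates to 0.10 * 160 = 16 extra runs.
target_2 = AUS_SCORE + round(0.10 * 160)  # 158 + 16 = 174

print(target_1, target_2)  # both hypotheses land on 174
```

The point of the sketch is only that both routes happen to arrive at the same 174; it says nothing about which one DLS actually took.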
What’s this ‘either-or’ hypothesis? Aren’t all these rules explicitly defined? Well, they are, but only inside the DLS computer program. For most of mankind, however, it is just a black box.
Ok, back to our main discussion. And the bad news is that the DLS black box is getting blacker all the time. The big black day was when they needed something like DLS for T20. We already have DLS for 50-over ODIs, they said, so let’s just pretend that a T20 is an ODI game with the first 30 overs lost … and then you can apply dear old DLS comfortably.
Oh, really? Yes, really! To this day, DLS (with Stern providing the extra reassuring muscle) contend that ODI is the big sheep, and T20 is just the smaller sheep that otherwise behaves identically.
You might want to call them black sheep? Well, not really, because T20 is very probably a different animal.
The Brisbane outcome, and Melbourne even more – where the target, in response to Australia’s 132/7, wound down from 137 in 19 overs, to 90 in 11 overs, to 46 in 5 overs – became increasingly ridiculous. A model originally created when an ODI was played in more leisurely fashion, and when even the idea of T20 didn’t exist, can’t be ‘force-fitted’ to such absurd limits.
Even DLS will probably agree in private that their method is being twisted too much; something truly remarkable is being asked to behave rather irresponsibly.
And remarkable DL (yes, I’m saying only ‘DL’) indeed was. There were two clever innovations. First, the idea of the combined (wickets-left-and-overs-remaining) resource construct, with a resource table to accommodate it. And, second, the idea of devising a target-setting rule based on this table.
The first blip on the radar appeared with the target-setting rule; DL essentially ‘fixed’ this bug rather than ‘solved’ it. The second blip, later, came when average ODI scores went up from 225 to 300 and higher. The resource table needed to become more ‘responsive’; DL fixed that quite cleverly, although that meant having to use a computer. The third blip was that even this improved method couldn’t effectively handle the final overs of a game. That’s where Stern presumably provided the solution.
However, and especially with T20, there are now far too many blips on the radar. The confusion at Brisbane and Melbourne – which, oddly, arose both times during the first innings of the match, where DLS struggles more – was probably because of the first and third blips.
So, what’s the way out? To be honest, the entire methodology of building a rules-based mathematical model, and proceeding therefrom, has to change. The VJD method handles the first blip more comfortably, and can give comparable or occasionally better results, but it is essentially an idea from the same school, even though DLS keep pointing out VJD’s lack of ‘internal consistency’.
It is tempting, even tantalizing, to visualize a completely different solution using big data and AI. Remember DLS or VJD only factor in how many overs are left and how many wickets are lost. What about things like: Who wins the toss, where’s the match being played, how good are the lights, is there excessive dew, is the ball coming up to Gayle’s bat, how wide are the boundaries?
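If anyone did attempt such a solution, the factors listed above would become model features alongside the overs and wickets that DLS and VJD already use. A purely hypothetical sketch – the structure, names, and the idea of a flat numeric vector are all my illustration, not any existing system:

```python
# Hypothetical feature set for a data-driven (rather than model-based)
# rain rule. Every field is illustrative, drawn from the factors named
# in the text; no real system defines these.
from dataclasses import dataclass

@dataclass
class MatchState:
    overs_left: float        # what DLS/VJD already use
    wickets_lost: int        # what DLS/VJD already use
    won_toss: bool           # who won the toss
    venue: str               # where the match is being played
    dew_factor: float        # excessive dew, say on a 0-to-1 scale
    boundary_size_m: float   # how wide are the boundaries
    ball_coming_on: bool     # is the ball coming up to the bat

def to_features(s: MatchState) -> list[float]:
    """Flatten a match state into a numeric vector a learned model could consume.
    Venue is categorical and would need its own encoding; omitted here."""
    return [s.overs_left, float(s.wickets_lost), float(s.won_toss),
            s.dew_factor, s.boundary_size_m, float(s.ball_coming_on)]

state = MatchState(overs_left=3.0, wickets_lost=4, won_toss=True,
                   venue="Brisbane", dew_factor=0.7,
                   boundary_size_m=68.0, ball_coming_on=True)
print(to_features(state))
```

The hard part, of course, is not the feature vector but the mountain of historical ball-by-ball data and the model trained on it; this only gestures at the shape of the input.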
Listen to the commentators and experts (Gambhir talked of KKR winning strategies built around managing dot balls in an innings), or eavesdrop on team strategy meetings, or read up on some of the brilliant data analysis by Jarrod Kimber, “Ramki”, Kartikeya Date, Tim Wigmore and so many others. Data is the big word. Surely a completely new data-based, rather than model-based, rain rule is around the corner.
Till that happens we must resign ourselves to more heartburn, or shamefaced glee, brought about by DLS. There really isn’t anything better on the horizon right now.
[The featured image is from Colac Herald]