
Q: I have a confession to make. I love cricket, I watch cricket, I understand cricket, but I still can’t fathom this D/L method.

SB: That’s not particularly surprising. Many cricket fans don’t understand D/L, but most pretend that they do.

Q: I of course know that we need D/L when an ODI match is curtailed by bad weather, and we need to reset the winning target.

SB: If both teams get to complete their 50-over innings there’s no problem. The team that scores more runs wins. But suppose the team batting first scores 255 in their 50 overs, and the team that is chasing is at 125/2 or 125/5 when rain stops play. Which team do you think should win?

Q: I remember this example! The team chasing had to maintain a run rate of 255/50 = 5.1 per over. So after 25 overs it should have scored 25*5.1 = 127.5 (rounded up to 128) to win. Since it had scored only 125, it lost.
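In code, that run-rate calculation is a one-liner. A minimal sketch (the function name and the get-past-the-par rounding convention are mine):

```python
import math

def run_rate_target(first_score: int, total_overs: int, overs_faced: float) -> int:
    """Winning target under the naive run-rate rule: get past the first
    team's score scaled by the fraction of overs faced."""
    par = first_score * overs_faced / total_overs
    return math.floor(par) + 1  # must exceed the par score to win

print(run_rate_target(255, 50, 25))  # 128, as in the example above
```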

SB: Do you consider that to be a fair verdict? Take the extreme case when the team chasing is 128/9 when rain ends play. They were probably just one ball away from a horrible defeat – and yet they are declared winners!

Q: I agree that makes no sense, but what else can one do?

SB: Think about it. What’s different between 125/2 and 125/5? Of course, the number of wickets! Wickets matter too. A team’s ability to win depends not just on the number of overs (or balls) remaining, but also on the number of wickets left.

Q: I agree. But how do you combine the two?

SB: That’s exactly the problem that Frank Duckworth and Tony Lewis solved in the mid-1990s. And very elegantly too!

Q: How?

SB: They came up with the idea of a ‘combined resource percentage’. When you commence the innings, with all 10 wickets and all 50 overs, you have 100% resource. And when you lose all 10 wickets or play out all 50 overs, you have 0% resource. Resource depletes on a ball-by-ball basis as the match progresses, and when a wicket falls the resource percentage drops rather more steeply. When you are at 125/2 after 25 overs, you’ve probably used up 40% of your available resource, but if you are at 125/5 after 25 – and have lost 3 more wickets – your resource depletion may be as high as 60%.

D/L was also the first to talk of a ‘par score’, i.e., what you need to score to just edge past the winning line. At 125/2 in 25 overs, you are well past the winning line if you are chasing 255; at 125/5 you are well behind.

Q: Yes, I understand all that. But how do you calculate the actual resource percentage?

SB: Well, that was essentially the genius of D/L. They asked the following key question (and don’t let the notation upset you): how many more runs is a team likely to score if it has u overs remaining (u can be 50, 49, 48 … 3, 2, 1 or 0) and has so far lost w wickets (w can be 0, 1, 2 … or 9)? They denoted this number Z(u,w) and used archival one-day cricket data to model it. Not surprisingly, they chose an exponential decay function, which has a smooth, orderly and ‘controllable’ descent. They needed a curve with that sort of behavior because, as the innings progresses, Z(u,w) must decrease continually and consistently. The combined resource percentage was then calculated as the ratio Z(u,w)/Z(50,0); note that this percentage drops from 100 at the beginning of the innings to 0 at the end.

Q: Ah, so these were the strange percentages in the D/L resource table!

[Figure: condensed view of the D/L resource table]

SB: I remember being daunted by those tables myself (see the condensed table view). But today it all seems quite simple; this is just an array with 300 rows (one row for every valid ball; there are 50*6 = 300 valid balls) and 10 columns (corresponding to 0, 1, 2 … 9 wickets lost).
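Viewed that way, a D/L resource lookup is just an array access. A minimal sketch (the names are mine, and the two stubbed-in values are only the rough figures quoted earlier for the 125/2 vs 125/5 example):

```python
# resource_table[(balls_remaining, wickets_lost)] -> combined resource % left.
# A real table has an entry for every valid ball and wicket combination;
# here we stub in just two illustrative values.
resource_table = {
    (150, 2): 60.0,  # 25 overs left, 2 down: roughly 60% resource remaining
    (150, 5): 40.0,  # 25 overs left, 5 down: roughly 40% resource remaining
}

def resource_remaining(balls_remaining: int, wickets_lost: int) -> float:
    return resource_table[(balls_remaining, wickets_lost)]

print(resource_remaining(150, 2))  # 60.0, i.e. about 40% of resource used
```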

Q: Ok, you now have the resource percentage table. But how do you actually reset the winning target after an interruption?

SB: To explain the way D/L calculates the winning target, we’ll need some simple notation. Let S be the first team’s score, and let R1 be the resource percentage that was available to the first team (if all 50 overs are bowled, or all 10 wickets fall, then R1 = 100; but if the first innings was interrupted with a score of 188/5 after 42 overs, then clearly R1 < 100). Let us suppose that the second team has an available resource of R2 (R2 < 100) when its innings is interrupted. Then, if R1 > R2, the reset target T = S * (R2/R1). If, however, R2 > R1, then T = S + [(R2 − R1)/100] * G50, where G50 is the average number of runs scored in a 50-over innings, currently assumed to be 245. This rule works for multiple interruptions, and for interruptions at different times in the innings: between innings, during the second innings, or during the first innings itself.
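In code, the reset rule is just a few lines. A minimal sketch (names are mine; the rounding of T up to a winning target is glossed over):

```python
def reset_target(S: float, R1: float, R2: float, G50: float = 245.0) -> float:
    """D/L Standard Edition reset target, as described above.

    S  : first team's score
    R1 : resource % that was available to the first team
    R2 : resource % available to the second team
    G50: assumed average 50-over score
    """
    if R2 <= R1:
        return S * (R2 / R1)              # scale the target down
    return S + (R2 - R1) / 100 * G50      # top up, avoiding a runaway scale-up

# The 80/0-in-20-overs example discussed below:
print(reset_target(80, 22.9, 58.9))  # ~168.2, so a target of 169
print(80 * (58.9 / 22.9))            # ~205.8, the 'ridiculous' pure scale-up
```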

Q: Is the timing of the interruption so important?

SB: Oh, very much so. Every innings is played at a certain pace and with a certain strategy – so an interruption almost always tends to favor one team more than the other. Creators of earlier rain rules failed to address this problem, or even recognize it; D/L, on the other hand, addressed it most admirably.

For example, think of the most productive overs (MPO) rule used in the 1992 WC that led to the horrific situation where a target of 22 runs in 13 balls suddenly became 22 runs in 1 ball. The MPO rule could only work sensibly if the interruption happened between the innings. If there were interruptions in the second innings it made the task of winning progressively harder for the team chasing; the chasing team was in effect being penalized for bowling maidens or good overs in which they conceded just 1 or 2 runs. And, in the pre-D/L days, interruptions during the first innings weren’t even considered in the calculation even though we now know that such interruptions deeply influence the equilibrium of opportunity for the two teams.

Q: Let’s return to the D/L rule for a moment. I’m puzzled why there should be different rules depending on whether R1 is greater or less than R2.

SB: That’s a blemish, if not a weakness. The simple answer is that the target T could scale up uncontrollably if R2 >> R1. Suppose the first team has scored 80/0 in 20 overs and rain reduces it to a 20-over a side match. What should be the target for the second team? It turns out that R1 = 22.9 (the team had batted only 20 overs, and had all 10 wickets available) while R2 = 58.9. So a scale-up would have set the second team a 20-over target of 80 * (58.9/22.9) = 205.8 (rounded up to 206), which is obviously ridiculous. The D/L rule instead sets the chasing team a less ridiculous target of 80 + [(58.9 − 22.9)/100] * 245 ≈ 168.2, i.e., 169 in 20 overs.

Q: Still something doesn’t feel quite right.

SB: Isn’t that always the dilemma that most models face? Some intemperate behavior in extreme situations always ruins the beauty and the elegance of the formulation; it would indeed have been wonderful if we had a simple D/L rule that could scale up or scale down seamlessly. D/L is further handicapped because limited-over cricket is evolving into a completely different animal.

Q: How did D/L come up with their model? What was their rationale?

SB: Duckworth and Lewis went about their business like two old-fashioned professors of mathematics. Their “Eureka!” moment was when Frank Duckworth scribbled the following generic equation: Z(u,w) = Z0 F(w) [1 − exp{−bu/F(w)}], where Z0 is the average total score if there were no 50-over restriction, b is an exponential decay constant (needed because as the overs u increase, there is a diminishing return in terms of runs), and F(w) (0 < F(w) ≤ 1) is the fraction that models how the propensity to score more runs diminishes as the wickets fall. One might guess that F(4) is probably about 0.5, because after losing 4 wickets a team has probably halved its propensity to score more runs. It is easy to see that F(0) = 1.

The D/L model essentially involves ten equations (corresponding to w = 0, 1, 2 … 8, 9). So we have equations for Z(u,0), Z(u,1), and so on. Z(u,0), for instance, denotes how many more runs a team is likely to score if it has u overs remaining (u can be 50, 49, 48 … 3, 2, 1 or 0) and has lost 0 wickets. The D/L equation says that Z(u,0) equals Z0 [1 − exp{−bu}]. So if Z0 equals 260 then, depending on the choice of b, Z(50,0) might equal 225. Likewise Z(u,1) equals Z0 F(1) [1 − exp{−bu/F(1)}]. So if F(1) = 0.9, Z(50,1) might equal about 210.
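A minimal sketch of these equations, with illustrative parameter values (the real D/L coefficients were never published) chosen to reproduce the numbers above:

```python
import math

Z0 = 260.0             # notional average total without the 50-over restriction
b = 0.04               # illustrative decay constant
F = {0: 1.0, 1: 0.9}   # illustrative F(w) for w = 0 and 1 only

def Z(u: float, w: int) -> float:
    """Expected further runs with u overs remaining and w wickets lost."""
    return Z0 * F[w] * (1 - math.exp(-b * u / F[w]))

def resource_pct(u: float, w: int) -> float:
    """Combined resource %: 100 at the start of the innings, 0 at the end."""
    return 100 * Z(u, w) / Z(50, 0)

print(round(Z(50, 0)))   # ~225
print(round(Z(50, 1)))   # ~209, i.e. 'about 210'
```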

Q: I’m sorry, but all these equations are overwhelming me.

[Figure: the ten D/L combined resource percentage curves (Wikipedia)]

SB: Let me explain using a famous Wikipedia D/L picture. Just as we said, there are ten curves here. But instead of curves corresponding to Z(u,0), Z(u,1), … Z(u,8), Z(u,9), the plot here shows curves corresponding to combined resource percentages, obtained after dividing by Z(50,0). The top curve corresponds to Z(u,0)/Z(50,0), the next curve to Z(u,1)/Z(50,0), … and so on to Z(u,8)/Z(50,0) and Z(u,9)/Z(50,0).

At the start of the innings, the team has all 50 overs to bat, and all 10 wickets in hand. So it starts off with a resource percentage of 100, i.e., at the top left corner. Just to make it easy, pretend that there is an ant at this top left corner. After every ball is bowled, this ant moves one step to the right along the top curve. And so it continues, till a wicket falls. When a wicket falls, the ant vertically drops down to the curve immediately below (corresponding to 1 wicket lost). When all 50 overs are completed, or all 10 wickets are lost, the ant will end up at the bottom right corner.

This picture tells us many stories. Two are most noteworthy: (a) by how much does the ant drop after a wicket falls (this is the effect of F(w) kicking in), and (b) although every curve terminates at the bottom right corner, its ‘rate’ of descent can be more or less ‘leisurely’ (based on values picked for b and F(w)).

[Figure: resource curves with the run-rate diagonal added]

Finally, it is also possible to draw a straight line joining the top left and bottom right corners. A moment’s reflection will suggest that this straight line corresponds to the simple run rate method – in which the resource diminishes only in proportion to the number of overs, without considering wickets.

While we are looking at this picture, let us also visualize what interruptions look like. Think of the ant again. As long as the game is on, and evolving, the ant is always on the move. Suppose there is an interruption after over 30, and 10 overs are lost. Then, when the match resumes, the ant ‘fast-forwards’ along the same curve, moving to the right by a distance equivalent to 10 overs, before resuming its ‘play’ mode.
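The fast-forward is easy to express with a resource function: the cost of an interruption is just the gap between where the ant was and where it lands. A minimal sketch (assuming a resource-percentage function like the one sketched earlier):

```python
def interruption_cost(resource_pct, overs_left_before: float,
                      overs_left_after: float, wickets_lost: int) -> float:
    """Resource % consumed when an interruption removes overs: the ant
    fast-forwards along the same wickets-lost curve."""
    return (resource_pct(overs_left_before, wickets_lost)
            - resource_pct(overs_left_after, wickets_lost))

# e.g. rain at over 30 (20 overs left, say 3 down) that washes out 10 overs:
#   lost = interruption_cost(resource_pct, 20, 10, 3)
```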

Q: Thank you, that was helpful. But, tell me, how good is the D/L method? Are all these painful exponential decay functions really necessary?

SB: Recent work by McHale and Asif (2013) [1] suggests that exponential decay functions were not the best choice. But you have to concede that what Duckworth and Lewis did twenty years ago was truly remarkable. There have been blemishes, and hiccups, but D/L truly changed the cricket playing field.

Q: What would you classify as a big D/L weakness?

SB: In the early years, D/L had a serious problem if the team batting first made a massive score, and … in fact, let me explain this using a very famous example. This was the harrowing moment in the 2003 WC final between India and Australia. Ponting’s Australia scored a mammoth 359/2 batting first. In reply, a rampaging Sehwag had taken India to 145/3 in 23 overs under thick clouds that promised heavy rain. If the match had ended with India at 159/3 in 25 overs, Ganguly – not Ponting – would have held the World Cup aloft! That would have been a complete travesty of justice.

Q: Why? What was the problem?

SB: The real problem was that the D/L model was simply not equipped to cope with massive first innings totals. The model assumed an average 50-over score of about 225, and its inherent robustness allowed a variation of +/- 50 runs around this average. But it couldn’t cope comfortably with scores well over 300.

Look at the D/L chart again. At the 25-over mark, only 30-40% of the resource is used if you have lost just 0-3 wickets. If the first team has scored 350 this translates to a par score as low as 105-140. That’s why something like 140/3 in 25 overs could win you the match even if you are chasing 350.
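Spelled out as a calculation (a sketch that assumes the first team used its full 100% resource):

```python
def par_score(S: float, resource_used_pct: float) -> float:
    """Par score for the chasing team: the first team's score scaled by
    the resource % the chasers have consumed so far."""
    return S * resource_used_pct / 100

print(par_score(350, 30))  # 105.0
print(par_score(350, 40))  # 140.0
```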

Q: Oh yes, I see that. So what’s the way out?

SB: Elementary! The higher the first team’s score, the faster the resource must deplete. That means the descent of the curves must become ‘less leisurely’ – they must slope down faster! Think of an extreme case when the first team scores 600 runs. What’s the best strategy for the chasing team? Just come out and start trying to hit sixes or fours. Every ball must contribute significantly to the tally, and if you must sacrifice wickets so be it! It reminds me of our childhood maxim while playing cricket: “Hit out or get out!”

Q: So how do you do that?

SB: Duckworth and Lewis labored hard with this one [2]. They modified their model, making it look even more ghastly.

Look at this: Z(u,w,L) = Z0 F(w) L^(nF(w)+1) [1 − exp{−bu/(L^(nF(w)) F(w))}].

Q: Phew! What’s this L?

[Figure: resource curves with the L ‘turning knob’ at the bottom right]

SB: You can informally think of L as a kind of ‘turning knob’ fixed at the bottom right of our resource curves; pretend that the ‘thread’ of each of the ten curves – all of which terminate at the bottom right – is fastened to this knob. It is now quite simple; the higher the first team scores, the more you tighten the L knob. This will make the curves slope down faster … and therefore raise the par score higher. In the limiting case, we’ll be back to the good old run rate rule.
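A rough sketch of the knob’s effect (the parameter values, and even the exact exponent, are illustrative rather than official; the point is only that L > 1 steepens the descent toward the run-rate line):

```python
import math

Z0, b, n = 260.0, 0.04, 1.0   # illustrative values only
F = {0: 1.0, 1: 0.9}          # illustrative F(w)

def Z_pro(u: float, w: int, L: float) -> float:
    """Professional Edition style Z(u,w,L) with the match factor L."""
    Fw = F[w]
    return Z0 * Fw * L ** (n * Fw + 1) * (
        1 - math.exp(-b * u / (L ** (n * Fw) * Fw)))

def resource_pct(u: float, w: int, L: float) -> float:
    return 100 * Z_pro(u, w, L) / Z_pro(50, 0, L)

# Tightening the knob depletes resource faster early in the innings:
for L in (1.0, 1.5, 2.0):
    print(L, round(resource_pct(25, 0, L), 1))  # ~73.1, ~66.1, ~62.3
# ...and as L grows the values head toward 50, the straight-line figure.
```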

Q: But, wait a minute! As soon as you turn your L knob, your resource percentage table changes! So we are no longer looking at a single table with 300 rows and 10 columns. And I’m also presuming that there will be a severe computational overhead.

SB: That’s correct. The combined resource percentages will change. And while it may be hard to call the computational overhead ‘severe’, there’s no doubt that with this change, the D/L targets can no longer be calculated on the back of an envelope. You’ll now need a computer.

Q: So is this the so-called Professional Edition of D/L? I’ve always been confused with all this talk of Standard vs Professional Edition.

SB: Yes, the D/L edition with the L in the equation is the Professional Edition. All international matches now use the D/L Professional Edition, although the Standard Edition is still used for smaller games. In most cases, you won’t need to turn the L knob unless the first team’s score exceeds 235 or 245.

Q: Do you still use the G50 criterion – with different rules depending on whether R1 or R2 is larger – in the Professional Edition?

SB: There’s no clarity on this question. The ICC official website says we don’t use G50 in the Professional Edition, but the Duckworth-Lewis book, published in 2011, is somewhat ambiguous on this question. I’m guessing that D/L initially decided they don’t need G50 in the Professional Edition, but then encountered rare, but feasible, scenarios that gave ridiculous targets …and so they quietly brought it back.

Q: I see that as a second D/L weakness. They can’t get the G50 monkey off their back!

SB: It is just possible that McHale and Asif might have found an answer to that one. The duo revisits the original D/L model and asks if there’s a way to tweak it to obtain better behavior. They come up with a better model for F(w) and suggest – what many had already suspected – that the D/L F(w) exhibits “erratic patterns”. They further argue that the exponential fit for Z(u,w) wasn’t such a good idea at all because the curves sink too rapidly at the end; a distribution function with a heavier tail, exhibiting a more leisurely dip, is much better.

Q: This seems like a complete overhaul!

SB: Yes, while retaining the outer D/L shell, McHale and Asif appear to have completely refurbished the D/L interiors. To handle very high first-team totals, they too recommend the L criterion … but because the McHale-Asif F(w) and Z(u,w) are better modeled, they find that their revised model can comfortably scale up without giving ridiculously high targets in any situation. The G50 monkey could finally be off the D/L back!

Q: So whither D/L?

SB: You want my frank answer? The D/L Professional Edition – perhaps with the McHale-Asif correction – could continue in 50-over games, because it has given a good account of itself over almost two decades. But I think it is time for D/L to retire in T20 cricket. Even Sachin Tendulkar had to retire one day!

Q: Why do you say that?

SB: Because T20 is a very different animal. Let me explain how D/L works in T20. Take another look at our resource curve picture. For a T20 game we basically use only the parts of the curves from the 20-overs-remaining line rightwards to 0 overs remaining; we pretend that a T20 game is just a 50-over game in which the first 30 overs are lost.
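Concretely, the ‘cut trousers’ amount to renormalizing the bottom 20-over slice of the 50-over table. A minimal sketch (assuming a 50-over resource-percentage function like the one sketched earlier):

```python
def t20_resource_pct(resource_pct_50, overs_left: float, wickets_lost: int) -> float:
    """T20 resource under the 'cut trousers' approach: take the 50-over
    percentages with at most 20 overs remaining, rescaled so that
    20 overs remaining with 0 wickets lost counts as 100%."""
    return 100 * resource_pct_50(overs_left, wickets_lost) / resource_pct_50(20, 0)
```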

Q: Well, what’s wrong with that?

SB: If a child wants a 20-inch long trouser, how sensible is it to just cut off 30 inches of cloth from his father’s 50-inch long trouser? That’s what they are trying to do … when they should probably call in a tailor to make a new pair of trousers! Purely as an academic exercise [3], I requested Professor Rajeeva Karandikar, Director of the Chennai Mathematical Institute, to examine if it might be better to ‘shrink’ the 50-inch trouser to a 20-inch one; we were basically asking if 50-over and 20-over games have the same curves – except for the fact that everything happens faster in T20. We found examples where the shrunk trousers performed better than the cut trousers, but the method basically highlighted the folly of trying to ‘force fit’ the only available size.
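The ‘shrunk trousers’ idea can be sketched the same way: instead of slicing off the tail of the 50-over curves, compress their overs axis by the factor 50/20 so the whole shape plays out in 20 overs. The time-scaling below is my illustrative reading of that exercise, not the exact construction we used:

```python
def t20_resource_pct_shrunk(resource_pct_50, overs_left: float,
                            wickets_lost: int) -> float:
    """'Shrink' the 50-over curves: u overs left in a T20 is treated like
    u * (50/20) overs left in a 50-over game; everything happens faster."""
    return resource_pct_50(overs_left * 50 / 20, wickets_lost)
```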

Q: But isn’t it true that we just have one available trouser size, the D/L trouser size?

SB: That’s a myth, that’s a big, big myth! For over a decade now, V Jayadevan’s VJD method [4] has proved to be a worthy rival to the D/L method [5]. The VJD method never talks of the actual number of overs; it always looks at the percentage of overs. The VJD method also doesn’t need the G50 construct, so it can effortlessly scale targets up or down as required, whatever the R1:R2 ratio. There have been many studies comparing the D/L and VJD models in T20, but in most cases the reset targets are very similar – I suspect VJD will do much better if we have situations like the first team being 60/0 in 6 overs or 80/0 in 8 overs, and the chasing team requiring a target for 6 or 8 overs, but such situations seldom arise in practice.

Q: Could we just go back to the simple run rate rule for T20s?

SB: I believe the ICC seriously considered this option at one time, and it is indeed clear that wickets matter much less than overs in T20 cricket, but T20 too has its own distinctive evolution model, especially with the field restrictions in the first 6 overs. Look at the way Dhoni finishes a T20 match – he bats so differently even between the 17th and the 20th over. I have no doubt that T20 is a different animal that needs a different model.

Q: What could that model be? Why don’t we trust Duckworth and Lewis to come up with a variant of their model?

SB: If you read the McHale-Asif paper, they suggest that their modification of D/L is exactly the variant we are seeking! They report detailed studies indicating that the scoring pattern in T20 and in the last 20 overs of a 50-over game is not statistically different. D/L’s difficulties, they argue, are because of some inherent weaknesses in the original D/L model and not because the scoring patterns are different.

Q: Do you agree?

SB: Actually, I don’t … even though it is always hard to ignore what cold numbers are saying. I encourage you to see video replays of the last 20 overs of a 1995 one-day match, and compare that with any match from the 2014 T20 World Cup. Everything looks so very different!

Q: But the model …

SB: We often make that mistake; we elevate a model to something that’s so hallowed and sacrosanct that it’s almost sacrilege to question it! A model can at best mimic a certain reality at a certain point of time. But it has limitations! Look at our D/L story. First they couldn’t get their R1:R2 rule right, then they struggled with high first innings scores, and now they are struggling with T20! I agree that McHale-Asif have made a significant modification, but everything’s getting way too messy.

Q: Could we come up with something neater and cleaner?

SB: If you think about it, the end objective is simply to come up with a 300 x 10 array of resource percentages in which resource depletes after every ball and every wicket in a reasonable cricketing sort of way. And in a T20 game we’re looking at an even smaller 120 x 10 array. So you do sometimes wonder if we need so much mathematical gymnastics; there must be so many different intelligent ways to obtain a suitable 120 x 10 array.

In fact, I’m reading a preprint [6] by Ganesh Natarajan of IIT Guwahati that appears promising. He’s proposing the well-known, and sufficiently universal, Kleiber three-quarter power rule B ∝ M^(3/4), where M is the body mass (read suitable combination of overs remaining and wickets lost) and B is the metabolism rate (read propensity to score). He then represents M as a product of the overs remaining function f(O) and the wickets lost function g(W) and obtains B = [f(O) · g(W)]^(3/4). Using heuristic models for f(O) and g(W) satisfying the resources-must-monotonically-drop-after-every-ball-and-every-wicket constraint he sets up the required 120 x 10 array that gives quite excellent targets in most situations.
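A minimal sketch of that construction, with hypothetical monotone choices for f(O) and g(W) (not Natarajan’s actual functions); any such choices automatically satisfy the resources-must-drop constraint:

```python
def kleiber_resource_table():
    """Build a 120 x 10 T20 resource array from B = [f(O) * g(W)]^(3/4).
    f and g below are hypothetical monotone choices for illustration."""
    def f(balls_remaining: int) -> float:   # overs-remaining function
        return balls_remaining / 120
    def g(wickets_lost: int) -> float:      # wickets-lost function
        return (10 - wickets_lost) / 10
    return [[100 * (f(balls) * g(w)) ** 0.75 for w in range(10)]
            for balls in range(120, 0, -1)]  # one row per ball remaining

table = kleiber_resource_table()
print(table[0][0])            # 100.0: full resource at the start
print(round(table[0][5], 1))  # ~59.5: 5 wickets already lost at the start
```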

Q: Isn’t that kind of unusual?

SB: It is, because D/L has taken a different route … and that’s the route we’re more familiar with. What’s so sacred about an exponential decay function anyway, especially with McHale and Asif now telling us that a truncated Cauchy distribution function models the runs-remaining-to-be-scored variable so much better than D/L’s chosen exponential distribution?

D/L were presumed to be on a good wicket because their coefficients (always secret!) were chosen to fit actual one-day match scores. But one-day scores have changed so much over the last 20 years, and will continue to change even more in the next 20 years! A complicated D/L parametric model that needs regular surgery to stay relevant isn’t such a great idea. And if you must have a parametric model, then why not something much simpler, like the Kleiber law, which has a good record as a generic model?

Q: Do I sense that you are rejecting parametric models?

SB: The game is changing too much; we need something that’ll more easily accommodate change. Also, while all models concentrate only on ‘runs’, ‘overs’ and ‘wickets’, there’s a fourth variable that’s just as important: ‘conditions’. Who wins the toss, where’s the match being played, how good are the lights, is there excessive dew, is the ball coming on to the bat, is it a slow, dusty wicket … all these factors are so important! We choose to ignore them because these are the hardest-to-model variables; I honestly see no easy way to incorporate playing conditions in a parametric model.

Q: What then?

SB: Go non-parametric. Don’t use impossibly complicated mathematical equations that need constant care and correction. Come up with resource tables that can be changed more easily … we could even come up with tables that ‘learn’ and ‘adapt’.

Q: I remember reading a couple of D/L T20 studies by Tim Swartz and others at the Simon Fraser University (SFU), Canada.

SB: Yes, that’s the example I had in mind, and this is a study [7] from a professor of statistics, not mathematics! The Swartz team set out to create the 120 x 10 array of resources that best mimic the run scoring pattern in T20 cricket. Instead of exponential or truncated Cauchy parametric models, they tried constrained optimization. When that posed problems at the extremities, they used Bayesian techniques. When they encountered missing data points, they used simulation. The tool sets they used (such as MCMC) are so much more powerful and versatile.
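In spirit (the SFU machinery is far more sophisticated), the non-parametric starting point is ‘average the data, then impose the cricketing constraints’. A crude sketch under that reading; the input format and the one-pass correction are mine, not SFU’s:

```python
from collections import defaultdict

def empirical_resource_table(deliveries):
    """Crude non-parametric estimate: average the further-runs-scored for
    each (balls remaining, wickets lost) state seen in real T20 matches,
    then force resources to drop with every ball and every wicket.
    `deliveries` is a list of (balls_remaining, wickets_lost, runs_to_go)
    tuples extracted from ball-by-ball data (a hypothetical format)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for balls, wkts, runs_to_go in deliveries:
        sums[(balls, wkts)] += runs_to_go
        counts[(balls, wkts)] += 1
    z = {k: sums[k] / counts[k] for k in sums}  # raw cell means
    # Monotonicity: never more resource than with one ball more or one
    # wicket fewer (SFU do this properly via constrained optimization,
    # with Bayesian smoothing and simulation for sparse cells).
    for balls in sorted({k[0] for k in z}, reverse=True):
        for wkts in range(10):
            if (balls, wkts) not in z:
                continue
            cap = min(z.get((balls + 1, wkts), float("inf")),
                      z.get((balls, wkts - 1), float("inf")))
            z[(balls, wkts)] = min(z[(balls, wkts)], cap)
    return z
```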

Q: But do they work better?

SB: Fair question! And a fair answer is probably yes. The SFU table was created using data from over 350 T20 matches, including the first four IPL seasons. This is data from real T20 matches; not from 50-over matches pretending to be T20 matches! Let’s look at two examples.

In the England – West Indies 2009 World Cup game, in reply to England’s 161/6 in 20 overs, D/L’s Professional Edition required West Indies (WI) to score only 80 in 9 overs to win. WI reached 82/5 in just 8.2 overs to win easily, much to England captain Paul Collingwood’s dismay. The SFU model would have set WI a higher target of 86 runs in 9 overs.

In another England – West Indies game in 2010, England scored 191/5 in 20 overs. After a severe rain interruption, WI were left with only 6 overs to bat and required to score 60 runs by the D/L Professional Edition. They hit off the required runs in 5.5 overs. The SFU model would have set WI a higher target of 74 in 6 overs.

Q: How does the SFU resource table compare with the D/L table?

[Figure: heat chart comparing the SFU and D/L resource tables]

SB: The best way to compare is to look at a ‘heat chart’; the redder the cell, the greater the disparity between SFU and D/L (at its ‘reddest’ the disparity is 8%). Of course the disparity could be in either direction – so D/L can have either significantly more or significantly fewer resources than SFU in the redder regions.

Q: I’m generally seeing more red along the diagonal.

SB: The red along the diagonal is because D/L has more resources available than SFU. If you have more D/L resources available, it means D/L expects that you can score more runs … and therefore your D/L par score tends to be lower. That’s why D/L set relatively lower targets than SFU in our two examples.

Q: But there’s also a fair bit of red at the upper left corner.

SB: It turns out that the red in the upper left corner is because D/L has fewer resources available than SFU. The upper left corner corresponds to the start of a T20 innings, and SFU expects batting to be more circumspect at the start of the innings – so SFU reckons there’s lower resource depletion at this stage, and therefore greater availability.

Q: Still looks to me like a work in progress. I also notice that the SFU effort is just about modifying the combined resource table for T20. But what’s their rule for resetting targets?

SB: I agree that there’s much more work to be done in this non-parametric approach, but there’s also more opportunity here. Would that ‘best’ rain rule – when invented – be parametric or non-parametric? I’ve no doubt it will be the latter, especially as we enter the age of big data.

As for resetting targets, I’m guessing that the SFU model will still use the D/L rule involving R1, R2, S and T, and iteratively calculate the resource available after every interruption in the D/L way.

Q: With or without the G50 (or G20) criterion?

SB: Again, I really don’t know. It may be possible to manufacture an example where the SFU resource table also suffers from the ‘unreasonably high’ projection problem. We also need to figure out if reset targets after multiple interruptions continue to be consistent and well-behaved. One difficult test for non-parametric resource tables is to stay internally consistent when there’s an interruption just after the second innings begins; it is here that ‘smooth’ parametric models like D/L perform their best.

Q: Finally, what are you seeing in your crystal ball?

SB: The D/L method is built around two admirable components: (a) creating a resource table that best models the evolution of a limited overs cricket game and (b) creating a rule to obtain the par score that works well for all interruption scenarios. Over time, I see D/L’s resource tables becoming inadequate, but D/L’s par score idea will thrive and prosper.

Q: What would future resource tables look like?

SB: I think D/L’s resource tables for 50-over matches are here to stay, although modifying them based on ideas suggested by McHale-Asif would make sense. [2017 update: Just before the World Cup in 2015, D/L became DLS after incorporating the Stern correction. The Stern construct essentially addresses the same concerns articulated by McHale-Asif, but it uses a Beta distribution instead of the truncated Cauchy]. No one’s going to try very hard to change these resource tables because the 50-over game is slowly but surely fading away. There will still be excitement for the World Cup, but that will be ephemeral.

The resource table for the T20 game, on the other hand, needs urgent attention. Here there might be a tussle between parametric vs non-parametric models, but non-parametric models – perhaps with priors from a generic parametric model – will eventually rule because they are much better suited to adapt to change … and no one can deny that T20 will keep changing. We must, for example, consider how to bring in ‘playing conditions’ variables into the mix because they significantly impact the outcome.

Q: But D/L’s rule to obtain a par score will stay?

SB: It will, as it must. I know we must ensure that R2/R1 seamlessly scales both upwards and downwards, but that’s something good modelers should be able to do. I have no doubt that much of the excitement in future T20 games will revolve around the par score. You must have seen the recent appearance of WASP – WASP [8] is very similar in spirit to the “pressure index” [9] that we had introduced back in 2007. Both WASP and the pressure index are derivatives of the par score, and I am sure we can come up with many other such derivatives to spin even more excitement.

[1] McHale, I., and Asif, M., “A modified Duckworth-Lewis method for adjusting targets in interrupted limited overs cricket”, European Journal of Operational Research, 225(2), pp. 353-362, 2013.

[2] Duckworth, F.C., and Lewis, A.J., “A successful operational research intervention in one-day cricket”, Journal of the Operational Research Society, 55, pp. 749-759, 2004.

[3] Karandikar, R., and Bhogle, S., “The anomalous contraction of the Duckworth-Lewis method”, http://www.espncricinfo.com/magazine/content/story/459431.html. Retrieved Nov 2, 2014.

[4] Jayadevan, V., “A new method for the computation of target scores in interrupted, limited-over cricket matches”, Current Science, 83, pp.577-586, 2002.

[5] Bhogle, S., “Exit D/L, enter VJD?”, https://cricket.yahoo.com/blogs/yahoo-cricket-columns/exit-d-l-enter-vjd-1.html. Retrieved Nov 2, 2014.

[6] Natarajan, G., “Bio-allometry inspired resource estimation in Twenty20 cricket”, preprint submitted to Proc. of IMechE, Part P: J. Sports Engg. and Tech., September 3, 2014.

[7] Perera, H.P., and Swartz, T.B., “Resource estimation in T20 cricket”, IMA Journal of Management Mathematics, 24, pp. 337-347, 2013.

[8] Hogan, S., “Cricket and the WASP: Shameless self-promotion (wonkish)”, http://offsettingbehaviour.blogspot.co.nz/2012/11/cricket-and-wasp-shameless-self.html. Retrieved Nov 2, 2014.

[9] Bhogle, S., “One day cricket, and the concept of pressure”, http://www.rediff.com/wc2007/2007/mar/08pressure.htm. Retrieved Nov 2, 2014.

I thank V Jayadevan for many interesting discussions on the D/L rationale, and Ganesh Natarajan for pointing me to the McHale-Asif paper.

4 thoughts on “The D/L story so far”

  1. Just before the 2015 WC started, we read in the papers that D/L is now D/L/S. The S is ‘Stern’.

    D/L/S is very similar in spirit to the McHale-Asif method discussed here; the only difference is McHale-Asif use a truncated Cauchy distribution … while Stern recommends a beta distribution.

    Both these shapes accommodate T20 run scoring patterns better than D/L’s exponential decay functions. D/L never acknowledged this publicly, but with D/L/S they are finally pleading guilty.

  2. If runs are reduced, which seems to be the crux of every model, then why not reduce the number of wickets as well? For example, if the batting team needs 60 runs in 6 overs, then the bowling team should need to take only 3 wickets.

    • It is not so easy to manage the ‘reduce wickets’ process, even assuming it makes sense. Supposing the calculation reveals 2.5 wickets available, would it then be 2 wickets or 3? One extra wicket would make such a big difference!

      Also runs are not always reduced; in fact the target can increase. If the first team scores 200/2 in 40 overs when rain ends the innings, the target for the second team in 40 overs could be around 245.
