Nothing to see here…yet

As you’ve probably gathered by now, things haven’t gone entirely to plan. I did some system development work in July but got through less than I had planned, and the results weren’t as promising as I had hoped for either. That said, I had at least one system that was a potential candidate for the blog. Things then got complicated when I was admitted to hospital for major abdominal surgery at the very end of July, several weeks earlier than I had been expecting. A spell in hospital was followed by several weeks of recovery at home, and it was September before I knew it. But my recovery wasn’t going as well as I had hoped and unfortunately running a football system on this blog was one of the furthest things from my mind at that point. With so much of the season already elapsed, and with another short break coming up, I don’t think it is really worth trying to tidy up any system loose ends and start posting qualifiers here. So it’s unlikely you’ll see anything from me this season. Sorry.

That said, I hope to start work on some new football related projects in the coming weeks and it is possible that something useful will come out of those so perhaps I will have something to share with you in a few months. Think of this in terms of transfer windows if you like. The summer window may have shut without me unveiling any new players (or having any old players still on the team if you continue that analogy as I have scrapped all last season’s systems) but the chairman has promised me funds for January signings if I can find the right players. We’ll just have to wait and see what happens in the coming months.

Summer break

It’s been a few weeks now since I completed the reviews of last season, including a full breakdown of each system and a comprehensive look at the system development process. I wanted to let the dust settle on that before updating further. But I have decided it is now time for a summer break. I don’t think it is unreasonable for a football blog to take a bit of time off in the summer, so I am going to do just that. It is the close season after all, even if Fulham do start their Europa League qualification campaign this evening – it’s still June!

I hope the blog will be back again in August with some new ideas to trial. It’s not exactly clear what offerings I will return with but I have learned a lot from the 2010/11 season and my eyes are now open to a few system development tricks and things to avoid. All I can say at this stage is that while the blog may be taking a break that doesn’t mean I will be putting my feet up. I have been beavering away over the past week or so to get things in order for some serious system development work during July. It’s still early days but there are signs of some promising systems out there.

As I said, I’m not sure exactly what I will be back with in August but I hope to have something to share with you. The plan at the minute is to return with fewer systems than previously. I think seven was pushing it really so this time I will cherry-pick from whatever I manage to develop over the next few weeks and use the strongest and most reliable here. That’s the plan as it stands anyway but let’s see how things go over the coming weeks.

Thanks for your support to date, hope to see you back here in August.

Portfolio Systems Development Review

In order to conduct a meaningful review of how the various portfolio systems were developed I have to try to get back into the mindset I had last summer when the bulk of the work was done. That’s not proving that easy, to be honest. I can’t remember my exact train of thought at certain points so can’t quite see why I did certain things and went down certain routes. So some of the details in this review may be sketchy as it seems I didn’t fully document all of what I was doing. That’s the first lesson learned from this review – make extensive notes as you never know when you might need them. I have found some notes and all my spreadsheets so it’s not too bad really but there are still a few assumptions in this review. Regardless, I am confident that this review will be able to highlight a number of reasons why the portfolio systems underperformed this season and should also highlight a few valuable system development lessons too.

System Development Process
I wasn’t really sure how best to present the process I used to develop these systems. I was torn between outlining the process in full and then discussing it, or analysing each step of the process as I came to it. Either way, hopefully you will get an idea of how I did what I did.

Before any system development could take place I needed some data to work with. I opted to use data from the four English divisions (Premiership, Championship, League One and Two) for a period of 10 seasons starting with the 2000/01 season. That’s a total of 20360 matches. I took the data from football-data and cleaned it up a bit (for example I made all the team names consistent throughout the data) before use. I used a Microsoft Excel spreadsheet to store, manipulate and analyse the data. I was then ready to get started.

Step 1. Generate home/home, away/away, home/all and away/all six-game form and streaks data.

The data I had obtained from football-data was only a starting point. I used that data along with a few VBA macros to generate a huge volume of additional data for each match. These portfolio systems made use of the form and streaks data but a significant amount of other data was also generated and I hope to make good use of it in the future.

The form data was generated by examining results from a team’s last six games. It comprises the form string e.g. WWWDWL, with most recent result on the right and W equalling a win, D a draw and L a loss; the number of goals for and against; the number of points earned with three points awarded for a win and one for a draw; plus the number of games that have gone under/over 2.5 goals.

The streaks data counts the number of games a team has gone without a certain result or event happening, including: number of games without a win; games without a draw; games since last defeat; games without scoring a goal; number of games without conceding a goal; games since a match that ended with under 2.5 goals and number of games since a result that was over 2.5 goals.
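The real work here was done in Excel with VBA macros, but the form and streak calculations are simple enough to sketch in a few lines of Python. Everything below (function names, the sample history) is purely illustrative, not taken from the actual spreadsheets:

```python
# Illustrative sketch of the form and streak calculations described above.
# Each entry in a team's history is a (result, goals_for, goals_against)
# tuple, oldest game first.

def form_data(results, window=6):
    """Summarise the last `window` games: form string, goals, points, unders/overs."""
    recent = results[-window:]
    form = "".join(r for r, _, _ in recent)          # most recent result on the right
    gf = sum(f for _, f, _ in recent)
    ga = sum(a for _, _, a in recent)
    points = sum({"W": 3, "D": 1, "L": 0}[r] for r, _, _ in recent)
    unders = sum(1 for _, f, a in recent if f + a < 2.5)
    return {"form": form, "gf": gf, "ga": ga, "pts": points,
            "unders": unders, "overs": len(recent) - unders}

def games_since(results, happened):
    """Streak counter: number of games since `happened(game)` was last true."""
    count = 0
    for game in reversed(results):                   # walk back from the latest game
        if happened(game):
            break
        count += 1
    return count

history = [("W", 2, 0), ("W", 3, 1), ("W", 1, 0), ("D", 1, 1), ("W", 2, 1), ("L", 0, 2)]
print(form_data(history))                            # form string "WWWDWL", 13 points
print(games_since(history, lambda g: g[0] == "W"))   # games without a win -> 1
```

The same `games_since` helper covers all the streak variants above simply by swapping the predicate, e.g. `lambda g: g[1] + g[2] < 2.5` for games since a match ended under 2.5 goals.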

For each data category (form, streaks etc) I generated four sets of data: home/home, away/away, home/all and away/all. Home/home data relates to matches the home team has played at their home ground, away/away is data from away matches played by the away team whereas home/all and away/all refer to games played by the home and away side respectively regardless of whether they were played at home or away. This is perhaps made clearer with an example.

Take the final match of the 2009/10 Premiership season – Wolves v Sunderland – as our example. For this fixture, as with all fixtures, we want to generate data across the various data categories based on previous results. This data will be divided into four sets, each using different results. The home/home data will use data from the home team’s (Wolves’) recent games at Molineux. Thus the six-game home/home form data will be based on Wolves matches at home to Tottenham, Chelsea, Man United, Everton, Stoke and Blackburn. And you can check the fixture lists for that season to confirm they were Wolves’ opponents prior to the season closer against Sunderland. The six-game away/away form data will be based on Sunderland’s six most recent away games prior to their visit to Wolves. That means their trips to Portsmouth, Arsenal, Aston Villa, Liverpool, West Ham and Hull. The home/all form data would be based on the last six games Wolves played regardless of where they took place, i.e. Wolves v Everton, Arsenal v Wolves, Wolves v Stoke, Fulham v Wolves, Wolves v Blackburn and Portsmouth v Wolves. Similarly the away/all form data uses Sunderland’s last six games, which were Liverpool v Sunderland, Sunderland v Tottenham, West Ham v Sunderland, Sunderland v Burnley, Hull v Sunderland and Sunderland v Man United.
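In code terms, generating the four sets just means filtering a team’s prior matches in four different ways before taking the most recent six. A sketch with invented field names (the real version lived in VBA):

```python
# Sketch of partitioning prior results into the four data sets described
# above. Field names ("home"/"away") are illustrative.

def last_n(matches, n, keep):
    """The n most recent prior matches satisfying `keep`, oldest first."""
    return [m for m in matches if keep(m)][-n:]

def four_sets(prior_matches, home_team, away_team, n=6):
    return {
        "home/home": last_n(prior_matches, n, lambda m: m["home"] == home_team),
        "away/away": last_n(prior_matches, n, lambda m: m["away"] == away_team),
        "home/all":  last_n(prior_matches, n,
                            lambda m: home_team in (m["home"], m["away"])),
        "away/all":  last_n(prior_matches, n,
                            lambda m: away_team in (m["home"], m["away"])),
    }

prior = [{"home": "Wolves", "away": "Stoke"},
         {"home": "Fulham", "away": "Wolves"},
         {"home": "Sunderland", "away": "Spurs"}]
sets = four_sets(prior, "Wolves", "Sunderland", n=2)
print(len(sets["home/home"]))   # 1 -- only one prior Wolves home game here
```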

Step 2. Analyse form and streak data looking at more than 100 form and 50 streak potential system ideas for each of the four sets of data, with the summary performance for home wins, draws, away wins, under 2.5 goals and over 2.5 goals recorded.

Having generated all the necessary data, the next step was to start analysing it. This was done in a methodical manner, working through the four form-based data sets (home/home, away/away etc.) before tackling the streak-based data. For each of the four form-based data sets more than 100 system ideas were identified and analysed. These included teams drawing their last match, earning at least X points in their last six games, conceding no more than X goals in their previous six games, the goal difference of their last six games exceeding X, the total number of goals scored and conceded in the last six games being under X and so on. For each potential system idea a theoretical 1pt bet was placed on the home win, draw, away win, under 2.5 goals and over 2.5 goals whenever a match met the relevant selection criteria and the number of qualifiers, strike rate and ROI for each were noted. I found the files containing this information so I did at least document some of what I was doing – phew!

A similar process was then used for the streak data, with approximately 50 potential systems being analysed for each of the four data sets. These included teams being unbeaten in their last X matches, teams having gone X games without a draw and so on. As with the form data the potential returns from theoretical 1pt bets were recorded so one could soon see, for example, whether it was worth backing the home win when the home team were unbeaten in their last five games at home or whether the away team might be trading at value odds.

With over 100 system ideas for each of the four form-based data sets, a further 50 for each of the streak-based data sets, and the returns from home wins, draws, away wins, under 2.5 goals and over 2.5 goals recorded for each, this meant in excess of 3000 potential systems to analyse.
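The evaluation step boils down to a simple loop: find the qualifiers for a rule, settle a theoretical 1pt bet on each in a given market, and record the headline numbers. A minimal sketch – the match records and the rule below are invented for illustration:

```python
# Sketch of Step 2: for one candidate rule, place a theoretical 1pt level-stakes
# bet whenever a match qualifies and record qualifiers, strike rate and ROI.

def evaluate_idea(matches, qualifies, won, odds_of):
    """Qualifiers, strike rate (%) and ROI (%) for 1pt level stakes."""
    bets = [m for m in matches if qualifies(m)]
    if not bets:
        return {"bets": 0, "sr": 0.0, "roi": 0.0}
    winners = [m for m in bets if won(m)]
    # winners return (odds - 1)pt each; losers cost the 1pt stake
    profit = sum(odds_of(m) - 1 for m in winners) - (len(bets) - len(winners))
    return {"bets": len(bets),
            "sr": 100 * len(winners) / len(bets),
            "roi": 100 * profit / len(bets)}

matches = [
    {"drew_last": True,  "result": "H", "home_odds": 2.1},
    {"drew_last": True,  "result": "A", "home_odds": 1.8},
    {"drew_last": False, "result": "H", "home_odds": 1.5},
]
stats = evaluate_idea(matches,
                      qualifies=lambda m: m["drew_last"],
                      won=lambda m: m["result"] == "H",
                      odds_of=lambda m: m["home_odds"])
print(stats)   # 2 qualifiers, 1 winner -> SR 50%, profit 1.1 - 1 = 0.1pt, ROI ~5%
```

Running the same rule against all five markets (home, draw, away, unders, overs) just means calling `evaluate_idea` five times with different `won`/`odds_of` functions.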

Step 3. Analyse and optimise the 3000+ system ideas above using month, division and odds filters

It was rare for any of the 3000+ system ideas that came out of step 2 to be profitable in the raw form they were in at this stage, so each was subjected to an analysis and optimisation process. The spreadsheet I had crafted provided me with a full breakdown of each potential system, splitting the data by season, month, division and odds. The seasonal split simply gave me an idea of whether a couple of freak years were providing the vast majority of the profits or whether the trend generated consistent profits through the years. The month, division and odds splits were used to filter the data with a view to improving the profitability.

The three filter types were used in combination in order to optimise the trend. After applying each filter I would re-analyse the data, generating a new set of data for each of the various breakdowns so that I could see the effect of each filter in isolation. For example, the initial breakdowns may indicate that the system is particularly profitable in the Premiership and when the home team is odds on. I would apply the Premiership filter and re-examine the breakdowns for the filtered data as the odds brackets that looked profitable before may not stand out as much having filtered the data.

Each potential system was analysed to provide a breakdown of the performance by month, division and odds range. Few of the raw trends were profitable over any significant number of bets, but a few filters could soon rectify that, and this step revealed numerous profitable little trends. By breaking the overall figures down so I could see the number of bets, strike rate and ROI for each month, division and odds range, I could start to mine for areas I felt I could exploit for profit.

The available month and divisional filters were obviously the same for all systems – trends could be filtered down into bets during August, September etc as well as by Premiership, Championship, League One and League Two. The odds filters depended on the market in question. For example, for home wins my lowest price bracket was ‘under 1.50’ but there would be no point having such a bracket for draws, which are always 2/1 or greater. A total of eight price brackets were employed for each of the 1×2 markets, with the brackets varying in size depending on how much granularity I wanted from the analysis. Odds filters were not employed in the analysis of the under/over 2.5 goals bets because there is so little variation in the odds.
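A rough sketch of how this filtering might look in code. The price brackets shown are invented for illustration – they are not the brackets actually used – and the field names are likewise made up:

```python
# Sketch of the month/division/odds filtering described above.
# Example home-win brackets, purely illustrative; a None filter means
# "no filtering on that dimension".

HOME_WIN_BRACKETS = [(1.0, 1.5), (1.5, 1.7), (1.7, 1.9), (1.9, 2.1),
                     (2.1, 2.5), (2.5, 3.0), (3.0, 4.0), (4.0, 1000.0)]

def apply_filters(bets, months=None, divisions=None, odds_bracket=None):
    """Keep only bets matching every supplied filter."""
    def keep(b):
        if months and b["month"] not in months:
            return False
        if divisions and b["division"] not in divisions:
            return False
        if odds_bracket:
            lo, hi = odds_bracket
            if not (lo <= b["odds"] < hi):
                return False
        return True
    return [b for b in bets if keep(b)]

bets = [{"month": "Aug", "division": "Premiership", "odds": 1.6},
        {"month": "Jan", "division": "League One",  "odds": 2.2}]
print(apply_filters(bets, divisions={"Premiership"}))   # only the first bet survives
```

Re-running the breakdowns after each filter is applied (as described above) is then just a matter of calling the Step 2 evaluation on the filtered list instead of the full one.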

When filtering these potential system ideas I tried to ask myself why a trend would be profitable in some cases and not others. For example, why should a trend be profitable in some months and not others? It could be that the form needs to settle down meaning a system isn’t necessarily profitable at the start of the season. A trend may only be profitable in the top divisions due to the higher quality of the teams and football played (although I have to admit I don’t really buy that one any more). Is there any reason why short-odds selections do worse than the longshots? Perhaps bookies are shortening so-called banker teams like Manchester United and Chelsea knowing that casual punters will back them whatever odds they are which means the opposition may be available at fantastic odds.

I say I tried to ask questions like those above during the filtering process and as far as I can recall I was reasonably diligent. I’m pretty sure I didn’t note down all the profitable trends I identified but in hindsight I should have been more strict with this idea and questioned the filters applied in many more cases. Some of these filtered trends seem pretty hard to justify now that I look at them again.

During this filtering and optimisation process I tried to ensure that the resultant samples were not too small and that the number of bets remained significant. I was trying to avoid the situation whereby a profitable trend was identified but it only throws up a bet once every blue moon as that doesn’t suit anyone. Such systems have a massive risk of a bet being missed (either I miss posting it or you miss backing it) and inevitably that will be a winner and the next one won’t be along for ages. I was trying to strike a balance between a ridiculously heavy workload and a stupidly small number of bets. I am not sure I succeeded in all cases.

Step 4. Best trends packaged into first draft portfolio systems

At the end of the previous step I had identified a large number of profitable trends. My plan was to package these up into a number of portfolio systems. It was always my intention to develop a number of systems with each having a different focus. I wasn’t sure at the start of this process how many systems I was aiming for, I was waiting to see how many the data threw up really. But by this stage of the process it was obvious that I should have separate systems for each of the bet types in the 1×2 and under/over markets. I also wanted to keep the form-based data and streak-based data separate so I could monitor the effectiveness of them individually. I didn’t have enough in the way of suitable streak-based trends to separate out all options in the 1×2 market so I went for a combined system there and similarly with the under/over 2.5 goals market.

I began to group the various trends into several categories in order to form the first draft of the portfolio systems. All the form-based trends that selected home wins were put into one portfolio system, all the form-based draw trends in another and so on. The number of trends included in each portfolio system at this point varied significantly. The first draft Home Win Form Portfolio System contained 25 different trends while the Under Form Portfolio System comprised 65 unique trends. This meant each portfolio system varied significantly in terms of number of bets, strike rate and profit but that wasn’t my concern at this stage.

As I had this portfolio system approach in mind it perhaps influenced some of the work done in the previous step and meant that some trends made the cut that perhaps shouldn’t have done. Knowing that I was going to bundle together several trends, I might have continued to develop some very specific trends that wouldn’t stand up by themselves, knowing they would be hidden away in a portfolio with other systems/trends for support. For example, I worked on some trends that were only profitable at the start of the season, knowing they could be packaged with others that were profitable at other points throughout the year. In effect I generated some systems that completely changed in nature depending on the month.

The portfolio system approach is one I will use in the future as I think there is great value in it but one needs to be quite careful how it is applied. I think the key is to first develop a number of individual systems that are all profitable in their own right, but also systems that one would be happy to follow on their own. It’s that last condition I failed to meet in some cases. The advantage of portfolio systems is that they can help smooth out the various ups and downs. But if any of the constituent parts fail it can be difficult to spot that as early as one would if following each system separately.

Step 5. Portfolio systems optimised by adding/removing individual trends

I said above that the first draft portfolio systems varied greatly in terms of workload and profitability. This step was to address that issue. I went into it with a vague idea of the profile I was aiming for. I wanted to avoid systems that generated little action so wanted each of the various portfolio systems I was building to provide a good number of bets each season. And I obviously wanted a decent rate of return. On that front a return on investment of 10% was my absolute basement figure and I was aiming for closer to 15% or even 20%. What’s more I wanted a steady accumulation of profits rather than the bulk of them coming in a short period. With this image of a ‘perfect’ system in mind I could start to optimise each of the portfolios.

The first step in the optimisation process was to obtain statistical breakdowns (by season, month, division and odds range as in Step 3) for each of the individual trends in the portfolio. This was a simple process that naturally dropped out of the previous steps. Once I had all this information I used my spreadsheets to toggle each individual trend on and off to see what impact it had on the overall figures as well as what changes it made to the profit accumulation graph. I was looking for a combination of a number of trends to give me the desired number of bets per season, the right sort of level of return and the steady profit accumulation I was after.
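The toggling process can be thought of as a search over subsets of trends for one that meets the target profile. In the spreadsheets this was done by hand; the sketch below brute-forces it instead, which is only feasible for a handful of trends (with 65 trends the number of combinations explodes). All figures are invented:

```python
# Sketch of Step 5: score every on/off combination of a few trends against
# a target profile (minimum bets, minimum ROI) and keep the most profitable.

from itertools import combinations

def combined_stats(trend_bets):
    """Merge the bet lists of the enabled trends into overall figures."""
    bets = [b for t in trend_bets for b in t]
    profit = sum(b["profit"] for b in bets)
    roi = 100 * profit / len(bets) if bets else 0.0
    return {"bets": len(bets), "profit": profit, "roi": roi}

def best_subset(trends, min_bets, min_roi):
    """Highest-profit subset of trends meeting the bet-count and ROI targets."""
    best = None
    for r in range(1, len(trends) + 1):
        for subset in combinations(trends, r):
            s = combined_stats(subset)
            if s["bets"] >= min_bets and s["roi"] >= min_roi:
                if best is None or s["profit"] > best["profit"]:
                    best = s
    return best

# Two toy trends, each a list of settled 1pt bets with their profit in points.
trend_a = [{"profit": 1.1}, {"profit": -1.0}, {"profit": 0.9}]
trend_b = [{"profit": -1.0}, {"profit": -1.0}, {"profit": 2.5}]
print(best_subset([trend_a, trend_b], min_bets=3, min_roi=10))
```

An exhaustive search like this also makes the overfitting danger discussed later very concrete: with enough trends to pick from, some subset will always fit the target profile on historical data.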

At this stage it didn’t really matter which of the individual trends I was including and which were omitted. I wasn’t really paying any attention to how the individual trend rules would combine to form the parent portfolio system. Take the Over Form Portfolio System as an example. In August the bets were being selected by trends based on things like: goal difference in all divisions except the Championship; away teams in the Premiership having drawn their last two matches; and the number of goals scored by the away side. In hindsight it seems like an odd set of rules, doesn’t it? When combined to form the parent system they don’t seem to make any sense together or complement one another. My focus at this stage was simply getting the portfolio system to meet the desired profile. Looking back I can see I was so obsessed with developing that ‘perfect’ system delivering smooth, consistent profits that I took my eye off the ball when it came to the actual make-up of the system.

Step 6. Live trial on blog

The final step in the process was to conduct a live trial of all portfolio systems. Backtesting is obviously a good idea in order to get some idea of the likely performance, but live testing is an essential part of any strategy. How else can you know if your idea will work in the real world or not? So I set this blog up, put together the spreadsheets I needed to make sorting out the qualifiers easier, and started posting.

We’ve already seen that for the most part the live trial didn’t work in the slightest. The systems failed catastrophically and a couple of them recorded heavy losses. I hope none of you got your fingers burned too much. I did say right from the outset that this was a live trial and the first time these systems had been subjected to real data rather than backtesting. I followed the systems to small stakes up until January so I have literally paid the price for a flawed development process.

Having reviewed the system development process I think it’s fair to say there are a number of flaws in there, many of which I have already identified. I may not have spotted every weakness in my work but I am sure that addressing the big issues I have picked out will greatly improve the quality of future work. I certainly don’t feel that the whole process was flawed though. There are some steps that I would carry through to future developments lock, stock and barrel. But there are also other steps I would never use again in their current form. The key is to recognise which is which and learn from this experience.

I think the problems started during the analysis and optimisation of the individual system trends. Some of the trends were too specific and despite writing warnings about the dangers of back-fitting throughout what documentation exists for this work I still fell into that trap somewhat, e.g. one trend I used was home team drew last match, month is November and home win odds (for next match) are greater than evens. OK, if team drew last match then perhaps that will affect their win odds next time out but why should this trend work in November but not October, December or any other point of the season?

Shortly before I started this system development process I had been reading about the effects of the weather on football results. The articles concerned the average number of goals per game during sunny and rainy periods. I found it interesting stuff and began to do a little work of my own in this area. However, I think I got carried away somewhat and made far too many assumptions when it came to developing the portfolio systems. Take the Under Form Portfolio and Over Form Portfolio as examples. The former has very few bets in the first few months of the season and really ramps up the activity from January onwards while the latter is more or less the opposite. Poor weather leads to fewer goals on average, but this idea has been taken to the extreme somewhat here. I have made general assumptions about the weather and applied them to the development process. A certain under 2.5 goals trend is profitable in January and February because the weather is lousy, right? Perhaps, but is the weather in December not also crappy? Why isn’t the trend profitable then too?

I stated earlier that I tried to question why trends should be profitable only when certain filters are applied. As you have just seen, my attempts at justifying some of those filters now look quite flaky, and were I (or someone else) to go back over these trends I doubt many of them would stand up to scrutiny. Filtering the trends by month is perhaps the greatest weakness of this work. To some extent this was justified on the basis of weather as I mentioned above, but that’s not a solid enough reason for such filtering. The divisional filtering is largely based on the ability of the average player in that division and the class of football played, but I no longer feel the difference is that significant. However, there is also the fact that bookies see a much greater turnover on Premiership matches than lower leagues, so the odds compilers look to get the Premiership markets spot on, leaving less time for other divisions. That may partly explain why some trends are profitable in certain divisions and not others. That said, I would expect such filters to take the form of ‘Premiership only’ or ‘lower divisions only’ and not ‘all divisions except the Championship’ as was the case for some of my trends.

I was blind to some of the weaknesses of the filtering process because the individual trends would be packaged into portfolio systems. I think that at the time I had the view that the weaknesses of the individual trends would be glossed over because other trends would be generating profits. The portfolio system idea seemed like a magic bullet with few drawbacks. Obviously that’s not the case, and I have already mentioned that I now feel the individual trends forming any portfolio system must also be capable of standing alone. I identified a large number of profitable little trends but many of them were far too specific to stand alone, so the danger of backfitting becomes very real. I found it hard to throw most of these trends away. I should have used the gardening principle of thinning down to only the strongest but I didn’t.

I was also striving too hard to develop a system that fitted my view of the perfect system. I wanted the number of bets, strike rate and ROI all to fall into a certain range of values and that drove the development far more than it should have done. More than that I was seeking a smooth profit curve and tried, where possible, to balance the number of bets per month too. There is an air of the tail wagging the dog about this. At the time I had it at the back of my mind that I shouldn’t be including/excluding trends just to balance the profits but I still did it. This is gambling and profits generally don’t come along at a steady rate. There are peaks and troughs, winning and losing runs, good and bad spells. They are part of the game and you can’t just smooth them out, not here in the real world.

Where do we go from here? Obviously the systems can’t run again as they are, knowing what I now do about the development process. How can anyone expect the product of a flawed development process to work? That said, appearances can be deceiving. The Over Form Portfolio looks like it went exactly to plan in League One this season, but you have to ask yourself why it worked in that division and not others. I’m pretty sure this season was a fluke and were you to follow the Over Form Portfolio in League One next season there is certainly no guarantee it will perform to anything like the same standard. You have to be careful how much you read into results breakdowns. Anyway, I will retire the current suite of systems and go back to the drawing board. Whether I will have any offerings for next season remains to be seen. I hope to have something else to trial but I’m not sure whether I will have the time to develop something. One thing is for sure though: I have learned several lessons this season, so hopefully I can reap the benefits over the coming seasons.

UO Streaks Portfolio System Performance Review

The UO Streaks Portfolio System specialises in bets in the under/over 2.5 goals market using streak-based data to select bets. An average season will see in the region of 350 bets placed for a return of approximately 60pts at a return on investment close to 18%.

Performance Summary
A summary of this season’s performance is shown on the left-hand side of Table 1 (below) with the aggregate figures from the previous five seasons, obtained during backtesting, shown on the right-hand side. Allowing a little leeway for the periods during which I was unable to post qualifiers to the blog I think it is clear that this season saw a roughly average number of bets. Previous seasons have seen between 322 and 396 bets with an average of 351 and while this season’s figure was at the bottom end of that range I think it is in line with expectations. The winner count is down on an average season though, as shown by this season’s strike rate of 52.27% compared to a five-year average of 63.42%. The previous lowest SR for a season was 59.61% back in 2006/07 so we have fallen a long way short of that mark, around 23 winners short in fact.

Table 1: Summary Stats

                     2010/11    2005/06 – 2009/10
Bets                 308        1755
Winners              161        1113
SR                   52.27%     63.42%
Profit (ave odds)    -0.81pts   315.08pts
ROI (ave odds)       -0.26%     17.95%
Profit (max odds)    10.82pts   –
ROI (max odds)       3.51%      –

It should go without saying that winners drive the profit figures and without those winners we were never likely to reach the heights we expected to hit based on the previous five years. In the end the losses were minimal to average odds, less than 1pt in fact, but things would have looked quite different had we not missed out on over 20 winners. Actually it’s worse than it seems isn’t it? Our winners tally didn’t just fall short by over 20, those bets were chalked up as losing bets which obviously detract from the bottom line. It’s not just that we didn’t get as many winners as expected, we also had more losers to compound the problem. Things look better to maximum odds, obviously, with a profit of 10.82pts but this is still a long way short of the 63pts profit an average season would net to average odds. The worst previous season saw a return of 34.53pts so this season ended a long way off that let alone the average seasonal profit.

Data from the previous five seasons was used in order to backtest this system and generate an idea of what sort of winning and losing runs we should be prepared for. The longest winning run during that testing phase was 13 consecutive bets for a profit of 13.02pts while the longest losing run covered six bets for a 7pt loss. This season the magic number seems to be seven as both the longest winning and longest losing run extended to seven bets, the former winning 6.21pts profit while the latter saw a loss of 8pts in stakes. We eased the longest losing run up a notch this season and failed to hit the string of winners we had seen previously but this is probably nothing too out of the ordinary. If you lack winners then these sort of things have to be expected I suppose.
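For reference, the winning/losing run figures above come from a simple scan over the bet history, something along these lines (the example sequence is invented):

```python
# Track the longest unbroken winning and losing runs in a bet history.

def longest_runs(outcomes):
    """outcomes: sequence of True (winner) / False (loser), in betting order."""
    best = {"win": 0, "lose": 0}
    current = {"win": 0, "lose": 0}
    for won in outcomes:
        if won:
            current["win"] += 1
            current["lose"] = 0       # a winner resets the losing run
        else:
            current["lose"] += 1
            current["win"] = 0        # a loser resets the winning run
        best["win"] = max(best["win"], current["win"])
        best["lose"] = max(best["lose"], current["lose"])
    return best

print(longest_runs([True, True, False, False, False, True]))   # win run 2, lose run 3
```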

This season saw a total outlay of 308pts across 270 different matches for an average stake of 1.14pts per match. This is slightly lower than the five-year average of 1.18pts but nothing of concern really. The maximum stake employed this season was 3pts, one point lower than the previous maximum but more important is the frequency with which multi-point stakes were called for. Stakes of more than 1pt were used in 34 games this season, 12.59% of the total. In previous seasons this figure was 15.47% so the staking this season was slightly down on average, which would explain the lower average stake. It could be that these figures are not significantly different to the averages but I don’t think it is worth worrying too much about going down that route as I don’t feel it will yield anything much that will help guide the future direction of this system.

A betting bank of 30pts was the recommended figure for those wishing to follow this system and I would like to just evaluate whether that was a sensible bank size. The advised figure came from an assessment of the previous losing runs combined with an allowance for the concurrent nature of most football fixtures which obviously increases the exposure and hence a larger bank is required. On the face of it the 30pt bank was just sufficient as the losses peaked at 29.17pts midway through February, just below the point that would see the bank bust. However, that doesn’t tell the whole story as losses of 29.17pts leave less than the required stake for the next bet and also doesn’t leave anything for future seasons. Clearly that’s not right. We could factor in the maximum daily outlay to give a revised bank figure. This season the maximum exposure was 27pts so assuming that we should always have such an amount available the bank effectively busts when we lose 3pts, and that occurred at the start of October. It looks like the advised bank was far too small for a proper gambling strategy to apply.
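The bank assessment above can be expressed as a small drawdown check. The numbers in the example below are invented; the point it illustrates is the one made in the text, namely that reserving the maximum concurrent exposure makes the bank effectively bust far earlier than the raw drawdown suggests:

```python
# Sketch of the bank-bust check: track cumulative profit bet by bet and flag
# the point at which the drawdown eats into the bank (less any exposure we
# want to keep in reserve for concurrent fixtures).

def bank_busts(bet_profits, bank, reserve=0):
    """1-based bet number at which the drawdown exceeds (bank - reserve),
    or None if the bank survives."""
    balance = 0.0
    for i, p in enumerate(bet_profits, start=1):
        balance += p
        if balance < -(bank - reserve):
            return i
    return None

profits = [-1, -1, 2.2, -1, -1, -1, -1, -1]
print(bank_busts(profits, bank=4))              # busts on bet 8
print(bank_busts(profits, bank=4, reserve=3))   # reserving 3pts, busts on bet 2
```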

The system showed a loss this season but was that bad luck or did we not enjoy any sort of an edge over the bookies? At average odds of 1.91 and a strike rate of 52.27% we are very close to the breakeven strike rate of 52.36% as one may have suspected from the fact that we only made a small loss this year. However, things look a little more interesting when we break down the figures by bet type. The under 2.5 goals bets made 4.42pts profit at average odds of 1.89 with a strike rate of 54.30%, a touch above the breakeven SR of 52.91%. It is the over 2.5 goals bets that let us down with a loss of 5.23pts at average odds of 1.94 with a strike rate of 49.18%, a few points off the breakeven figure of 51.55%. It looks like the under bets may enjoy a very slim edge but that is countered by the negative edge on the overs.
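All the breakeven figures quoted here follow from one simple relationship: at decimal odds o you break even when the strike rate equals 1/o. A quick illustrative check (not code from the system):

```python
# Breakeven strike rate at decimal odds o: a win returns (o - 1) per point
# staked, a loss costs 1, so sr*(o - 1) - (1 - sr) = 0 gives sr = 1/o.
def breakeven_sr(decimal_odds: float) -> float:
    return 1.0 / decimal_odds

for odds in (1.91, 1.89, 1.94):
    print(f"odds {odds}: breakeven SR {breakeven_sr(odds):.2%}")
# odds 1.91 -> 52.36%, odds 1.89 -> 52.91%, odds 1.94 -> 51.55%
```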

Detailed Analysis
A monthly breakdown of this season’s performance is shown in the left-hand side of Table 2 while the right-hand side provides the aggregate figures from the previous five seasons. If we divide the right-hand figures by five we can use the results as an estimate of what an average season should look like. As you can see the system is quite stop-start in nature with the number of bets per month varying quite a lot. Let’s take a quick look though at whether we had the expected number of bets at various points.

We only had roughly two-thirds of the expected number of bets in August but the SR and ROI were pretty much right on the money, getting the season off to a solid start. September is a quiet month and small samples are notoriously unreliable so the fact we made a slight loss rather than booking a small win is of no real concern due to the low number of bets. October was busier than average (52 bets compared to an average of around 36) but lacked the winners required to turn a profit. The SR was a long way off the five-year average as we missed out on something like nine more winners in that month alone. Obviously had we hit those winners we’d have made a profit rather than losing nearly 9pts. Those nine missing winners would have put the profit and ROI right back in line with expected figures. November and December are rest months with no selections.

Table 2: Monthly Breakdown (Ave Odds)
2010/11 2005/06 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
August 30 17 56.67% 4.32 –   August 229 133 58.08% 34.04 14.86%
September 4 2 50.00% -0.29 -7.17% September 40 24 60.00% 6.00 15.00%
October 52 22 42.31% -8.96 -17.23% October 179 107 59.78% 31.52 17.61%
November 0 0 0.00% 0.00 0.00% November 0 0 0.00% 0.00 0.00%
December 0 0 0.00% 0.00 0.00% December 0 0 0.00% 0.00 0.00%
January 47 14 29.79% -21.57 -45.89% January 240 151 62.92% 36.71 15.30%
February 107 67 62.62% 20.52 19.17% February 754 495 65.65% 138.27 18.34%
March 17 9 52.94% -0.40 -2.32% March 83 56 67.47% 17.58 21.18%
April 44 25 56.82% 3.30 7.50% April 179 113 63.13% 38.80 21.68%
May 7 5 71.43% 2.27 32.39% May 51 34 66.67% 12.16 23.84%

January was a short month due to my spell in hospital but despite that we still racked up an average number of bets. Had I been available for the full month I wonder how many more bets would have been recorded in January and whether that would have pushed the figure outside the realm of expectations. There was a severe lack of winners amongst the bets that were recorded in January, however. The strike rate was less than half the expected value as the month fell approximately 15 winners short of an average season. That’s a hell of a difference. At average odds of 1.91 those 15 missing winners mean a profit-to-loss swing of over 28pts, coincidentally the difference between the actual and expected returns. A poor January was followed by an excellent February which all but wiped out the previous month’s losses despite the fact there were far fewer bets than average. The strike rate and return on investment for February were in the right ball park though and had there been more qualifying bets we could reasonably have expected more profits. The closing months of the season were nothing too much to write home about although they did return a small profit despite there being fewer winners than expected during March and April.
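The 28pt swing is easy to verify: converting a 1pt loser into a winner at decimal odds o moves the return from -1 to +(o - 1), a swing of o points per bet. An illustrative check using the figures above:

```python
# Profit swing from turning n losing 1pt bets into winners at the given
# decimal odds: each conversion is worth `odds` points.
def swing_from_missing_winners(n_winners: int, decimal_odds: float) -> float:
    return n_winners * decimal_odds

print(round(swing_from_missing_winners(15, 1.91), 2))  # 28.65, the "over 28pts" above
```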

Table 3 uses the same format as above to illustrate the performance broken down by division so again we can divide the right-hand figures by five to provide an estimate of an average season. Doing that shows there were fewer Premiership bets than in an average season, for reasons that aren’t entirely clear. It seems unlikely that the 20-odd bets would all have come in the periods I was unable to post to the blog given there weren’t that many more top-flight bets throughout the rest of the season. The number of League One bets is similarly down on the five-year average but this time there were many more bets in total so being 20 down on the averages isn’t as striking. The other divisions were close enough to the averages so as not to raise suspicions.

Table 3: Divisional Breakdown (Ave Odds)
2010/11 2005/06 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
Premiership 29 12 41.38% -5.95 -20.50% Premiership 254 156 61.42% 44.47 17.51%
Championship 25 14 56.00% 2.10 8.40% Championship 151 91 60.26% 28.78 19.06%
League One 143 76 53.15% 4.69 3.28% League One 851 525 64.42% 147.18 18.06%
League Two 111 59 53.15% -1.65 -1.49% League Two 535 341 63.74% 94.65 17.69%

The recorded number of winning bets is lower than average in all divisions but strikingly so in the Premiership. It’s no surprise then that the returns from the top flight were the worst of all divisions. In terms of performance compared to the five-year averages though it is League One and League Two that exhibit the greatest difference. An average season in League One should see a return of nearly 30pts whereas this season the profit was less than 5pts, although it was at least a profit. The returns from League Two were slightly negative and more than 20pts below the average figure from previous years. The Premiership returns were nearly 15pts shy of the average while the Championship was only short by a few points, but one must allow for the relatively small samples in the top two divisions.

The long and short of it, though, is that each division fell short of the average strike rate and, because of that, none could match the returns of previous seasons; the ROIs were all a long way off the five-year averages.

A small loss at the end of the season doesn’t seem so bad in the light of some of the other systems I have reviewed of late, but the fact of the matter is this was still a loss and not the juicy profit it should have been according to the five-year figures obtained during testing. The overall strike rate was lower than expected, not helped in the slightest by months such as January, but really the lack of winners was spread throughout the season. Had the strike rate been closer to the expected value then I have no doubt that the profits would also have been far more in line with expectations. But it wasn’t to be. Why didn’t this system live up to its billing? I hope the system development review will answer that one.

HDA Streaks Portfolio System Performance Review

As the name suggests, the HDA Streaks Portfolio System uses streak data (number of games since a win, length of unbeaten run etc) to predict winners in the 1X2 (home/draw/away) market. It is a relatively busy system with a typical season seeing close to 600 bets placed for a profit of around 120pts at a return on investment of approximately 20%.

Performance Summary
Table 1 below summarises the performance of the HDA Streaks Portfolio System over the last season (on the left) as well as providing figures from the previous 10 seasons (on the right) for comparison purposes. The 2010/11 season was a bit quieter than normal with just (!) 470 bets placed compared to a 10-year average of 589. However, allowing for the periods I was unable to post selections (especially during a traditionally busy January, as we shall see later), and taking into account the natural variation in the number of bets each year (which has ranged from 566 to 607), things are probably just about in line with expectation and any deviation is probably not worth worrying about too much at this stage. What is worth worrying about, though, is the significantly lower than average number of winners, which obviously brings the associated strike rate down. The SR was over 10% lower than the average mark and a long way short of the previous lowest value of 33.72%. Those figures are equivalent to 50 fewer winners than average and 32 winners short of the previous lowest SR. Things are not off to a good start here.
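The shortfall figures can be reproduced by applying the benchmark strike rates to this season’s bet count. A rough illustrative sketch (the small discrepancy comes from the quoted SRs being rounded):

```python
# Expected winners = benchmark strike rate x bets; the shortfall is the gap
# between that and the winners actually recorded.
def winner_shortfall(bets: int, actual_winners: int, benchmark_sr: float) -> float:
    return benchmark_sr * bets - actual_winners

print(round(winner_shortfall(470, 126, 0.3764)))  # ~51, the "50 fewer winners than average"
print(round(winner_shortfall(470, 126, 0.3372)))  # 32 short of the previous lowest SR
```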

Table 1: Summary Stats
2010/11 2000/01 – 2009/10
Bets Winners SR   Bets Winners SR  
470 126 26.81%   5887 2216 37.64%  
Ave Odds Max Odds Ave Odds Max Odds
Profit ROI Profit ROI Profit ROI Profit ROI
-65.69 -13.98% -40.67 -8.65% 1228.16 20.86%

An average season would result in profits in the region of 120pts, a far cry from the loss of 65.69pts recorded this season. Admittedly that 120pts is an average and the actual returns have ranged from 33.17pts to 184.35pts, but that still means we are nigh on 100pts off the previous worst season. Shocking. As we have seen in previous reviews, settling all bets to maximum odds makes quite a significant difference to the figures, adding 25pts to the returns and increasing the ROI by over 5%, which just goes to show how lousy the average odds must have been in some cases. But the 10-year stats were compiled using average odds (albeit calculated slightly differently) and they show a very healthy profit. Even at maximum odds 2010/11 was a long way off being profitable.

Backtesting over the previous 10 seasons gave some idea of the winning and losing runs one should expect from this system. The longest runs were seven consecutive winners for a profit of 26.15pts and a massive 24 consecutive losers costing 26pts. While I don’t expect such runs to occur every season they do at least provide some context when analysing the runs experienced this season. It transpires the longest winning run this season was four bets for a profit of 17.01pts while the longest losing run was 16 bets costing 19pts in stakes. Such runs are comfortably within the limits of previous seasons so are nothing to worry about it seems.

Let us now take a look at the staking of this system. In 2010/11 we staked 470pts across 415 matches for an average stake of 1.13pts. Historically this average has come out at 1.15pts so that’s OK. Before this season the previous largest stake was 5pts and stakes in excess of 1pt had been called for on 12.57% of occasions. This season staking peaked at 4pts and 11.57% of matches called for multi-point stakes. On that basis it seems as though this season’s staking was very much in line with previous years.

Prior to the season starting I evaluated the data from previous years and recommended a betting bank of 55pts for this system. Was that a sensible figure? It’s easy to say no because the losses incurred amounted to more than the advised bank figure, but let’s look at the numbers in a bit more detail. Previously the longest losing run had set us back 26pts so, allowing for that plus a bit of breathing space to account for the fact that often several bets have to be placed at the same time (increasing the overall exposure), I opted for a 55pt bank. The bank came very close to busting in mid-February before finally tipping over the edge at the end of March. It never really recovered from that point. If you factor in the maximum daily outlay/exposure, which this season was 28pts, the bank effectively bust on New Year’s Day when losses exceeded 27pts, as at that point the accumulated losses and maximum outlay exceeded the advised bank figure.

The advised bank was based on previous drawdowns, but given the size of the losses incurred this season what sort of bank should have been advised? For that we should think in terms of strike rates and average odds to evaluate whether the system holds any long-term edge. Our SR of 26.81% is poor, especially when stacked up against the average SRs for the markets we were betting in. My records show teams win 44.94% of home games, 27.25% of matches end in a draw and the away side is victorious the remaining 27.81% of the time. Even if all our bets this season were on the trickier draws and away wins we fell short of the mark one would achieve through sheer luck. What about the individual strike rates for each bet type? Just 84pts out of the total 470pts staked were on home wins, equivalent to 17.87%. Of those 84 bets only 26 were winners so our home win SR is way down at 30.95%, nearly 14% below the average. We staked 165pts on draws (35.11% of the total stakes) with a strike rate of 25.45%, a few percentage points down on average but not too bad. The remaining 221pts were placed on away wins, equal to 47.02% of all bets on this system, but the strike rate was pretty poor at just 26.24%. So while the draw and away strike rates were much closer to the average than the home win SR, it is those two bet types that have really done the damage, as between them away wins and draws account for over 80% of the bets placed.

What about the average odds on those bets? Overall the average odds were 3.47, which would require a strike rate of 28.81% in order to break even, two percentage points ahead of what this system achieved last season. Where did the problem lie? The average home odds were 3.04, for draws the average odds were 3.53 and it was 3.59 for away wins. At those average odds the breakeven strike rates are 32.89% for home wins, 28.33% for draws and 27.86% for away wins. Each of the strike rates recorded this season was a couple of points away from those figures so it seems there was disappointment all round. The system as a whole had no edge and nor did any of the individual bet types. This was a losing proposition all round and as such no bank would have been large enough in the long run.
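Those per-type figures can be sanity-checked with a few lines of code. This is just an illustrative sketch using the strike rates and average odds quoted above, not anything from the system itself:

```python
# Edge per bet type: actual strike rate minus the breakeven rate implied by
# the average decimal odds (breakeven SR = 1/odds). Figures as quoted above.
def edge(strike_rate: float, avg_odds: float) -> float:
    return strike_rate - 1.0 / avg_odds

bet_types = {"home": (0.3095, 3.04), "draw": (0.2545, 3.53), "away": (0.2624, 3.59)}
for name, (sr, odds) in bet_types.items():
    print(f"{name}: edge {edge(sr, odds):+.2%}")
# All three come out negative, confirming no bet type held an edge.
```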

Detailed Analysis
Table 2 provides a month-by-month comparison of the performance last season with that from the previous 10 seasons. I want to start with the distribution of bets throughout the year to see how that compares. You can divide the figures on the right-hand side by 10 to get an average against which we can compare the figures on the left. If we do that we see that several months were quieter than one would normally expect, including August, September, October, December and March. January was the busiest month of all but was actually a short month due to my unavailability so I wonder how many bets there would have been had I been posting qualifiers for the full period. If we apply the same ‘divide by 10’ logic to the winners column also we can identify the points in the season where we fell short of the expected number of winners. Oh, it’s pretty much every month. Alright, August, October and May were in the right ball park given the lower than average number of bets but the rest of the season was very disappointing.

Table 2: Monthly Breakdown (Ave Odds)
2010/11 2000/01 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
August 55 22 40.00% 18.20 –   August 780 302 38.72% 187.43 24.03%
September 71 14 19.72% -21.49 -30.26% September 922 319 34.60% 151.48 16.43%
October 17 6 35.29% -0.30 -1.76% October 289 106 36.68% 12.53 4.33%
November 53 12 22.64% -16.11 -30.39% November 584 263 45.03% 165.29 28.30%
December 17 5 29.41% -0.81 -4.79% December 438 173 39.50% 94.32 21.53%
January 105 27 25.71% -13.59 -12.94% January 1028 378 36.77% 250.83 24.40%
February 53 14 26.42% -10.56 -19.92% February 579 215 37.13% 120.59 20.83%
March 33 7 21.21% -12.14 -36.77% March 505 171 33.86% 106.69 21.13%
April 48 13 27.08% -10.30 -21.45% April 544 211 38.79% 98.65 18.13%
May 18 6 33.33% 1.39 7.74% May 218 78 35.78% 40.34 18.51%

August got proceedings off to a solid start with approximately the right number of winners and a profit at a little over the average ROI. There was no standout single bet in those results either (unlike with the Away Win Form Portfolio System backing Wigan’s long-odds win at Tottenham), it was just steady accumulation. A huge reversal of fortune in September though as the SR dropped massively and all the profits were lost, plus a bit more for good measure. A quiet October was nothing to write home about but November was poor with the SR around half of what one would expect, which in turn drove double-figure losses. December was a repeat of October really before we hit a run of four really poor months. Of the 10 months in which we had a bet, only one came anywhere near the desired performance – August. Several months were over 30pts away from the expected profit figures, including September, November and January. A strong start to the season but things soon tailed off and refused to recover.

Things don’t get any better when we look at the performance by division either, as Table 3 shows. Red figures right down the profit and ROI columns in the left-hand side of that table tell the tale of the season. Losses in each of the four divisions and not trivial losses either, with double-figure negative ROIs in all divisions except the Championship, which wasn’t all that far off the mark. Compare those figures to those on the far right of the table: double-digit ROIs, all positive and mostly in excess of 20% too. This season’s performance was a long way off what is expected from this system.

Table 3: Divisional Breakdown (Ave Odds)
2010/11 2000/01 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
Premiership 112 31 27.68% -11.33 -10.11% Premiership 1538 595 38.69% 336.06 21.85%
Championship 128 38 29.69% -11.88 -9.28% Championship 1476 570 38.62% 311.50 21.10%
League One 149 37 24.83% -20.76 -13.94% League One 1706 622 36.46% 384.33 22.53%
League Two 81 20 24.69% -21.72 -26.82% League Two 1167 429 36.76% 196.27 16.82%

Earlier we compared this season’s figures to the 10-year averages for the monthly breakdown, now let’s do the same for the divisional split. Doing so we see all divisions were quieter than normal with fewer bets than one would expect across the board. However short of bets we may have been, the lack of winners was even more apparent. Each of the divisional strike rates was around 10% lower than normal, which is always going to make turning any sort of profit tricky. As we have already seen, the losses came in all divisions and none of them could be said to have done anything other than disappointingly underperform.

This was one of those systems where if something could go wrong, it did. The total figures were very disappointing and yet again we suffered from a severe lack of winners. The strike rate for all bet types was several percentage points down on the breakeven figure. The season showed some promise right at the start but the tables soon turned and the performance headed downhill at a rate of knots with no sign of recovery. Things don’t look any better when the figures are viewed by division either, with all four leagues showing heavy losses. All in all this season was a long way off the expected performance and serious questions must be asked about this system.

Over Form Portfolio System Performance Review

The Over Form Portfolio System uses form-based analysis to select matches that should contain three or more goals, i.e. end over the 2.5 goals line. In an average season a little over 300pts will be staked for a return of just over 50pts at a ROI of approximately 17%.

Performance Summary
A summary of the Over Form Portfolio System’s performance in the 2010/11 season is shown in Table 1 alongside figures obtained from backtesting the system over the previous five seasons. Things were a little busier than average this year as 332 bets were placed compared to an average of 307, although previous seasons have seen between 276 and 367 bets. Allowing for the few bets that would probably have come up during the periods I was unavailable, we’d find ourselves close to the top end of that range but still within previous limits. It is something of a recurring theme across these reviews but once more the number of winners is significantly down on previous years. The strike rate came out at slightly over 50% this season, a long way short of the 60.95% average. Had this season hit that average strike rate we’d have found another 35 winners. The previous lowest SR from backtesting is 57.91%, which still means we fell 25 winners short of that mark.

Table 1: Summary Stats
2010/11 2005/06 – 2009/10
Bets Winners SR   Bets Winners SR  
332 167 50.30%   1534 935 60.95%  
Ave Odds Max Odds Ave Odds Max Odds
Profit ROI Profit ROI Profit ROI Profit ROI
-16.40 -4.94% -3.79 -1.14% 262.00 17.08%

In an average season the coffers would swell by approximately 52pts, although this figure has ranged from 32.70pts to 80.47pts in the previous five seasons. This season, however, saw a loss of 16.40pts to average odds, although when maximum odds are applied that is reduced to a more respectable loss of 3.79pts, improving the overall ROI by 3.80%. Despite that improvement it is still a loss and, bearing in mind the system was developed to profit at a rate of approximately 17% of stakes, a loss of nearly 5% of all stakes is a long way off what was expected from this one.

In previous seasons the system was able to record a run of 13 consecutive winning bets netting 15.36pts of profit while the worst run saw 8pts lost across seven bets. This season we plumbed new depths with a losing run of 10 bets costing us 11pts in lost stakes whereas we could only string together seven winners in a row at most although that run was good for 9.38pts. Shorter winning runs combined with longer losing runs is never a particularly good sign is it?

In the season just finished we staked a total of 332 points on 283 different matches for an average stake of 1.17pts. The maximum stake was 3pts and stakes in excess of a single point were employed in 45 matches, 15.90% of the total. In previous seasons the average stake was 1.18pts, the maximum stake was 4pts and multi-point stakes were called for in 15.46% of cases so it seems the staking for the Over Form Portfolio System was normal for the 2010/11 season.

Using the previous largest drawdown of 8pts (as we saw above) plus factoring in the concurrent nature of the bets meaning that stakes need to be placed on all a day’s bets at once increasing the exposure, I advised a bank of 30pts for this system. Was that figure adequate now we have a full season of real results under our belt? The longest losing run this season cost 11pts which still left a good chunk of the bank intact. At one stage in October the returns reached a low of -17.25pts which means under half the bank was still available but situations like that are why a betting bank well in excess of previous lows is advised. The maximum daily exposure this season was 20pts so one could view the bank as being in danger as soon as losses hit 10pts. This happened for the first time on 2nd October although a month later the bank had recovered somewhat and the 10pt barrier wasn’t breached again until mid April. All in all then it looks like a bank of 30pts was reasonable.

Stating that a bank of 30pts was reasonable is based on a fairly significant assumption, namely that the system has some sort of an edge. If the strike rate and average odds are such that there is no edge then no bank would be big enough in the long run. We’ve already seen that last season the SR was 50.30% and that this was down on previous years but what does that really mean? Table 2 shows the average number of goals per game along with the percentage split of games going under/over 2.5 goals. Notice there is a year-on-year increase in the average number of goals per game and also that there is a shift towards more games going over the 2.5 goals line to the extent that in the 2010/11 season more games went over than under.

Table 2: Under/Over 2.5 Goals Data
  Ave Goals Under 2.5 Over 2.5
05/06 2.48 1116 54.81% 920 45.19%
06/07 2.51 1099 53.98% 937 46.02%
07/08 2.53 1089 53.49% 947 46.51%
08/09 2.55 1069 52.50% 967 47.50%
09/10 2.64 1034 50.79% 1002 49.21%
10/11 2.74 984 48.33% 1052 51.67%

In all seasons other than the most recent, our strike rate of 50.30% would have been better than the one achieved simply by backing overs in each and every match. In other words we would normally have done better than pure chance, but this season that wasn’t the case. We weren’t too far off, admittedly, but nevertheless we would have found more winners had we gone for overs in every game.

Finally, let’s have a look at the strike rate in the light of the average odds. If the average odds on over 2.5 goals were evens or better then our strike rate of just over 50% would generate a long-term profit. It may be a binary market with the only options being under 2.5 and over 2.5, but that certainly doesn’t mean the odds average out at evens. In fact the average odds this season were just 1.91. At those odds we require a strike rate of around 52.4% to break even, a couple of percentage points above what we achieved this season. Even if we settle to maximum odds the average is still short of evens at 1.98 and the breakeven SR of 50.50% is again slightly above what we recorded. The system is close to breakeven at maximum odds but we need to pick up a few more winners to have any sort of edge. On that basis the 30pt bank would not have been large enough to follow this system in the long run as it is a losing proposition.

Detailed Analysis
A side-by-side comparison of this season and the previous five seasons broken down by month is shown in Table 3 below. Starting with the bets column you can see that this season was pretty much in line with the five-year averages. We saw earlier that the overall number of bets was very much in line with expectation and now we can see the monthly splits have gone the same way. November and April were slightly busier than average but it’s nothing too concerning. What is concerning though is the lack of winners at certain points in the season. August got us off to a decent enough start but things dried up in September. October and November were slightly down on the averages but nothing too striking given the fact that the system underperformed as a whole throughout the year. April’s SR is also rather worrying.

Table 3: Monthly Breakdown (Ave Odds)
2010/11 2005/06 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
August 53 32 60.38% 7.21 13.16% August 258 159 61.63% 51.24 19.86%
September 82 36 43.90% -11.73 -14.31% September 375 227 60.53% 68.10 18.16%
October 92 50 54.35% 2.72 2.96% October 494 301 60.93% 80.05 16.20%
November 62 31 50.00% -5.26 -8.49% November 223 134 60.09% 27.99 12.55%
December 6 2 33.33% -2.11 -35.16% December 24 17 70.83% 8.77 36.54%
January 1 1 100.00% 0.67 67.35% January 13 9 69.23% 3.99 30.69%
February 0 0 0.00% 0.00 0.00% February 11 8 72.73% 5.18 47.09%
March 3 2 66.67% 1.12 37.22% March 10 7 70.00% 4.20 42.00%
April 28 12 42.86% -5.88 -21.00% April 93 54 58.06% 10.26 11.03%
May 5 1 20.00% -3.14 -62.84% May 33 19 57.58% 2.22 6.73%

It seems things were almost entirely downhill following August’s promising start. September took away all previous gains and then some, and things never really recovered from that point. The bank went into the red on 21st September and never made it back into the black. Sure, there were a few months where we picked up a small profit but it wasn’t enough to make much of a dent in the losses. The system was developed to generate the bulk of its profits in the first few months of the season but rather than picking up approximately 45pts, as should happen in an average August to November period, the system actually lost 7.06pts. A negative swing of over 50pts in those opening months is always going to be tricky to recover from, and so it proved.

Now let’s think about how the performance varied by division. Table 4 shows a divisional breakdown using the same format as the previous table with this season’s figures on the left and the five-year totals on the right. At first glance it seems like the system did its job exactly as planned in League One while it bombed in all other divisions but appearances can be misleading. Ask yourself why the system should work for League One matches but not the other divisions. All teams are playing the same game to the same rules so there is no rational explanation for it. Yes, the system is the amalgamation of several smaller trend-based systems some of which may favour League One but that still doesn’t make much difference. It would be wrong to just follow the League One bets next season based on a decent year this time round without first looking at the fundamental reasons for success in that division only.

Table 4: Divisional Breakdown (Ave Odds)
2010/11 2005/06 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
Premiership 70 32 45.71% -10.60 -15.14% Premiership 346 210 60.69% 60.88 17.60%
Championship 98 48 48.98% -7.51 -7.66% Championship 345 206 59.71% 49.66 14.39%
League One 110 68 61.82% 19.24 17.49% League One 640 392 61.25% 109.42 17.10%
League Two 54 19 35.19% -17.54 -32.48% League Two 203 127 62.56% 42.04 20.71%

I always look at the average number of bets and winners on these breakdowns and this is no exception. There were more Championship bets than I would have expected based on the five-year average of 69 bets per season. There were around 18 fewer bets on League One games than in an average season but this is more or less countered by the difference in the actual and expected numbers of League Two bets. The Premiership figure was right on the money. League One was the only division whose SR was even close to what was expected with all other divisions, and League Two in particular, falling well short of the anticipated number of winning bets. It’s hardly surprising then that only League One turned a profit. What is slightly remarkable is that the ROI for the League One bets was very close to the five-year average. Throughout these system reviews when there has been a profit it has generally been at a rate much lower than the expected value. League Two’s lack of winners is the driving force behind the large difference between the expected and actual ROIs.

Close but no cigar would be one way of summing this system up. When the performance is broken down there are flashes of the desired performance including August’s profits and the overall performance of bets on League One games but generally speaking we missed the mark. The strike rate was a few percent down on what was required to break even and around 10% lower than the five-year average. That’s a significant difference and will obviously make profiting from binary markets such as the under/overs very difficult. Even shopping around for maximum odds didn’t quite tip the balance although it did greatly increase the ROI so is a strategy that shouldn’t be ignored.

As with the previous review (Under Form Portfolio System) I wasn’t aware of the goal trends highlighted in Table 2 until putting these reviews together. It is certainly data I should have been aware of during the development phase and I am annoyed it passed me by. As with the previous system though, that certainly won’t be the biggest mistake I made during the development process and I must learn the lessons and come back stronger. This system was made to look better than it might have done by that trend for an increasing average number of goals, and in previous seasons the performance may have appeared a lot worse. There is a slight inclination to run this system again unchanged next year to try to ride that trend into profit, but I shan’t do that without first deconstructing the system, examining it for weaknesses that can be addressed, and putting forward a better product.

Under Form Portfolio System Performance Review

The Under Form Portfolio System uses form-based trends to identify matches that offer value in a bet on there being under 2.5 goals by full-time. At least that’s the theory. An average season will see approximately 480 bets placed with a return of around 70pts at an ROI close to 15%.

Performance Summary
Table 1 (below) provides a summary of the Under Form Portfolio System’s performance this season along with figures from backtesting over the previous five seasons. Starting with the number of bets, it seems that 2010/11 was very much in line with previous seasons. There were 462 bets this season compared to an average of 478 in each of the previous five years. Even allowing for the period during which I was unavailable, the number of qualifiers is perfectly normal, with previous seasons seeing between 452 and 529 bets. However, the number of winners recorded this season is way down on previous years, something that has become something of a recurring theme throughout these performance reviews. The strike rate of 43.51% is more than 20 percentage points down on the five-year average. The previous lowest SR was 62.63%, from the 2007/08 season, so this year fell a long way short of even that. To put this year’s SR into context, it is equivalent to 95 fewer winners than in an average season. Worse still, the strike rate is almost five percentage points below the level that would have been achieved by backing every game to end with under 2.5 goals, or to put it another way, the system did far worse than a monkey with a pin would.

Table 1: Summary Stats
                    2010/11     2005/06 – 2009/10
Bets                462         2393
Winners             201         1533
SR                  43.51%      64.06%
Profit (Ave Odds)   -84.12      361.42
ROI (Ave Odds)      -18.21%     15.10%
Profit (Max Odds)   -69.48      n/a
ROI (Max Odds)      -15.03%     n/a
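The ‘95 fewer winners’ figure is easy to reproduce from the raw counts in Table 1: apply the five-year strike rate to this season’s number of bets. A quick sketch:

```python
# How many winners the five-year strike rate would have produced
# from this season's 462 bets (all figures from Table 1).
five_year_sr = 1533 / 2393              # 64.06% over 2005/06-2009/10
expected_winners = 462 * five_year_sr   # ~296 at the historical rate
shortfall = expected_winners - 201      # ~95 winners short

actual_sr = 201 / 462                   # 43.51%
```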

An average season nets around 72pts profit, although this figure has varied from 58.58pts to 86.57pts in recent years. Either way, it is a long way ahead of this season’s actual returns: a loss of 84.12pts at average odds. Settling all bets at the maximum available odds improves the returns by around 15pts and adds 3% to the ROI, but we are still talking about a loss of close to 70pts at a rate of over 15%. The system was designed to return a profit of around 15% of stakes, not a loss of around the same amount. Something has gone badly wrong here!
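For reference, the ROI figures quoted throughout are simply profit divided by the total amount staked. Using this season’s 462pts staked (the figure from the staking discussion later in this review):

```python
# ROI = profit / total stakes, expressed as a percentage.
total_staked = 462.0       # points staked this season
profit_ave_odds = -84.12   # loss when settled at average odds

roi_pct = profit_ave_odds / total_staked * 100   # ~-18.21%
```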

In the past five seasons this system has been able to rack up a string of 20 consecutive winning bets earning 16.42pts profit in the process. The worst run saw a loss of 9pts across seven bets. This season there was a streak of 11 consecutive losers taking 16pts out of the bank while the longest winning run was only six bets long and made only 7.59pts profit. When the longest losing run is more than 50% longer than the previous longest you know you’re in trouble. Combine that with a winning run that couldn’t exceed a third of the length of the previous best and you’re in deep.
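Picking those streaks out of a results sheet by hand is tedious; a small helper along these lines does the job. This is an illustrative sketch, and the example sequence is made up rather than being the system’s actual record:

```python
# Longest winning and losing runs in a sequence of bet results.
def longest_streaks(results):
    """results: iterable of booleans, True for a winning bet.
    Returns (longest winning run, longest losing run)."""
    best = {True: 0, False: 0}
    current_value, run_length = None, 0
    for won in results:
        run_length = run_length + 1 if won == current_value else 1
        current_value = won
        best[won] = max(best[won], run_length)
    return best[True], best[False]

# Hypothetical sequence: W W L L L W
print(longest_streaks([True, True, False, False, False, True]))  # (2, 3)
```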

Where did all those winners go? In an attempt to answer that I have put together the data in Table 2 below. The first thing to notice is the year-on-year increase in the average number of goals per game, a trend that saw the average rise by more than a quarter of a goal over the past six seasons. It may not sound like a great deal but it equates to an increase of 10% over that period. You can see what effect this increase has had on the chances of a match going under/over 2.5 goals in the table below. As well as the average number of goals per game increasing every season, the number of games ending with over 2.5 goals has increased each year, obviously meaning the number of unders has decreased annually. This has not been a slow transition either. Back in 2005/06 nearly 55% of games finished with fewer than 2.5 goals; there was close to parity in 2009/10; and the trend reversed last season, with more games going over the line than under it. That’s a worrying statistic for a system that focuses on under 2.5 goals bets.

Table 2: Under/Over 2.5 Goals Data
Season  Ave Goals  Under 2.5 (games, %)  Over 2.5 (games, %)
05/06 2.48 1116 54.81% 920 45.19%
06/07 2.51 1099 53.98% 937 46.02%
07/08 2.53 1089 53.49% 947 46.51%
08/09 2.55 1069 52.50% 967 47.50%
09/10 2.64 1034 50.79% 1002 49.21%
10/11 2.74 984 48.33% 1052 51.67%
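The percentage columns in Table 2 are just each count divided by that season’s total number of games; taking the 2010/11 row as an example:

```python
# 2010/11 row of Table 2: share of games under/over 2.5 goals.
under, over = 984, 1052
total_games = under + over             # 2036 games across the four divisions

under_pct = under / total_games * 100  # 48.33%
over_pct = over / total_games * 100    # 51.67%
```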

It’s not as though the average odds have really changed in line with the above trends either. Back in 2005/06 the average odds for a bet on under 2.5 goals were 1.78, compared to 1.94 on over 2.5 goals. Last season the averages were 1.86 for the unders and 1.91 on the overs. There have been a few slight variations in the intervening years but nothing to really write home about. So the odds on an unders bet are, on average, shorter than those for an overs bet, despite the fact that the average number of goals is on the rise and, for the first time, a match is more likely to end with over 2.5 goals than go under that mark. Hmmm, something’s not right there.

So the fact that my SR is so low compared to previous seasons is perhaps due in part to fewer than half of all games ending under the goal line. If the average goals per game count is going up then obviously winners on the unders market are going to be harder to come by. That said, I still finished nearly five percentage points below the SR I would have attained by backing unders in all matches, so there are still major flaws in the system. A strike rate as low as the one I recorded this season requires average odds of 2.30 in order to break even, and as we saw earlier the average under 2.5 odds this season were only 1.86. There was no way this system could possibly return a profit on that basis.
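The 2.30 figure follows directly from the strike rate: with level stakes, a strike rate s breaks even when the decimal odds are 1/s.

```python
# With 1pt level stakes at strike rate s, profit per bet is
# s * (odds - 1) - (1 - s), which is zero when odds = 1 / s.
strike_rate = 0.4351       # this season's SR

break_even_odds = 1 / strike_rate
print(round(break_even_odds, 2))  # 2.3
```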

Now for a look at this system’s staking. The previous average stake was 1.17pts per bet with a maximum stake of 3pts, and a stake larger than a single point was employed in 16.14% of matches. This season a total of 462pts was staked on 377 games for an average stake of 1.23pts, while multi-point stakes were required for 20.16% of matches, with one game calling for a stake of 4pts to be put down (the bet lost). A greater average stake, a higher frequency of larger bets and a new record maximum stake: the staking certainly seems out of kilter this year, and combined with the lack of winners we have already observed, that is a surefire recipe for disaster.

Finally it is time to review the advised betting bank figure. Based on the previous largest drawdown of 9pts, a small bank of 30pts was advised, despite the fact that concurrent bets had previously called for an outlay of 33pts at one point. Given the figures we have already seen regarding the strike rate and average odds, it is clear that in the long run no bank would have been big enough, as the system was destined for heavy losses. As it happens, the system’s losses exceeded 20pts for the first time on New Year’s Day and the system never recovered. This season saw a maximum exposure of 34pts due to concurrent bets, so at least something remained in line with previous seasons.
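As an aside, the drawdown figure that bank advice rests on is simple to compute from an ordered sequence of bet returns. A minimal sketch; the example sequence is hypothetical, not this system’s actual record:

```python
# Maximum drawdown: the largest peak-to-trough fall in the cumulative
# profit line, i.e. the worst losing stretch a bank has to absorb.
def max_drawdown(bet_returns):
    """bet_returns: profit/loss of each settled bet in points, in order."""
    peak = balance = worst = 0.0
    for r in bet_returns:
        balance += r
        peak = max(peak, balance)
        worst = max(worst, peak - balance)
    return worst

print(max_drawdown([1, 2, -1, -3, 2, -4]))  # 6.0
```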

It may be pushing it slightly to call that a summary but never mind.

Detailed Analysis
Table 3 below shows a monthly comparison of this season’s performance against the total performance over the previous five seasons. This season’s data on the left-hand side contains a lot of red profit figures, as one might reasonably expect knowing that the season as a whole resulted in heavy losses. Losses built right from the start of the season, although January really takes the biscuit when you consider I was out of action and not posting bets for part of the month.

The number of bets placed each month seems to have followed the pattern of previous seasons, starting with very few bets before building to a crescendo after Christmas. This season August, September and October saw an average number of bets but November and December were quieter than expected: based on previous seasons one would have expected around 10 bets in November and around 37 in December. The rest of the season was very much in line with expected bet numbers though.

However, the number of winners is way down on the anticipated numbers in every month, with the possible exception of February, and even then we are probably a few short of where we should have been. The quiet months at the start of the season can be excused on the basis of small samples that could easily be distorted by single games and even odd goals here and there, but from the new year onwards serious questions must be asked. January’s strike rate was less than half the expected figure and we certainly felt the effects with huge losses. Things improved slightly in February, as I have already touched on, but it was still a poor run-in to the end of the season.

Table 3: Monthly Breakdown (Ave Odds)
2010/11 2005/06 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
August 4 2 50.00% -0.69 -17.15% August 36 24 66.67% 5.52 15.33%
September 9 0 0.00% -9.00 -100.00% September 38 25 65.79% 6.02 15.84%
October 9 4 44.44% -1.79 -19.85% October 41 30 73.17% 13.16 32.10%
November 2 1 50.00% -0.12 -5.86% November 50 31 62.00% 6.09 12.18%
December 19 6 31.58% -7.46 -39.26% December 184 113 61.41% 20.81 11.31%
January 110 34 30.91% -46.52 -42.29% January 529 334 63.14% 82.56 15.61%
February 128 71 55.47% 7.67 5.99% February 647 407 62.91% 82.76 12.79%
March 74 37 50.00% -5.85 -7.91% March 300 205 68.33% 62.21 20.74%
April 82 39 47.56% -8.41 -10.26% April 398 255 64.07% 47.98 12.06%
May 25 7 28.00% -11.96 -47.85% May 170 109 64.12% 34.31 20.18%

It’s not entirely clear why the results fell as they did this season. It’s easy to blame the weather but there is no hard and fast evidence that it was really a factor. December’s big freeze meant some games that were scheduled for then didn’t happen till later in the season but that shouldn’t really have mattered. We have already seen that there is an underlying trend for more goals these days, even compared to just a few years back, and that is likely to impact systems selecting under 2.5 goals bets but again I can’t lay the blame firmly at that door either, especially when you look at how rapidly the wheels fell off in January.

Let’s have a quick look at how the returns each month compared to previous seasons. As I said earlier, it is easy to excuse the first four months of the season because the odd goal here and there can distort such small samples, although September delivering a blank certainly wasn’t part of the plan. December also offers a small sample, but rather than losing close to 7.5pts we should have picked up a few points of profit based on the five-year averages. Then things really explode. Rather than netting around 16pts as one would expect from previous years, January saw a loss of over 46pts. That’s a swing of more than 60pts against us and there is just no recovering from something like that. February showed a profit this year but was still around 9pts shy of the average figure. The season ended with three months each of which delivered returns approximately 18pts below the expected figures. The system underachieved right from the start really, but as soon as the bets increased in number the difference between actual and expected profits widened.
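The January swing is straightforward to reconstruct from Table 3: the right-hand column totals cover five seasons, so dividing by five gives the expected figure for an average year.

```python
# January's expected profit is the five-season total divided by 5.
jan_total_5yr = 82.56
jan_expected = jan_total_5yr / 5     # ~16.5pts in an average January

jan_actual = -46.52
swing = jan_expected - jan_actual    # ~63pts worse than an average year
```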

A breakdown of this season’s performance by division is shown on the left of Table 4, with the right-hand side providing aggregate data from the previous five seasons for comparison. The striking thing about this table is that the system shows heavy losses in all four divisions. There are double-digit losses at all levels and double-digit negative ROIs to go with them. This is in complete contrast to previous years, where each division recorded a positive double-digit ROI. If we look at the strike rates recorded in each division we see they are all under 50% this season, whereas the five-year averages are all in excess of 60%. Obviously that means a serious lack of winners, as we have already seen several times in this review. To put that into context, the season ended with around half as many winners in the Premiership as one would expect on average, the Championship was ‘missing’ over 17 winners, we were owed more than 30 League One winners and in League Two we fell short by nearly 30. In terms of bet numbers though, things weren’t too far off the mark this season. An average season would have seen a few more bets in the Premiership and Championship than we actually saw, but the lower divisions were certainly in the right ballpark.

Table 4: Divisional Breakdown (Ave Odds)
2010/11 2005/06 – 2009/10
  Bets Winners SR Profit ROI   Bets Winners SR Profit ROI
Premiership 50 16 32.00% -19.86 -39.71% Premiership 313 201 64.22% 51.64 16.50%
Championship 85 37 43.53% -17.52 -20.61% Championship 514 329 64.01% 60.21 11.71%
League One 168 75 44.64% -24.67 -14.69% League One 824 531 64.44% 138.23 16.78%
League Two 159 73 45.91% -22.08 -13.89% League Two 742 472 63.61% 111.34 15.01%

I’ve already briefly touched on the double-figure losses and ROIs but I thought it might be useful to make it clear how poorly this system performed compared to the five-year averages. The returns from bets on Premiership matches were around 30pts lower than expected. That’s a 30pt difference from just 50 bets which I find quite staggering. The same difference can be seen between actual and expected returns for the Championship but from more bets while League Two came out 45pts under the expected returns and it was a massive 50pt difference for League One. The top two divisions show a smaller points difference but as there were fewer bets the ROI differences are actually much worse than for League One and League Two.

I don’t think it’s too strong to call this an unmitigated disaster! The average odds were always going to come out a good shade below evens due to the markets we’re betting in which means a strike rate up around 60% is a must. For this system to end with an SR of 43.51% is nothing short of embarrassing really. That kind of deficit is not down to an unlucky season and were I to run the system again next season I don’t see any reason why it wouldn’t also end in disaster. There is something fundamentally wrong here. I have to look back at those five-year averages to see how the data was compiled and how the system was constructed to identify where it went wrong but I suspect it won’t take too much effort to identify flaws with the development process given the disparity in actual and expected performances.

I must admit too that I wasn’t aware of the goal trends I highlighted earlier in this review until it came to compiling the data for Table 2. I certainly didn’t spot the year-on-year increase in the average number of goals per game or the trend towards over 2.5 goals while I was developing the system. That’s probably a mistake, but almost certainly not the biggest mistake I made during that process. It just shows that you need to be fully aware of your data and check it from numerous angles, in case anything was missed during earlier analysis.