By this point, we should have a selection system.
As was stated earlier, even a bad selection system is better than no selection system at all.
The reason for this is that the results of even a bad system can be analysed, and valid conclusions drawn, whereas it is pointless analysing the results of a totally random system since the results of the analysis will also be totally random.
As a result, any conclusions drawn from the analysis will be irrelevant.
So, how do we go about analysing the past results of a system?
Firstly, we need to record some basic facts about each of the horses identified by the selection system.
I would suggest that at least the following facts are recorded:
• Name of horse
• Date of race
• Name of track
• Going
• Jockey
• Trainer
• Draw
• Weight
• Type of race
• Number of horses in race
• Starting price
• Position in betting at start of race
• Stake
• Odds
• Result of race
• Finishing position of selection
• Name of winner
• Winning jockey
• Winning trainer
• Weight of winner
• Starting price of winner
Additional information may also be recorded since the above is only the bare minimum.
Some examples of additional information that you may wish to record are:
• Whether or not the selection was wearing headgear, e.g. blinkers, cheek-pieces etc.
• Whether the selection drifted or shortened in the pre-race betting market
I’m sure that you can think of additional data which you feel may be worthy of recording.
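As a rough sketch only, the data items listed above could be stored as structured records. The field names and example values below are assumptions chosen to mirror the list, not part of any particular system:

```python
from dataclasses import dataclass

# Illustrative record of one system selection; the field names are
# assumptions, chosen to match the data items listed above.
@dataclass
class Selection:
    horse: str
    race_date: str          # e.g. "2008-05-14"
    track: str
    going: str
    jockey: str
    trainer: str
    draw: int
    weight: str             # e.g. "9-02" (stones-pounds)
    race_type: str
    runners: int
    starting_price: float   # decimal odds
    stake: float
    won: bool               # result of the race for the selection
    finishing_position: int

# One example row, as it might be entered after a race (invented data)
row = Selection(
    horse="Example Lad", race_date="2008-05-14", track="Wolverhampton",
    going="Standard", jockey="J. Smith", trainer="A. Jones", draw=4,
    weight="9-02", race_type="Handicap", runners=11,
    starting_price=5.5, stake=10.0, won=False, finishing_position=6,
)
```

A spreadsheet or database table with one column per field achieves exactly the same thing; the point is simply that every bet becomes one uniformly structured row.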
Having decided which data items will be recorded, a method will then be needed to store the information in such a way that it can be analysed as quickly, efficiently and as accurately as possible.
Although this can be achieved using pencil and paper, it is probably not the most efficient method available to us since it is slow, time-consuming and prone to error.
The best method, in my opinion, is to use a PC and a commercially-available spreadsheet application such as Microsoft Excel or, preferably, a commercially-available database such as Microsoft Access.
If you are unfamiliar with the use of PCs, spreadsheets and databases, it is strongly recommended that you become familiar with them since doing so will pay huge dividends.
Once the data has been entered into a PC, the analysis of the past results can begin.
This will allow us to determine which bets are profitable and which are not.
A simple but effective way to achieve this is to sort the whole of the data by each variable (jockey, trainer, weight, etc.) in turn. This will allow any trends that exist to be identified.
For example, we may notice that the selections associated with a particular trainer usually win or that those selections ridden by a particular jockey usually lose.
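The sort-and-inspect approach described above can be sketched in code by grouping the recorded bets by one variable at a time. The trainers, stakes and prices below are invented purely for illustration:

```python
from collections import defaultdict

# Each row is (trainer, won, stake, decimal starting price); invented data.
results = [
    ("A. Jones", True, 10.0, 4.0),
    ("A. Jones", True, 10.0, 3.5),
    ("A. Jones", False, 10.0, 6.0),
    ("B. Brown", False, 10.0, 5.0),
    ("B. Brown", False, 10.0, 8.0),
]

def trends_by(variable_index, rows):
    """Group rows by one variable and report strike rate and profit."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[variable_index]].append(row)
    summary = {}
    for key, bets in groups.items():
        wins = sum(1 for b in bets if b[1])
        # Back-bet profit: a winner returns stake * (price - 1), a loser
        # costs the stake.
        profit = sum(b[2] * (b[3] - 1) if b[1] else -b[2] for b in bets)
        summary[key] = {"bets": len(bets),
                        "strike_rate": wins / len(bets),
                        "profit": round(profit, 2)}
    return summary

print(trends_by(0, results))
```

Running the same grouping with a different `variable_index` (jockey, track, going and so on) reproduces the "sort by each variable in turn" exercise automatically.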
The identification of such trends will help us to create ‘Filters’ for our system. Filters are more fully described in chapter 5 of this book.
Essentially, a filter is a rule that is applied to a selection system which attempts to eliminate potential future losing bets and thus improve the strike rate and profitability of a system.
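A minimal sketch of such a filter, assuming each selection is stored as a simple record; the excluded-track rule and the data structure are purely illustrative:

```python
# A filter is just a predicate: it rejects selections expected to lose.
# The set of excluded tracks below is an invented example.
UNPROFITABLE_TRACKS = {"Folkestone", "Wolverhampton", "Leicester"}

def passes_filters(selection):
    """Return True if the selection survives every filter rule."""
    if selection["track"] in UNPROFITABLE_TRACKS:
        return False
    # Further rules (jockey, going, field size, ...) would be added here.
    return True

candidates = [
    {"horse": "Runner One", "track": "Lingfield"},
    {"horse": "Runner Two", "track": "Leicester"},
]
bets = [c for c in candidates if passes_filters(c)]
```

Because each rule is a simple yes/no test, filters can be switched on and off individually while re-running the analysis, which makes it easy to see what each one contributes.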
By way of an example, when I analysed the past results for one of my laying systems, I found that the profitability was abysmal at three tracks in particular: Folkestone, Wolverhampton and Leicester.
The profitability at these tracks was not only negative; the losses far exceeded those of any other track. I therefore stopped laying horses running in races at these tracks and, as a result, the profitability of the system improved significantly.
I have yet to determine why the system was ineffective at these tracks but, because I was able to analyse the past results, I was able to identify one source of my losses and was thus able to take effective action.
Even though I ceased laying horses running in races at these three tracks, I continued to monitor the bets that I would have placed had I not ceased.
Over time, the profitability at Wolverhampton began to improve significantly. I therefore began, once again, laying horses running in races there. As a result, I am now enjoying profitable laying at Wolverhampton.
Sadly, the profitability at Folkestone and Leicester remains gloomy and I still don’t lay horses running in races there.
I do, however, continue to monitor the situation since I will then be in a position to take advantage of any positive changes which take place.
The analysis of my past results also revealed that the system was extremely effective at three tracks in particular: Lingfield, Southwell and Exeter.
Although I have yet to determine the reasons why, I take full advantage of this fact and slightly increase my stakes when betting on races held at these three tracks.
Again, as a result, I have been able to significantly improve the profitability of my system.
The above illustrates four additional points:
Firstly, it is worth collecting and analysing data relating to bets that would otherwise have been placed since it allows the initial conclusions reached to be re-assessed.
This allows us to determine whether a trend is a short-lived or a permanent feature.
In the above examples, Wolverhampton appears to have been a short-lived phenomenon whereas Leicester and Folkestone appear to be permanent.
Secondly, one can never have too much data. It is always worth collecting and analysing more.
The more data is collected and analysed, the more accurate the results of the analysis will be.
My advice is to collect as much data and to analyse it as frequently and in as many ways as possible. You will never know what you might discover or how useful the discoveries will be until the data is analysed.
Thirdly, be aware that analysing small amounts of data can give rise to invalid conclusions and that the smaller the amount of data, the more likely it is that the conclusions will be invalid.
This point is more fully covered in a future article.
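One quick way to see why small samples mislead is the standard error of an observed strike rate. This is the standard binomial approximation, not something specific to any system, and the figures below are invented for illustration:

```python
import math

# The standard error of an observed strike rate shrinks with sample
# size, so conclusions drawn from a handful of bets are fragile.
def strike_rate_standard_error(wins, bets):
    p = wins / bets
    return math.sqrt(p * (1 - p) / bets)

# The same 40% strike rate carries very different reliability:
small = strike_rate_standard_error(4, 10)      # ~0.155, i.e. +/- 15 points
large = strike_rate_standard_error(400, 1000)  # ~0.015, i.e. +/- 1.5 points
```

In other words, a 4-from-10 record is consistent with a true strike rate anywhere from well under 30% to well over 50%, whereas 400-from-1000 pins it down quite tightly.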
Let us take a case in point, i.e. that of Wolverhampton, Leicester and Folkestone.
The initial analysis revealed that laying horses running in races at these tracks was unprofitable.
The analysis of additional data revealed that the initial conclusions reached for Leicester and Folkestone were correct but that the conclusion reached for Wolverhampton was incorrect.
Note that had I not analysed my past betting activities, I would have continued to lose money at both Leicester and Folkestone since I would have been blissfully unaware of the losses.
Sadly, my bank manager would eventually have noticed!
Fourthly, there is a saying: ‘Paralysis by Analysis’. In other words, one can spend so much time collecting and analysing data that there is too little time left to do anything with the results.
Under normal circumstances, I would agree with the sentiment of this saying. However, in this case, we saw in the previous articles on bookmakers and betting exchanges that the odds are heavily stacked against us punters and we need whatever help that we can find to overcome the odds.
The collection and analysis of data is one of the few ways that we punters have of overcoming the odds against us.
We should, therefore, take every opportunity to collect and analyse our data.
One point worthy of note, before we bring this article to an end, is that just because an event has frequently occurred in the past, there is no guarantee that it will continue to occur, with the same frequency, in the future.
The conclusions drawn from the analysis of past data should therefore be treated as a ‘guide’ only, rather than as a set of rules to be slavishly obeyed.
So, we now know why it is important that the results of our past betting activities should be stored and analysed. It will help us to improve the strike rate and hence the profitability of our system.