It really was a close-run thing.
I have analysed the difference between my predictions and those of the A.I. formula.
Here are the results per division, plus the average difference in finishing position per team across all 5 divisions.
| Division | Human | A.I. |
| --- | --- | --- |
| 1 | 64 | 68 |
| 2 | 112 | 112 |
| 3 | 76 | 78 |
| 4 | 112 | 118 |
| 5 | 110 | 102 |
| Total | 474 | 478 |
| Avg. places per team | 3.95 | 3.98 |
So what does this mean? My predictions were out by an average of 3.95 places per team, the A.I.'s by 3.98. So I did slightly better.
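For anyone curious, here is a quick Python sketch of how I read those numbers. It assumes each division's figure is the total of absolute place differences between predicted and actual finishes, and it infers 24 teams per division from the totals (474 / 3.95 = 120 teams across 5 divisions); the function and names are just illustrative.

```python
# Assumed metric: the per-division error is the sum of absolute
# differences between predicted and actual finishing places.
def total_error(predicted, actual):
    """Sum of absolute place differences for one division."""
    return sum(abs(p - a) for p, a in zip(predicted, actual))

# Per-division totals straight from the table above.
human = {1: 64, 2: 112, 3: 76, 4: 112, 5: 110}
ai = {1: 68, 2: 112, 3: 78, 4: 118, 5: 102}

teams_per_division = 24  # assumed; consistent with the quoted averages

for name, totals in (("Human", human), ("A.I.", ai)):
    total = sum(totals.values())
    average = total / (teams_per_division * len(totals))
    print(f"{name}: total {total}, average {average:.2f} places per team")

# Human: total 474, average 3.95 places per team
# A.I.: total 478, average 3.98 places per team
```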
Divisions 1 and 3 were the most accurately predicted; notably, the A.I. formula was trained using Division 3 data.
So, what to conclude, and what are the next steps?
The A.I. did really well, all things considered. Next season I will:

- Test the A.I. formula on the main categories only (GK, DEF, MID, ATT, Top18 and AVE), alongside my own predictions.
- Tweak the A.I. formula into a separate formula for each division, to see if it would have done any better (a rough sketch of that test follows this list). If it makes no difference, I won't worry about it.
- Look at introducing the depth and balance categories the A.I. suggested, while keeping the points system the same for consistency.
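Here is a hypothetical skeleton for that per-division back-test. It assumes the A.I. formula can be written as a function from a team's category scores (GK, DEF, MID, ATT, Top18, AVE) to a single rating; all the names here are illustrative, not the actual formula.

```python
from typing import Callable, Dict, List

Team = Dict[str, float]  # e.g. {"GK": 7.1, "DEF": 6.8, ...}

def predict_order(teams: List[Team], formula: Callable[[Team], float]) -> List[int]:
    """Predicted finishing order: team indices, highest rating first."""
    return sorted(range(len(teams)), key=lambda i: formula(teams[i]), reverse=True)

def division_error(predicted: List[int], actual: List[int]) -> int:
    """Sum of absolute place differences between two finishing orders."""
    predicted_place = {team: pos for pos, team in enumerate(predicted)}
    return sum(abs(predicted_place[team] - pos) for pos, team in enumerate(actual))

# One tweaked formula per division could then be compared like so
# (divisions, formulas and actual_orders being next season's data):
# for div, teams in divisions.items():
#     err = division_error(predict_order(teams, formulas[div]), actual_orders[div])
```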