00974b0f11
Added '--play' flag, so you can now play against the AI.
2018-05-14 13:07:48 +02:00
926a331df0
Some flags from main.py are gone, rolls now allow a face_value of 0 yet
...
again, and it is possible to play against the AI. There is no flag
for this yet, so one has to be added.
2018-05-13 23:54:13 +02:00
ba4ef86bb5
Board rep can now be inferred from file after being given once.
...
We can also evaluate multiple times by using the flag "--repeat-eval".
The flag defaults to 1 if not provided.
2018-05-12 12:14:47 +02:00
504308a9af
Added yet another input argument, "--ply": 0 for no look-ahead, 1 for a
...
single look-ahead.
2018-05-10 23:22:41 +02:00
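A minimal sketch of how the flags described in these commits ("--ply", "--verbose", "--repeat-eval") might be declared with argparse; the actual definitions in main.py, including defaults and help strings, are assumptions here.

```python
import argparse

# Hypothetical reconstruction of the CLI flags mentioned in the log;
# names match the commit messages, everything else is assumed.
parser = argparse.ArgumentParser(description="train/eval backgammon network")
parser.add_argument("--ply", type=int, default=0,
                    help="0 for no look-ahead, 1 for a single look-ahead")
parser.add_argument("--verbose", action="store_true",
                    help="print variables and other debug output")
parser.add_argument("--repeat-eval", type=int, default=1,
                    help="number of times to repeat evaluation")

args = parser.parse_args(["--ply", "1", "--verbose"])
```

Note that argparse maps "--repeat-eval" to the attribute `args.repeat_eval`.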
9cfdd7e2b2
Added a verbosity flag, --verbose, which enables printing of
...
variables and other debug output.
2018-05-10 10:39:22 +02:00
6429e0732c
We should now be able to both train and eval as per usual.
...
I've added a file "global_step", which works as the new global_step
counter, so we can use it for exp_decay.
2018-05-09 23:15:35 +02:00
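The "global_step" file described above could work roughly as follows; this is a hedged sketch, assuming a plain-text counter file and a standard exponential-decay formula, not the project's actual implementation.

```python
import os

# Hypothetical sketch: persist a step counter in a "global_step" file
# and use it to drive exponential learning-rate decay (exp_decay).
STEP_FILE = "global_step"

def read_global_step(path=STEP_FILE):
    """Return the saved step count, or 0 if the file does not exist."""
    if not os.path.isfile(path):
        return 0
    with open(path) as f:
        return int(f.read().strip())

def write_global_step(step, path=STEP_FILE):
    """Persist the step count so it survives across train/eval runs."""
    with open(path, "w") as f:
        f.write(str(step))

def exp_decay(base_rate, step, decay_rate=0.96, decay_steps=10000):
    """Standard exponential decay: rate shrinks as the step count grows."""
    return base_rate * decay_rate ** (step / decay_steps)
```

Persisting the counter in a file (rather than in the TensorFlow graph) keeps the decay schedule consistent across separate training invocations.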
ac6660e05b
Added board-rep as CLI argument, to state which input board rep to use.
...
Also fixed weird nesting of difference_in_values.
2018-05-06 20:52:35 +02:00
1f8485f54e
No longer use n_ply; it's far too slow.
...
Added extra logging: we now log the average difference in values
between trainings.
Also fixed a bug with the length of quack-norm.
Also added CLI argument "use-baseline"; if set, the baseline model
will be used.
2018-05-06 20:41:07 +02:00
695a3d43db
Fixed n_ply and actually added a comma in main.py. *clap Christoffer*
2018-05-01 20:39:29 +02:00
9428a00c11
add "--force-creation" flag to force model creation
2018-04-26 11:43:19 +02:00
33a4b0db3c
disallow using model "baseline"
2018-04-24 21:16:54 +02:00
fda2c6e08d
parametric board representation in network
2018-03-28 12:00:47 +02:00
4c43bf19a3
Add evaluation variance benchmark
...
To run a benchmark for `pubeval`, run `python3 main.py --bench-eval-scores
--eval-methods pubeval`.
Logs will be placed in the `bench` directory.
Use `plot_bench(data_path)` in `plot.py` for plotting.
2018-03-26 16:45:26 +02:00
98c9af72e7
rework network
2018-03-22 15:30:47 +01:00
b7e6dd10af
move evaluation code into network.py
2018-03-20 13:17:38 +01:00
99783ee4f8
clean up and move things to network.py
2018-03-20 13:03:21 +01:00
2fc7a2a09c
fixed dumb bugs; still messy
2018-03-14 20:42:09 +01:00
81f8db35f4
clean up
2018-03-12 15:18:44 +01:00
55898d0e66
renaming parameters
2018-03-12 00:11:55 +01:00
b07f075627
training and evaluation stats are now logged by default to model/logs/
2018-03-10 00:39:55 +01:00
9bc1a8ba9f
save and restore number of trained episodes
2018-03-10 00:22:20 +01:00
fc88c64452
train and eval now output the proper number of training episodes to the log
2018-03-09 21:05:38 +01:00
f3f0e40aa4
small fixes
2018-03-08 17:51:32 +01:00
150036a6cb
plot-plot
2018-03-08 17:13:25 +01:00
30183448ec
woooow
2018-03-08 16:27:16 +01:00