Commit Graph

18 Commits

SHA1 Message Date
1f8485f54e No longer use n_ply; it was far too slow.
Added extra logging: now logs the average difference in values between trainings.
Also fixed a bug with the length of quack-norm.
Also added CLI argument use-baseline; if set, the baseline model will be used.
2018-05-06 20:41:07 +02:00
695a3d43db Fixed n_ply and actually added a comma in main.py. *clap Christoffer* 2018-05-01 20:39:29 +02:00
9428a00c11 add "--force-creation" flag to force model creation 2018-04-26 11:43:19 +02:00
33a4b0db3c disallow using model "baseline" 2018-04-24 21:16:54 +02:00
fda2c6e08d parametric board representation in network 2018-03-28 12:00:47 +02:00
4c43bf19a3 Add evaluation variance benchmark
To run a benchmark for `pubeval`, run `python3 main.py --bench-eval-scores --eval-methods pubeval`.
Logs will be placed in the directory `bench`.
Use `plot_bench(data_path)` in `plot.py` for plotting.
2018-03-26 16:45:26 +02:00
98c9af72e7 rework network 2018-03-22 15:30:47 +01:00
b7e6dd10af move evaluation code into network.py 2018-03-20 13:17:38 +01:00
99783ee4f8 clean up and move things to network.py 2018-03-20 13:03:21 +01:00
2fc7a2a09c fixed dumb bugs; still messy 2018-03-14 20:42:09 +01:00
81f8db35f4 clean up 2018-03-12 15:18:44 +01:00
55898d0e66 renaming parameters 2018-03-12 00:11:55 +01:00
b07f075627 training and evaluation stats are now logged by default to model/logs/ 2018-03-10 00:39:55 +01:00
9bc1a8ba9f save and restore number of trained episodes 2018-03-10 00:22:20 +01:00
fc88c64452 train and eval now outputs proper number of training episodes to log 2018-03-09 21:05:38 +01:00
f3f0e40aa4 small fixes 2018-03-08 17:51:32 +01:00
150036a6cb plot-plot 2018-03-08 17:13:25 +01:00
30183448ec woooow 2018-03-08 16:27:16 +01:00