Commit Graph

13 Commits

SHA1 Message Date
4c43bf19a3
Add evaluation variance benchmark
To benchmark `pubeval`, run `python3 main.py --bench-eval-scores --eval-methods pubeval`.

Logs will be placed in the `bench` directory.

Use `plot_bench(data_path)` in `plot.py` to plot the results; a usage sketch follows this entry.
2018-03-26 16:45:26 +02:00
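
The commit above outlines a benchmark-then-plot workflow. Below is a minimal sketch of that workflow, assuming only what the commit message states: the `--bench-eval-scores` and `--eval-methods` flags on `main.py`, logs landing in `bench`, and a `plot_bench(data_path)` function in `plot.py`. The exact log filename under `bench/` is a hypothetical placeholder.

```python
# Sketch of the benchmark workflow from commit 4c43bf19a3.
# Assumption: the log filename "bench/pubeval.log" is hypothetical;
# substitute whatever file the benchmark run actually writes.
import subprocess

from plot import plot_bench

# Run the evaluation variance benchmark for pubeval
# (command and flags taken from the commit message).
subprocess.run(
    ["python3", "main.py", "--bench-eval-scores", "--eval-methods", "pubeval"],
    check=True,
)

# Plot the resulting log with plot_bench from plot.py.
data_path = "bench/pubeval.log"  # hypothetical file inside the bench/ directory
plot_bench(data_path)
```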
98c9af72e7 rework network 2018-03-22 15:30:47 +01:00
b7e6dd10af move evaluation code into network.py 2018-03-20 13:17:38 +01:00
99783ee4f8 clean up and move things to network.py 2018-03-20 13:03:21 +01:00
2fc7a2a09c fixed dumb bugs; still messy 2018-03-14 20:42:09 +01:00
81f8db35f4 clean up 2018-03-12 15:18:44 +01:00
55898d0e66 renaming parameters 2018-03-12 00:11:55 +01:00
b07f075627 training and evaluation stats are now logged by default to model/logs/ 2018-03-10 00:39:55 +01:00
9bc1a8ba9f save and restore number of trained episodes 2018-03-10 00:22:20 +01:00
fc88c64452 train and eval now output the proper number of training episodes to the log 2018-03-09 21:05:38 +01:00
f3f0e40aa4 small fixes 2018-03-08 17:51:32 +01:00
150036a6cb plot-plot 2018-03-08 17:13:25 +01:00
30183448ec woooow 2018-03-08 16:27:16 +01:00