After compiling Syntaxnet (an old version, built with Bazel 0.5.x) with GPU support, you may run into this error message:
E tensorflow/stream_executor/cuda/cuda_driver.cc:965] failed to allocate xxxG (xxxxxxx bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
E tensorflow/stream_executor/cuda/cuda_driver.cc:965] failed to allocate xxxG (xxxxxxx bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
No, no. It is not because your GPU does not have enough memory. By default, TensorFlow tries to grab almost all of your GPU memory up front, so the allocation fails. So, what do you have to do? You just have to modify the main function of this file:
models/research/syntaxnet/syntaxnet/parser_eval.py
gpu_opt = tf.GPUOptions(allow_growth=True)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_opt)) as sess:
  Eval(sess)
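For context, here is a minimal sketch of how the end of main() in parser_eval.py might look after the change. The surrounding lines are an assumption from memory and may differ slightly between Syntaxnet versions; the only real edit is the Session construction:

def main(unused_argv):
  logging.set_verbosity(logging.INFO)
  # Assumed surrounding code; your version of parser_eval.py may differ.
  # allow_growth=True makes TensorFlow allocate GPU memory on demand
  # instead of reserving (almost) all of it at startup.
  gpu_opt = tf.GPUOptions(allow_growth=True)
  with tf.Session(config=tf.ConfigProto(gpu_options=gpu_opt)) as sess:
    Eval(sess)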
Or you can download the patch here.
Now Syntaxnet only uses about 300 MB of your GPU memory 😀
Reference: Github