Removing Stop Words

I had a task to count words in a set of articles. The task: get a list of IDs from Elasticsearch, fetch the corresponding content from ArangoDB, clean the text, and count the word frequencies.

After implementing it in Scala, Go, and Python, I found that the Python version was very slow. Scala and Go take only around 3-4 seconds to process 12,563 articles, but Python takes around 15-18 seconds. After some profiling, I finally found the bottleneck: removing stopwords from a large number of articles is very slow. I was using a common Python idiom to remove the stopwords.

from collections import Counter

stopwords = open('stopwords.txt').read().splitlines()
word_count = Counter([i for i in all_words.split() if len(i) > 4 and i not in stopwords])

It takes around 16 seconds to remove the stopwords and count the words with Counter.

$ time python counting.py --start 2018-09-01 --end 2018-09-30
'get_es_data'  802.65 ms
'get_arango_data'  449.94 ms
'clean_texts'  1286.84 ms
'word_count'  13980.26 ms
Total articles: 12563
Top words: ['jokowi', 'presiden', 'indonesia', 'ketua', 'partai', 'prabowo', 'widodo', 'jakarta', 'negara', 'games']
Total time execution (in seconds): 16.5261652469635

real    0m16.647s
user    0m15.668s
sys     0m0.288s
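Per-step timings like the ones above can be produced with a small decorator. This is a sketch of my own (the original script's instrumentation is not shown), assuming the measured steps are ordinary Python functions:

```python
import time
from functools import wraps

def timed(func):
    # Print the wall-clock duration of each call, in milliseconds,
    # in the same "'name'  12.34 ms" style as the output above.
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print("%r  %.2f ms" % (func.__name__, (time.time() - start) * 1000.0))
        return result
    return wrapper

@timed
def clean_texts(texts):
    # Stand-in for the real cleaning step.
    return [t.lower().strip() for t in texts]

cleaned = clean_texts(['  Jokowi ', ' Presiden '])
```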

Then, after reading about how other people do text processing, I found a good article. It explains that Python dictionaries use hash tables, which means a lookup operation (e.g., if x in y) is O(1). A lookup in a list, on the other hand, iterates over the entire list, which is O(n) for a list of length n.
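A quick sketch (with made-up data) shows the gap between the two lookup costs; the worst case for the list is a word that is not present, because the whole list has to be scanned:

```python
import timeit

# Build the same 10,000 words as a list and as a dict with dummy values.
words = ['word%d' % i for i in range(10000)]
as_list = words
as_dict = dict.fromkeys(words, True)

# Membership test for an absent word, repeated 1,000 times.
list_time = timeit.timeit(lambda: 'missing' in as_list, number=1000)
dict_time = timeit.timeit(lambda: 'missing' in as_dict, number=1000)
print('list: %.4fs, dict: %.4fs' % (list_time, dict_time))
```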

So I tried converting the stopwords list to a dictionary.

from collections import Counter

stopwords = open('stopwords.txt').read().splitlines()
stop_dicts = dict.fromkeys(stopwords, True)
word_count = Counter([i for i in all_words.split() if len(i) > 4 and i not in stop_dicts])

This gives a much faster result than the previous version with the list.

$ time python counting.py --start 2018-09-01 --end 2018-09-30
'get_es_data'  787.50 ms
'get_arango_data'  461.97 ms
'clean_texts'  1311.56 ms
'word_count'  785.08 ms
Total articles: 12563
Top words: ['jokowi', 'presiden', 'indonesia', 'ketua', 'partai', 'prabowo', 'widodo', 'jakarta', 'negara', 'games']
Total time execution (in seconds): 3.3524017333984375

real    0m3.503s
user    0m2.560s
sys     0m0.220s

Now the whole process takes only about 3 seconds. Very fast, isn't it? 😀


Tensorflow GPU Memory Usage (Using Keras)

I am using Keras with the TensorFlow backend for my project. When I first used the TensorFlow backend, I was a bit surprised by the memory usage: although my model is not more than 10 MB, it was still using all of my GPU memory.

After reading the TensorFlow documentation, I found out that, by default, TensorFlow maps nearly all of the GPU memory in order to reduce memory fragmentation. The documentation also describes two different approaches that can be used to handle this situation.

The first method is limiting the memory usage to a fraction of the total. For example, you can limit the application to only 20% of your GPU memory; with 8 GB of GPU memory, the application will use 1.6 GB. (I am using Keras, so the examples will be done the Keras way.)

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.2
set_session(tf.Session(config=config))

After setting the configuration, we can see the difference.

It only uses 1.6 GB of memory. But this configuration only caps the usage by a fraction, and the application still fills 100% of that cap, so we cannot tell how much memory the application actually needs. So we will try the second method.

The second method is using allow_growth. With this option, the application allocates GPU memory based on runtime needs: it starts small and grows the allocation as required.

import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)

And this is the result.

So, after trying two of these methods, which method will you use for your application? 😀


Syntaxnet with GPU Support

After compiling Syntaxnet (an old version, built with Bazel 0.5.x) with GPU support, you may see this error message.

E tensorflow/stream_executor/cuda/cuda_driver.cc:965] failed to allocate xxxG (xxxxxxx bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
E tensorflow/stream_executor/cuda/cuda_driver.cc:965] failed to allocate xxxG (xxxxxxx bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

No, no. It is not because your GPU memory is insufficient. It is because TensorFlow, by default, grabs all of your GPU memory. So, what do you have to do? You just have to modify the main function of this file.

models/research/syntaxnet/syntaxnet/parser_eval.py

gpu_opt = tf.GPUOptions(allow_growth=True)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_opt)) as sess:
    Eval(sess)

Or you can download the patch here.

Now Syntaxnet only uses about 300 MB of your GPU memory 😀

Reference: Github


Toml Config Value by Name (Go)

I use https://github.com/BurntSushi/toml to parse the configuration file. Because the library decodes the configuration into a struct, we normally have to access struct fields with a dot to get a value.

config.Database.Username

But I want to do it dynamically: a function that receives a string key and returns the configuration value. So I did it this way.

package main

import (
	"fmt"
	"os"
	"reflect"
	"strings"
	"github.com/BurntSushi/toml"
)

type tomlConfig struct {
	Database databaseInfo
	Title    string
}

type databaseInfo struct {
	Username string
	Password string
}

// GetField returns the value of a (possibly nested) struct field,
// looked up by its dot-separated name.
func GetField(t *tomlConfig, field string) string {
	r := reflect.ValueOf(t)
	f := reflect.Indirect(r)

	splitField := strings.Split(field, ".")
	for _, s := range splitField {
		f = f.FieldByName(strings.Title(s))
	}

	return f.String()
}

// GetConfig decodes the TOML configuration and returns the value for the given key.
func GetConfig(key string) string {
	var config tomlConfig
	if _, err := toml.Decode(
		`
		title = "Text title"

		[database]
		username = "root"
		password = "password"
		`, &config); err != nil {
		fmt.Println("Please check your configuration file")
		os.Exit(1)
	}

	configValue := GetField(&config, key)

	return configValue
}

func main() {
	fmt.Println(GetConfig("testKey"))
	fmt.Println(GetConfig("database.testKey"))
	fmt.Println(GetConfig("title"))
	fmt.Println(GetConfig("database.username"))
	fmt.Println(GetConfig("database.password"))
}

Result:

$ go run main.go 
<invalid Value>
<invalid Value>
Text title
root
password

Note that a key which does not map to a real struct field, like testKey above, yields <invalid Value> instead of an error, so callers should validate their keys.