A Julia wrapper for TensorFlow



A wrapper around TensorFlow, a popular open source machine learning framework from Google.


Documentation available here.

Why use TensorFlow.jl?

See a list of advantages over the Python API.

What’s changed recently?


Basic usage

```julia
using TensorFlow
using Test

sess = TensorFlow.Session()

x = TensorFlow.constant(Float64[1,2])
y = TensorFlow.Variable(Float64[3,4])
z = TensorFlow.placeholder(Float64)

w = exp(x + z + -y)

run(sess, TensorFlow.global_variables_initializer())
res = run(sess, w, Dict(z=>Float64[1,2]))
@test res[1] ≈ exp(-1)
```
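As a sanity check, the same computation can be reproduced in plain Julia with no TensorFlow involved; with `x = [1,2]`, the variable `y` at its initial value `[3,4]`, and `[1,2]` fed for the placeholder `z`, the first element of the result is `exp(1 + 1 - 3) = exp(-1)`:

```julia
# Plain-Julia version of the graph above, for intuition only:
# w = exp(x + z + -y), evaluated elementwise.
x = Float64[1, 2]
y = Float64[3, 4]   # the Variable's initial value
z = Float64[1, 2]   # the value fed for the placeholder

w = exp.(x .+ z .- y)
# First element is exp(1 + 1 - 3) = exp(-1)
```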


Install via

```julia
Pkg.add("TensorFlow")
```

To enable support for GPU usage (Linux only), set the environment variable TF_USE_GPU to "1" and then rebuild the package, e.g.

```julia
ENV["TF_USE_GPU"] = "1"
Pkg.build("TensorFlow")
```

CUDA 8.0 and cuDNN are required for GPU usage. If you need to use a different version of CUDA, you can compile libtensorflow from source.

Installation via Docker

Simply run `docker run -it malmaud/julia:tf` to open a Julia REPL that already has TensorFlow installed:

```julia
julia> using TensorFlow
```

For a version of TensorFlow.jl that utilizes GPUs, use `nvidia-docker run -it malmaud/julia:tf_gpu`. Download nvidia-docker if you don't already have it.

Logistic regression example

A realistic demonstration of using variable scopes and advanced optimizers:

```julia
using TensorFlow
using Distributions
using Printf

# Generate some synthetic data
x = randn(100, 50)
w = randn(50, 10)
y_prob = exp.(x*w)
y_prob ./= sum(y_prob, dims=2)

function draw(probs)
    y = zeros(size(probs))
    for i in 1:size(probs, 1)
        idx = rand(Categorical(probs[i, :]))
        y[i, idx] = 1
    end
    return y
end

y = draw(y_prob)

# Build the model
sess = Session(Graph())
X = placeholder(Float64)
Y_obs = placeholder(Float64)

variable_scope("logistic_model", initializer=Normal(0, .001)) do
    global W = get_variable("weights", [50, 10], Float64)
    global B = get_variable("bias", [10], Float64)
end

Y = nn.softmax(X*W + B)
Loss = -reduce_sum(log(Y).*Y_obs)
optimizer = train.AdamOptimizer()
minimize_op = train.minimize(optimizer, Loss)
saver = train.Saver()

# Run training
run(sess, global_variables_initializer())
checkpoint_path = mktempdir()
@info("Checkpoint files saved in $checkpoint_path")
for epoch in 1:100
    cur_loss, _ = run(sess, vcat(Loss, minimize_op), Dict(X=>x, Y_obs=>y))
    println(@sprintf("Current loss is %.2f.", cur_loss))
    train.save(saver, sess, joinpath(checkpoint_path, "logistic"), global_step=epoch)
end
```
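The synthetic-data step above draws one-hot labels by sampling from each row's softmax distribution via Distributions.jl's Categorical. For intuition, here is a dependency-free sketch of the same idea in base Julia, using inverse-CDF sampling; `draw_onehot` is a hypothetical stand-in for `rand(Categorical(p))`, not part of any package:

```julia
# Base-Julia sketch of the data-generation step (no Distributions.jl).
# Each row of y_prob is a probability distribution over 10 classes;
# we sample a class index per row and one-hot encode it.
function draw_onehot(probs)
    y = zeros(size(probs))
    for i in 1:size(probs, 1)
        r = rand()
        c = cumsum(probs[i, :])
        # Inverse-CDF sampling: first index whose cumulative mass reaches r.
        idx = something(findfirst(>=(r), c), length(c))
        y[i, idx] = 1
    end
    return y
end

x = randn(100, 50)
w = randn(50, 10)
y_prob = exp.(x*w)
y_prob ./= sum(y_prob, dims=2)   # row-wise softmax: each row sums to 1
y = draw_onehot(y_prob)
```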


If you see issues from `ccall` or the Python interop, try updating TensorFlow both in Julia and in the global Python install:

```julia
julia> Pkg.build("TensorFlow")
```

```shell
$ pip install --upgrade tensorflow
```

Optional: Building the TensorFlow library

If you want to build your own version of the TensorFlow binary library instead of relying on the one that is installed with Pkg.build("TensorFlow"), follow the instructions from https://www.tensorflow.org/install/install_sources, except:

  • In the section "Build the pip package", instead run `bazel build --config=opt //tensorflow:libtensorflow.so`.
  • Then copy the file `bazel-bin/tensorflow/libtensorflow.so` to the `deps/usr/bin` directory in the TensorFlow.jl package.
  • On OS X, rename the file to `libtensorflow.dylib`.

A convenience script is included to use Docker to easily build the library. Just install Docker and run `julia build_libtensorflow.so` from the `deps` directory of the TensorFlow.jl package.