This is a development article aimed at intermediate to advanced Android developers. If you are not an Android developer or a Tensorflow developer, it is probably not going to be very helpful for you.
There are things that people recommend against doing, and then there are things you do anyway because they make the most sense given the current state of the industry. One of those is using Tensorflow Lite with the NDK so that you can avoid rewriting your pre- and post-processing code, which is often bug-prone and a common source of errors in your application.
The first thing you need to do is set up your environment. I would recommend downloading Android Studio and, after it is installed, adding the NDK (Side-by-Side) through the SDK Manager.
The next thing you will need is bazel. Each version of Tensorflow requires a specific version of bazel, so it's important that you have one that works with it. I would recommend v1.1.0, which is what works with master at the time of writing this blog post.
Next, get Tensorflow by cloning the repo:
git clone https://github.com/tensorflow/tensorflow.git
Once you have cloned this, run configure and go through the basic Tensorflow configuration steps. After that, you will need to set up your Android environment by providing the paths to both your Android SDK and NDK. Tensorflow will complain that it does not support NDK v20, but that warning can safely be ignored.
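If you'd rather not type the paths into the interactive prompts every time, configure also picks them up from environment variables. A minimal sketch — the paths and NDK version below are examples, point them at your own install:

```shell
# Example paths only -- adjust to wherever your SDK and NDK actually live.
export ANDROID_HOME="$HOME/Android/Sdk"
export ANDROID_NDK_HOME="$ANDROID_HOME/ndk/20.0.5594570"
# ./configure reads these so you can skip the Android path prompts.
```

Run ./configure from the repo root after exporting these.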
Once configure has finished, you can build for 64-bit Android ARM like this:
bazel build --config android_arm64 tensorflow/lite:libtensorflowlite.so
The current configurations are android_arm64, android_arm, android_x86, and android_x86_64. The last two are needed if you plan on doing demonstrations with the Android emulator, which is useful for code workshops or presentations where you can't use something like Vysor to prove to the audience that you are livecoding and not just trying to scam them.
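If you want the library for every ABI (say, for both devices and the emulator), you can script the four builds. This sketch just prints the command for each config so you can pick or run the ones your app needs:

```shell
# Emit the bazel invocation for each supported Android config.
for config in android_arm64 android_arm android_x86 android_x86_64; do
  echo "bazel build --config $config tensorflow/lite:libtensorflowlite.so"
done
```

Swap the echo for the bare command to actually run the builds back to back.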
Now, if you want to build a GPU delegate, go into the tensorflow/lite/delegates/gpu directory and run this command for the OpenGL delegate:
bazel build -c opt --config android_arm64 --copt -Os --copt -DTFLITE_GPU_BINARY_RELEASE --copt -fvisibility=hidden --linkopts -s --strip always :libtensorflowlite_gpu_gl.so
You can build other delegate code from this directory as well. The binaries will be found in the bazel-out directory in the root of your tensorflow repo, under their respective architecture paths. You can then copy them out into something more digestible, and, if you wish, also copy the headers out of Tensorflow so you don't have to include the whole Tensorflow repository in your build environment. (Trust me, you don't want to.)
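Once the .so files and headers are copied out, wiring them into an NDK project is a few lines of CMake. A sketch, assuming a hypothetical layout of libs/<ABI>/libtensorflowlite.so and the copied headers under include/, and a native target named native-lib — adjust the names to your project:

```cmake
# Hypothetical layout -- libs/${ANDROID_ABI}/libtensorflowlite.so plus the
# TFLite headers copied under include/. Adjust paths to your own tree.
add_library(tensorflowlite SHARED IMPORTED)
set_target_properties(tensorflowlite PROPERTIES
    IMPORTED_LOCATION ${CMAKE_CURRENT_SOURCE_DIR}/libs/${ANDROID_ABI}/libtensorflowlite.so)

# native-lib is a placeholder for your own NDK library target.
target_include_directories(native-lib PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_link_libraries(native-lib tensorflowlite)
```

CMAKE_ANDROID_ABI expands per-ABI during a Gradle externalNativeBuild, so one block covers all four architectures.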
Now, there is probably a way to package all of this into a nice AAR like the newly announced TFLite Qualcomm Hexagon DSP delegate, but AARs are somewhat tricky and there's no good documentation on how to do this. In fact, it's 2020 and I can probably still find professional Android developers who don't know what AARs are, but that is a topic for another post.
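You don't strictly need an AAR just to ship the libraries, though: Gradle will package any .so files it finds in ABI-named jniLibs folders. A config sketch — the directory name below is the conventional default, not something the TFLite build produces for you:

```groovy
// app/build.gradle -- Gradle packages any ABI-named subfolders it finds here:
// src/main/jniLibs/arm64-v8a, armeabi-v7a, x86, x86_64.
android {
    sourceSets {
        main {
            jniLibs.srcDirs += ['src/main/jniLibs']
        }
    }
}
```

Drop each architecture's libtensorflowlite.so into the matching ABI folder and it ends up in the APK.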
Hopefully this helps if you're trying to connect Halide, OpenCV, or another C++ library to TFLite and don't want to bounce across the JNI boundary all day. This is really just a quick post for my own reference, because I often forget the right bazel commands.