Saving and loading TensorFlow neural networks Part 2: ONNX to the rescue

The TensorFlow logo

Welcome back to my attempts to save a trained TensorFlow model in Python and load it in C/C++. Part 1 documented how I kept running into the word "deprecated" in the TensorFlow library. The conclusion was that the SavedModel format was going to remain in TensorFlow 2.0, but all the functions in TensorFlow 1.x for …
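For reference, this is roughly what the SavedModel side looks like in Python with the TensorFlow 2.x API; the toy model below is a placeholder of my own, not the network from the post.

```python
# Minimal sketch: exporting a trained model in the SavedModel format.
# The tiny model here is a placeholder; substitute your own trained network.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# SavedModel is a directory on disk, not a single file.
tf.saved_model.save(model, "exported_model")

# Load it back in Python as a quick sanity check.
loaded = tf.saved_model.load("exported_model")
print(list(loaded.signatures.keys()))
```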

AMD EPYC Rome CPUs – 64 cores, wow

The AMD EPYC CPU without heat sink, showing the individual chiplets.

Just a few years ago, no CPU had 64 cores. Having 6 cores and 12 threads was amazing. There was the Xeon Phi, though. Way back in 2006, Intel started its Larrabee project, eventually releasing the Intel Many Integrated Core prototype in 2010. According to Wikipedia, this processor had 32 cores and supported 4 threads …

Artificial General Intelligence – in my lifetime or not?

When do you think we will have fully functional robotic cats with the same brain power as a real feline?

Recently, several neuromorphic chips and other AI chips have been released. So, how long before the number of neurons on a single chip equals the number in the human brain? Yes, I realize that our current artificial neurons are not as advanced as our biological ones, but I'll use this as an approximation …
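As a rough back-of-the-envelope framing of that question (the neuron counts and doubling period below are my own order-of-magnitude assumptions, purely for illustration):

```python
# Back-of-the-envelope estimate; all numbers are assumptions for illustration.
import math

brain_neurons = 1e11    # rough order of magnitude for a human brain
chip_neurons = 1e6      # rough order of magnitude for a current neuromorphic chip
doubling_years = 2      # assume a Moore's-law-style doubling period

doublings = math.log2(brain_neurons / chip_neurons)
print(f"~{doublings:.1f} doublings, i.e. roughly {doublings * doubling_years:.0f} years")
```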

Saving and loading TensorFlow neural networks Part 1: Everything’s deprecated, what now!

The TensorFlow logo

If, like me, you're looking for a C/C++ method and think that TF Serving is overkill: I couldn't find an absolutely guaranteed route to success. However, the best option seems to be to convert the model to the ONNX format and use an ONNX runtime to run it for inference. Part 2 of this series of posts will cover my attempts to create a tutorial on how to do this.
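To make that route concrete, here's a minimal Python sketch under my own assumptions (the file names and input shape are placeholders); the C/C++ side would instead use the ONNX Runtime C/C++ API, which is what Part 2 digs into.

```python
# Conversion (run once from a shell), assuming tf2onnx is installed and the
# SavedModel lives in ./exported_model:
#   python -m tf2onnx.convert --saved-model exported_model --output model.onnx
#
# Inference with ONNX Runtime in Python; the input name is queried from the
# graph, and the input shape below is a placeholder.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 4).astype(np.float32)  # placeholder shape
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0])
```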

Portable note-taking tools for ideation and pretty much everything else – suggestions welcome

A problem I keep running into is which app to use for taking notes. Many years ago, I used Evernote. What turned me away was a bug on Android that made it impossible to load long notes on my tablet. Since then, I've tried Boostnote and am now giving Dropbox Paper a try. I'm open to …

Autotuning OpenCL kernels – CLTune on Windows 7

The OpenCL logo.

CLTune is a C++ library for automatically tuning OpenCL kernels to extract the maximum speed from your device. I'm going to try building and using it on Windows 7 with MinGW-w64 (GCC 4.9.1) to see what I can achieve with it. While properly written OpenCL code should work on any conformant device and platform, there's …

What’s faster in Numba @jit functions, NumPy or the math package?

Update 2016-01-16: Numba 0.23 released and tested; results added at the end of this post. A while back I was using Numba to accelerate some image processing and noticed that there was a difference in speed depending on whether I used functions from NumPy or their equivalents from the standard Python math package …
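The kind of comparison involved looks something like this sketch: the same scalar loop compiled with @jit, once calling math.sqrt and once calling np.sqrt (the toy workload is mine, not the image-processing code from the post).

```python
# Compare math.sqrt vs np.sqrt inside Numba-compiled loops.
import math
import timeit
import numpy as np
from numba import jit

@jit(nopython=True)
def sum_sqrt_math(arr):
    total = 0.0
    for x in arr:
        total += math.sqrt(x)
    return total

@jit(nopython=True)
def sum_sqrt_numpy(arr):
    total = 0.0
    for x in arr:
        total += np.sqrt(x)
    return total

data = np.random.rand(1_000_000)
sum_sqrt_math(data)    # warm-up calls trigger compilation
sum_sqrt_numpy(data)
print("math.sqrt :", timeit.timeit(lambda: sum_sqrt_math(data), number=10))
print("np.sqrt   :", timeit.timeit(lambda: sum_sqrt_numpy(data), number=10))
```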

Analysing data from Stats SA

Statistics South Africa (Stats SA) is the government-run national statistics agency of South Africa. They publish a lot of stats about SA; you can find them at http://www.statssa.gov.za/. I've decided to start doing some analyses of the data they make available for the public to download. My first step is writing code to load the data they …
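As a first sketch, loading one of their downloads with pandas might look like this; the file name and structure are hypothetical, since the actual downloads come in several formats.

```python
# Minimal sketch: load a downloaded Stats SA file with pandas and peek at it.
# "statssa_cpi.csv" is a hypothetical local file name, not an actual download.
import pandas as pd

df = pd.read_csv("statssa_cpi.csv")
print(df.head())
print(df.describe())
```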