If, like me, you're looking for a C/C++ method and think that TF Serving is overkill, I couldn't find an absolutely guaranteed route to success. However, the best option seems to be converting the model to ONNX format and using an ONNX runtime for inference. Part 2 of this series of posts will cover my attempt to create a tutorial on how to do this.
Update 2016-01-16: Numba 0.23 released and tested - results added at the end of this post. A while back I was using Numba to accelerate some image processing I was doing and noticed that there was a difference in speed depending on whether I used functions from NumPy or their equivalents from the standard Python math package … Continue reading What’s faster in Numba @jit functions, NumPy or the math package?
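The kind of comparison that post runs can be sketched as two otherwise identical jitted functions, one calling math.sqrt and one calling np.sqrt. This is an illustrative sketch, not the post's actual benchmark; the fallback decorator is just so the code still runs where Numba isn't installed:

```python
import math
import numpy as np

try:
    from numba import njit
except ImportError:  # Numba optional for this sketch: fall back to plain Python
    def njit(f):
        return f

@njit
def sum_sqrt_math(xs):
    # math.sqrt applied to each scalar element
    total = 0.0
    for i in range(xs.shape[0]):
        total += math.sqrt(xs[i])
    return total

@njit
def sum_sqrt_np(xs):
    # np.sqrt applied to each scalar element
    total = 0.0
    for i in range(xs.shape[0]):
        total += np.sqrt(xs[i])
    return total

xs = np.array([1.0, 4.0, 9.0])
assert sum_sqrt_math(xs) == sum_sqrt_np(xs)  # identical results either way
```

Both versions compute the same thing; the post's question is purely which one Numba compiles to faster machine code.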
Statistics South Africa (Stats SA) is the government statistics agency of South Africa. They publish a lot of stats about SA, which you can find here: http://www.statssa.gov.za/. I've decided to start doing some analyses of the data they make available for the public to download. My first step is writing code to load the data they … Continue reading Analysing data from Stats SA
I've been using pytest for a few months now to help me test individual functions and algorithms as I develop them. So far I've been impressed by how easy it is to set up my tests. Reading some of my previous posts would tell you that I like optimising code for speed (perhaps a little too much for … Continue reading Testing and profiling Python code simultaneously using pytest and cProfile
My previous posts regarding the Numba package for Python used version 0.11. Recently, Numba has gone through some major changes in versions 0.12.1, 0.12.2 and 0.13. My last post explained how I had used the nopython keyword argument to speed up my code. Importantly, it showed that removing array allocation steps (i.e. np.zeros(...)) allowed Numba … Continue reading Numba nopython mode in versions 0.11 and 0.13 of Numba
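The pattern described there is to pass a preallocated output array into the jitted function instead of calling np.zeros inside it. A minimal sketch of that idea follows; note that modern Numba handles allocation inside nopython functions just fine, so this matters mainly for the old versions the post covers, and the fallback decorator only exists so the sketch runs without Numba installed:

```python
import math
import numpy as np

try:
    from numba import jit
except ImportError:  # fallback so the sketch runs without Numba
    def jit(*args, **kwargs):
        return lambda f: f

@jit(nopython=True)
def exp_into(x, out):
    # The output array is allocated by the caller, not inside the
    # jitted function, so the whole loop compiles in nopython mode.
    for i in range(x.shape[0]):
        out[i] = math.exp(x[i])
    return out

x = np.linspace(0.0, 1.0, 5)
out = np.empty_like(x)  # allocation kept outside the hot loop
exp_into(x, out)
```

Keeping allocation on the caller's side also lets the same buffer be reused across repeated calls, which is a win regardless of Numba version.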
Update 2014/12/23: I should have pointed out long ago that this post has been superseded by my post "Numba nopython mode in versions 0.11 and 0.13 of Numba". Let's say you are trying to accelerate a Python function whose inner loop calls a NumPy function; in my case that function was exp. Using the @autojit … Continue reading Learning Python: Numba nopython context for extra speed
My post comparing different ways to implement the bilateral filter, "Learning Python: Eight ways to filter an image", showed several versions attempting to use Numba. However, Numba failed to produce the amazing speed-ups others have reported. The fault is my own. Here is the new version 2 of my Numba code: The first, and … Continue reading Corrections to “Learning Python: Eight ways to filter an image” – fixing my Numba speed problems
Today's post is going to look at fast ways to filter an image in Python, with an eye towards speed and memory efficiency. My previous post put Numba to use, accelerating my code for generating the Burning Ship fractal by about 10x. This got me thinking about other places where I could use Numba. That, … Continue reading Learning Python: Eight ways to filter an image
Introduction: Parallel Python and the Burning Ship fractal I have previously used Matlab for a lot of my prototyping work, and its parfor (parallel for-loop) construct has been a relatively easy way to get code to use all the cores available in my desktop. Now that I am teaching myself Python, I decided to … Continue reading Learning Python: Parallel processing with the Parallel Python module, with some Numba added in
PyOpenCL Image objects take a shape tuple that gives (width, height, depth), but NumPy arrays specify shape in the order (rows, columns, ...) a.k.a. (height, width, ...), where the ellipsis indicates higher dimensions. The important point is that the width and height dimensions are swapped. The PyOpenCL documentation suggests creating the NumPy arrays in the … Continue reading Making PyOpenCL handle NumPy arrays as images
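That swap can be captured in a small helper. The function name here is hypothetical, and the sketch is plain Python (no PyOpenCL needed) since it only reorders the shape tuple:

```python
def cl_image_shape(array_shape):
    """Convert a NumPy (rows, cols[, depth]) shape to the
    (width, height[, depth]) order PyOpenCL Image objects expect."""
    rows, cols = array_shape[0], array_shape[1]
    return (cols, rows) + tuple(array_shape[2:])

# A 480-row by 640-column array describes a 640x480 (width x height) image:
assert cl_image_shape((480, 640)) == (640, 480)
```

Doing the reordering in one place avoids scattering (shape[1], shape[0]) swaps, and the associated bugs, across the code.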