Tag Archives: Numba

What’s faster in Numba @jit functions, NumPy or the math package?

Update 2016-01-16: Numba 0.23 released and tested – results added at the end of this post

A while back I was using Numba to accelerate some image processing and noticed a difference in speed depending on whether I used functions from NumPy or their equivalents from the standard Python math package inside the function I was accelerating with Numba.  If memory serves, I was using the exp function for something and noticed that replacing numpy.exp with math.exp in the function I had decorated with @jit made a noticeable difference in running time.  I didn’t investigate this any further at the time, but now, several versions of Numba and NumPy later, I wanted to find out what was causing this difference and which of the two is currently faster to use. Continue reading
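The shape of the comparison can be sketched like this.  This is a minimal benchmark of my own, not the code from the post, and it falls back to plain Python if Numba is not installed:

```python
import math
import numpy as np

try:
    from numba import jit
except ImportError:
    # No-op fallback so the sketch still runs without Numba (no speed-up, of course).
    def jit(*args, **kwargs):
        return lambda f: f

@jit(nopython=True)
def exp_with_math(a, out):
    # Scalar loop calling the standard-library math.exp
    for i in range(a.shape[0]):
        out[i] = math.exp(a[i])

@jit(nopython=True)
def exp_with_numpy(a, out):
    # Same loop, but calling NumPy's exp on each scalar
    for i in range(a.shape[0]):
        out[i] = np.exp(a[i])

a = np.linspace(0.0, 1.0, 1000)
out_math = np.empty_like(a)
out_np = np.empty_like(a)
exp_with_math(a, out_math)
exp_with_numpy(a, out_np)
assert np.allclose(out_math, out_np)
```

Timing each version with timeit (after one warm-up call, so compilation is excluded) is then enough to expose any gap between the two.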

Numba nopython mode in versions 0.11 and 0.13 of Numba

My previous posts regarding the Numba package for Python used version 0.11.  Recently, Numba has gone through some major changes in versions 0.12.1, 0.12.2, and 0.13.  My last post explained how I had used the nopython keyword argument to speed up my code.  Importantly, it showed that removing array allocation steps (i.e. np.zeros(…)) allowed Numba 0.11 to automatically generate code that did not use the Python C API.  I decided to see what the current state of affairs is with version 0.13.  The release notes for Numba are available here. Continue reading
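The allocation point can be illustrated with a toy pair of functions (my own sketch, not the code from the post): the first allocates its result with np.zeros inside the jitted function, while the second takes a preallocated output array, which is the form that let Numba 0.11 avoid the Python C API:

```python
import numpy as np

try:
    from numba import jit
except ImportError:
    # No-op fallback when Numba is unavailable
    def jit(*args, **kwargs):
        return lambda f: f

@jit(nopython=True)
def double_alloc(a):
    # Allocates the result inside the compiled function (the step I removed)
    out = np.zeros(a.shape[0])
    for i in range(a.shape[0]):
        out[i] = 2.0 * a[i]
    return out

@jit(nopython=True)
def double_noalloc(a, out):
    # Caller supplies the output array; no allocation in the hot loop
    for i in range(a.shape[0]):
        out[i] = 2.0 * a[i]

a = np.arange(5.0)
out = np.empty_like(a)
double_noalloc(a, out)
assert np.array_equal(double_alloc(a), out)
```

Hoisting the allocation out of the jitted function also pays off when the function is called repeatedly, since the same output buffer can be reused across calls.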

Learning Python: Numba nopython context for extra speed

Update 2014/12/23: I should have pointed out long ago that this post has been superseded by my post “Numba nopython mode in versions 0.11 and 0.13 of Numba”.

Let’s say you are trying to accelerate a Python function whose inner loop calls a NumPy function; in my case that function was exp.  Using the @autojit decorator from Numba should give you good results.  It certainly helped make my function faster, but I felt that more speed was hiding somewhere.  This post explores how I got back that speed.  Today’s code is available as an IPython notebook here: 2014-02-01-LearningPython-NumbaNopythonContext.ipynb.  First, I tested my belief by timing three ways to calculate exp of each entry in a large NumPy array.  This is the code for the functions: Continue reading
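The actual three functions are in the linked notebook; a plausible trio along the same lines (my own reconstruction, not the original code) would be a vectorized NumPy call, a jitted loop over np.exp, and a jitted loop over math.exp.  Note that @autojit was removed from later Numba releases, so this sketch falls back to plain Python when it is not importable:

```python
import math
import numpy as np

try:
    from numba import autojit
except ImportError:
    autojit = lambda f: f  # run as plain Python if autojit is unavailable

def exp_numpy(a):
    # Vectorized NumPy: no explicit loop at all
    return np.exp(a)

@autojit
def exp_loop_np(a):
    # Explicit loop calling NumPy's exp on each scalar
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        out[i] = np.exp(a[i])
    return out

@autojit
def exp_loop_math(a):
    # Explicit loop calling the standard-library math.exp
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        out[i] = math.exp(a[i])
    return out

a = np.random.rand(1000)
reference = exp_numpy(a)
assert np.allclose(exp_loop_np(a), reference)
assert np.allclose(exp_loop_math(a), reference)
```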

Corrections to “Learning Python: Eight ways to filter an image” – fixing my Numba speed problems

My post comparing different ways to implement the bilateral filter, “Learning Python: Eight ways to filter an image”, showed several versions attempting to use Numba.  However, Numba failed to produce the amazing speed-ups others have reported.  The fault is my own.  Here is the new version 2 of my Numba code: Continue reading

Learning Python: Eight ways to filter an image

Today’s post is going to look at fast ways to filter an image in Python, with an eye towards speed and memory efficiency.  My previous post put Numba to use, accelerating my code for generating the Burning Ship fractal by about 10x.  This got me thinking about other places where I could use Numba.  That, combined with reading some SciPy and scikit-learn documentation got me onto the topic of filtering an image.  I’m going to focus on 2D gray-scale images for this post.  Oddly enough, there isn’t a bilateral filter implemented in scipy.ndimage so I’m going to tackle that one.  My test image is the famous Lenna Continue reading
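For reference, a bilateral filter replaces each pixel with a weighted average of its neighbours, where the weight combines a fixed spatial Gaussian with a range Gaussian on intensity differences.  Here is a minimal brute-force sketch of my own (not one of the eight versions from the post) to make the definition concrete:

```python
import numpy as np

def bilateral(im, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a 2-D float image."""
    H, W = im.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Spatial kernel: fixed Gaussian over the neighbourhood offsets
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.empty_like(im)
    for i in range(H):
        for j in range(W):
            # Clip the window at the image borders
            y0, y1 = max(i - radius, 0), min(i + radius + 1, H)
            x0, x1 = max(j - radius, 0), min(j + radius + 1, W)
            patch = im[y0:y1, x0:x1]
            sw = spatial[y0 - i + radius:y1 - i + radius,
                         x0 - j + radius:x1 - j + radius]
            # Range kernel: Gaussian on intensity difference to the centre pixel
            rw = np.exp(-(patch - im[i, j])**2 / (2.0 * sigma_r**2))
            w = sw * rw
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# A sharp step edge survives filtering while flat regions are smoothed
step = np.zeros((4, 8))
step[:, 4:] = 1.0
assert np.allclose(bilateral(step, radius=1, sigma_r=0.05), step, atol=1e-6)
```

The range kernel is what distinguishes this from a plain Gaussian blur: pixels across a strong intensity edge get near-zero weight, so edges are preserved.  The doubly nested pixel loop is exactly the kind of code the post tries to accelerate.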

Learning Python: Parallel processing with the Parallel Python module, with some Numba added in

Introduction: Parallel Python and the Burning Ship fractal

I have previously used Matlab for a lot of my prototyping work, and its parfor (parallel for-loop) construct has been a relatively easy way to get code to use all the cores available in my desktop.  Now that I am teaching myself Python, I decided to look for something similar.  My first stop is the Parallel Python module, a.k.a. PP.  One thing I like about PP is that it can also run on multiple computers. Continue reading