If you haven't checked out Part 1 you probably should start there.
In this second installment of my intermediate Python instructable series, I'll show you one of the nifty things you can do with iterators, in a way that's faster and more elegant than other approaches.
All examples (with more complete code I've omitted here for simplicity) can be found at https://gitlab.matevi.sh/blog-posts/iterators-2/.
There's a good chance you'll encounter this at some point while coding: you've got a list of objects and you want to access them a few at a time, but only advancing one at a time. This sort of iteration pattern is referred to as a rolling window. For example:
```python
>>> list(rolling_window(xrange(5), 3))
[(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```
Rolling windows are especially common with time series — a sequence of data points made over a time interval. Like just about everything else in programming, there is no lack of solutions to the problem.
Tackling it Head On
The naive approach involves a for loop and array slicing:

```python
def rolling_window(iterable, window_size):
    """ Return a moving window of size `window_size` over `iterable` """
    return [iterable[i:i + window_size]
            for i in xrange(len(iterable) - window_size + 1)]
```
All we're doing is taking a series of slices from the input iterable and making a list out of them. For the input rolling_window(range(5), 3), your output is essentially [li[0:3], li[1:4], li[2:5]], where li is the input list.
Like a lot of Python one-liners, the simplicity is misleading: while it's not a morass of nested list comprehensions, there are a couple of problems, both involving how Python represents the underlying data.
- Array slicing. If you're using CPython[3], every time you take a slice of an array, it gets copied.
- It's all in memory. The first argument of rolling_window, the input iterator, would need to support slicing. Much of the time, if an object supports slicing, the underlying data needs to be stored entirely in memory.
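To see that limitation concretely, here's a small sketch (the squares generator is just an illustration, not from the original code): a generator produces values lazily, so it supports neither slicing nor len(), and the naive implementation can't accept one.

```python
def squares(n):
    # A generator: produces values lazily, never storing them all at once
    for i in range(n):
        yield i * i

gen = squares(5)
try:
    gen[0:3]  # the naive rolling_window would need this to work
except TypeError:
    print("generators don't support slicing")

try:
    len(gen)  # ...and this
except TypeError:
    print("generators don't support len()")
```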
The Functional Way
Now let's try to find a solution that doesn't require indexed access to individual items in an iterator. We're going to need two of the functions provided to us by the itertools module, which is part of the Python standard library.
The first function is tee(iterable, n=2). Given an iterator, it returns a tuple of n iterators, each an almost-copy of the input iterator. What tee really does is keep track of every time one of the 'copied' iterators gets the next element. If that iterator is the furthest ahead of all the copies, tee also stores the output temporarily until all of the iterators have iterated over that element. For instance:
```python
from itertools import tee

zero_through_nine_iterator = iter(xrange(10))
iter1, iter2 = tee(zero_through_nine_iterator, 2)
zero_through_four = [next(iter1) for _ in xrange(5)]
```
Currently, there are 5 values being stored by tee, namely the first five, which iter1 has iterated over once. Once iter2 iterates over these same values, the memoized values can be forgotten.
```python
also_zero_through_four = [next(iter2) for _ in xrange(5)]
```
Remember: do not use the input iterator after applying tee. tee needs to track every next() call to the wrapped iterator so it can memoize output values. For instance:
```python
five = next(zero_through_nine_iterator)
six = next(iter1)       # returns 6, not 5 as expected
also_six = next(iter2)  # iter2 misses 5 as well
```
One more thing – Introducing izip
izip is just an iterable zip. Rather than taking two iterables and returning the zipped result all at once, it will yield one zipped tuple at a time. For instance:
```python
from itertools import izip

iter1, iter2 = xrange(1, 3), xrange(3, 5)
print zip(iter1, iter2)  # [(1, 3), (2, 4)]

iter1, iter2 = xrange(1, 3), xrange(3, 5)
izipped = izip(iter1, iter2)
print next(izipped)  # (1, 3)
print next(izipped)  # (2, 4)
```
Again, we do this to avoid storing the complete output in memory. The space used is constant, rather than N (assuming we don't need to store references to the output of izip).
All Together Now
```python
from itertools import tee, izip

def rolling_window(iterable, window_size=2):
    """ Create an iterable moving window of `window_size` over `iterable` """
    iterables = tee(iterable, window_size)
    # Step each tee'd iterator forward:
    # the first iterator starts on element 1, the second starts on element 2, etc.
    for i in xrange(1, window_size):
        for it in iterables[i:]:
            next(it, None)
    # Zip together the offset iterators,
    # e.g. (0, 1, 2) then (1, 2, 3)
    return izip(*iterables)
```
Is it faster?
Yup, it's almost twice as fast to run[1]. More to the point, if we used the iterator version of rolling_window as intended (rather than applying list[2]), doing our operations on its elements one at a time, it would occupy a constant amount of memory.
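To sketch what "one at a time" consumption looks like, here's a Python 3 rendering of the function above (the builtin zip is lazy in Python 3, so it stands in for izip; moving_average is a hypothetical consumer, not part of this post):

```python
from itertools import tee

def rolling_window(iterable, window_size=2):
    # Same algorithm as above, written for Python 3
    iterables = tee(iterable, window_size)
    for i in range(1, window_size):
        for it in iterables[i:]:
            next(it, None)
    return zip(*iterables)  # zip is lazy in Python 3, like izip in Python 2

def moving_average(iterable, window_size):
    # Only one window is alive at a time, so memory use stays constant
    for window in rolling_window(iterable, window_size):
        yield sum(window) / float(window_size)

print(list(moving_average(range(5), 3)))  # [1.0, 2.0, 3.0]
```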
```python
>>> timeit("list(rolling_window(xrange(100000), 10))",
...        setup="""
... from itertools import tee, izip
... def rolling_window(iterable, size):
...     iterables = tee(iterable, size)
...     for i in xrange(1, size):
...         for it in iterables[i:]:
...             next(it, None)
...     return izip(*iterables)
... """, number=1000)
18.346065998077393
>>> timeit("rolling_window(range(100000), 10)",
...        setup="""
... def rolling_window(iterable, window_size):
...     return [iterable[i:i + window_size]
...             for i in xrange(len(iterable) - window_size + 1)]
... """, number=1000)
32.78852891921997
```
What about Yield?
Yield has been a part of Python since version 2.3, and in its current form since 2.5. I'm not going to go fully into it here, because https://wiki.python.org/moin/Generators does a much better job of explaining it. Go read it if you're unfamiliar with generators – I'll wait for you.
Alright, let's get to it.
Rolling windows could be implemented with yield as follows:
```python
def rolling_window(iterable, window_size):
    for i in xrange(len(iterable) - window_size + 1):
        yield iterable[i:i + window_size]
```
Rather than building up the entire list of slices in the naive example above, we're only returning one slice. Generators, despite some complexity under the hood, are iterators; that is, you can call
next on them.
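For instance (count_up is a throwaway generator for illustration):

```python
def count_up(n):
    # A trivial generator
    for i in range(n):
        yield i

gen = count_up(3)
print(next(gen))         # 0
print(next(gen))         # 1
print(iter(gen) is gen)  # True: an iterator's iter() returns itself
```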
Do you see any problems with this approach, though? We still need to be able to reference sub-elements of iterable, which often means building it fully in memory first.
If we can't access all elements of iterable, we'll need to make sure to hang on to the last window_size elements to build our window while stepping through the iterable.
We could use a list, but we'd probably want to yield something like a tuple. Why? Tuples in Python are immutable, whereas lists are mutable. For instance, when I say a = tuple(xrange(5)), a refers to a structure in memory holding the numbers 0 through 4.
If I were to take a slice of it, for instance b = a[1:], a new tuple is created in memory for b. Likewise, if I try to set an element in b:
```python
>>> b[0] = 2
TypeError: 'tuple' object does not support item assignment
```
Python won't let me mutate (change the state of) b, as it is immutable. Most operations that appear to modify immutable objects, for instance joining two tuples together:
```python
>>> a = tuple(xrange(0, 3))
>>> b = tuple(xrange(3, 7))
>>> a + b
(0, 1, 2, 3, 4, 5, 6)
```
actually result in a whole new object getting created. While this can be inefficient (creating a copy of a large object can be expensive in both time and space), there are advantages.
Because we're yielding these tuples to other code, if we were to yield a list (a mutable object), the consuming code[4] could modify it, and because our iterator relies on the state of the list to build the next window, future windows could be inaccurate. Sure, we could tell users of our function not to modify the yielded list, but that's just another thing to worry about.
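To make that hazard concrete, here's a hypothetical list-yielding variant; a consumer that mutates a yielded window silently corrupts the one that follows:

```python
def rolling_window_lists(iterable, window_size):
    # Yields mutable lists instead of tuples -- don't do this
    iterator = iter(iterable)
    items = [next(iterator) for _ in range(window_size)]
    while len(items) == window_size:
        yield items
        items = items[1:] + [next(iterator)]

windows = rolling_window_lists(range(5), 2)
first = next(windows)    # [0, 1]
first[1] = 99            # the consuming code mutates the yielded window...
print(next(windows))     # [99, 2] -- ...and the next window is corrupted
```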
Keeping this in mind, we'll do everything with tuples.
```python
def rolling_window(iterable, window_size):
    """
    >>> list(rolling_window(xrange(4), 2))
    [(0, 1), (1, 2), (2, 3)]
    >>> list(rolling_window(xrange(4), 4))
    [(0, 1, 2, 3)]
    >>> list(rolling_window(xrange(0), 2))
    []
    >>> list(rolling_window(xrange(5), 6))
    []
    """
    iterable = iter(iterable)
    # Get the first window to return
    items = tuple(next(iterable) for _ in xrange(window_size))
    # If the number of items < window size, we don't have a full window
    while len(items) == window_size:
        # Yield and advance the window
        yield items
        items = items[1:] + (next(iterable),)
```
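For comparison, the sliding_window recipe in the itertools documentation takes roughly this approach but keeps the trailing elements in a collections.deque with maxlen, which discards the oldest element automatically as new ones are appended. A Python 3 sketch along those lines:

```python
from collections import deque
from itertools import islice

def sliding_window(iterable, n):
    # Prime the deque with the first n - 1 elements...
    iterator = iter(iterable)
    window = deque(islice(iterator, n - 1), maxlen=n)
    # ...then each new element completes a window of size n
    for x in iterator:
        window.append(x)
        yield tuple(window)

print(list(sliding_window(range(5), 3)))  # [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
```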
[1] list is actually significantly faster than the naive list comprehension in both Python 2 and 3 (at least for CPython) ↩
[2] When you call list on your iterable, it's going to negate the memory advantages you might have gained by using iterators in the first place, as Python will construct a list in memory, pointing to each item in the iterable. ↩
[3] Python is a programming language. If you write code in Python, you still need an interpreter to understand and run your code. CPython is one such interpreter. In fact, it's the most popular by far, and chances are you're using it. To read more, check out the StackOverflow discussion on it. ↩
[4] When I say consuming code, I mean the code that takes the items from the iterator and does something with them. ↩
```python
>>> from timeit import timeit
>>> timeit('[i for i in xrange(10000)]', number=10000) / timeit('list(xrange(10000))', number=10000)
4.101425225914124
>>> timeit('[i for i in xrange(50000)]', number=10000) / timeit('list(xrange(50000))', number=10000)
4.641357022823394
```