Sunday, December 21, 2014

Abstract Data Structures

What are Abstract Data Types (ADTs)? You can find many definitions, but it often remains fuzzy what the concept actually means and what it is good for. For instance, one could define an ADT as follows:

An Abstract Data Type is a formal definition of the behavior of a data object with time (and space) complexity guarantees but no implementation details.

Doesn't help much, does it? Is an abstract class an ADT ... possibly. Do an array and a bunch of functions constitute an ADT ... could be. Let us take a concrete example and play with it. The standard example for an ADT is a stack, which is defined as follows:

 
ADT: Stack
operations:        complexity:
  push(item)         O(1)
  item <- pop()      O(1)
behavior:
  Last-in-first-out (LIFO)

It has a push operation to put an item on the stack and a pop operation to remove an item from the stack. Both operations take constant time and have LIFO behavior. A typical Python implementation of a stack in object oriented style could be this:

  
# object oriented
class Stack:
    def __init__(self) :
        self.items = []
        
    def push(self, item) :
        self.items.append(item)
        
    def pop(self) :
        return self.items.pop()

Since list.append() and list.pop() are (amortized) O(1) operations in Python (see here), the above stack.push() and stack.pop() operations also take constant time. Now we can add items to and remove items from the stack in constant time and in LIFO order. For instance,

  
stack = Stack()
for i in range(5):
    stack.push(i)
for i in range(5):
    print(stack.pop(), end=" ")

will print out:

4 3 2 1 0 

Instead of a Python list, which actually is a dynamically growing array, we could have used a linked list, an array of fixed size, or any other data structure to implement the stack. As long as the implementation has O(1) push and pop operations with LIFO behavior it does not matter. We can also implement the stack type in an imperative or functional style. For the imperative style, we simply take a Python list and define push and pop functions:

  
# imperative
def push(stack, item):
    stack.append(item)
def pop(stack):
    return stack.pop()

stack = []
push(stack, 1)
push(stack, 2)
item = pop(stack)

Functional programming tends to prefer immutable data structures. Let's use immutable tuples to implement a stack:

  
# functional
def push(stack, item):
    return stack + (item,)
def pop(stack):
    return stack[-1], stack[:-1]

stack = push(push((), 1), 2)
item, stack = pop(stack)
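One consequence of using immutable tuples is persistence: push and pop return new stacks and never modify an existing one, so older versions of the stack remain valid. Note, however, that tuple concatenation copies the whole stack, so this push is O(n) rather than the O(1) the ADT demands. A small demonstration:

```python
# functional stack on immutable tuples; push is O(n) here,
# because tuple concatenation copies the entire stack
def push(stack, item):
    return stack + (item,)

def pop(stack):
    return stack[-1], stack[:-1]

s1 = push((), 1)
s2 = push(s1, 2)
item, rest = pop(s2)
print(item)   # 2
print(rest)   # (1,)
print(s2)     # (1, 2)  <- s2 is untouched by pop
```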

These are all implementations of the ADT "stack". Since Python already has a list with append and pop methods, this is trivial. Let us do something a bit more interesting. A common interview question is to implement a queue using only stacks as the underlying data structure. A queue is another common ADT, defined as follows:

 
ADT: Queue
operations:         complexity:
  enqueue(item)       O(1)
  item <- dequeue()   O(1)
behavior:
  First-in-first-out (FIFO)

The key insight is that a stack reverses the order of the items (LIFO) and that reversing twice gives you the FIFO behavior required for a queue. We will use an instack that stores all enqueued items, and an outstack from where elements are dequeued. To achieve the required FIFO behavior of the queue, we move all items from the instack to the outstack (using push and pop operations) whenever dequeue() is called and the outstack is empty:

  
class Queue:
    def __init__(self) :
        self.instack = Stack()
        self.outstack = Stack()
        
    def enqueue(self, item) :
        self.instack.push(item)
        
    def dequeue(self) :
        if self.outstack.isEmpty():
            while not self.instack.isEmpty():
                self.outstack.push(self.instack.pop())
        return self.outstack.pop()
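The FIFO behavior is easy to verify with a quick self-contained sketch. Here I substitute plain Python lists for the Stack class (list.append and list.pop provide the same O(1) push and pop), so the snippet runs on its own:

```python
class Queue:
    def __init__(self):
        self.instack = []    # receives enqueued items
        self.outstack = []   # serves dequeued items in FIFO order

    def enqueue(self, item):
        self.instack.append(item)

    def dequeue(self):
        if not self.outstack:          # refill only when empty
            while self.instack:
                self.outstack.append(self.instack.pop())
        return self.outstack.pop()

queue = Queue()
for i in range(5):
    queue.enqueue(i)
print([queue.dequeue() for _ in range(5)])  # [0, 1, 2, 3, 4]
```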

You may have noticed that I am using the so-far-undefined method isEmpty() to test whether a stack is empty. It is not essential, since we could have counted the items added to and removed from the stacks to achieve the same, but most stack implementations provide an isEmpty() method for convenience (see below).

  
class Stack:
    ...
    def isEmpty(self):
        return self.items == []

What is important, however, is that isEmpty() is also an O(1) operation. Now let's check whether our queue implementation actually follows the ADT specification. The enqueue method only calls the push method, which is an O(1) operation, and therefore enqueuing an item is a constant-time operation as required. The dequeue method is a bit more complex. Let's think about the simple case of enqueuing and then immediately dequeuing a single item. In this case, we call outstack.isEmpty() once, instack.isEmpty() twice, and outstack.push(instack.pop()) and outstack.pop() once each, and since these are all O(1) operations, dequeuing is an O(1) operation as well.

Now for the more complex case. Assume we enqueue n items on the instack; that is n O(1) operations, or O(n) in total. If we then dequeue those n items, the while loop runs only during the first dequeue call and moves all n items from the instack to the outstack, an O(n) operation that occurs just once. In addition, outstack.pop() is called once per dequeue, which adds up to another O(n) over all n calls. Together, the n dequeue operations take O(n) + O(n) = O(n) time in total, so the amortized time complexity per dequeue is O(n)/n = O(1).
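This amortized argument can also be checked empirically. In the sketch below (my own instrumentation, not part of the implementation above), a counting stack tallies its push and pop operations; each item is pushed at most twice and popped at most twice, so n enqueues followed by n dequeues perform exactly 4n primitive operations, a constant 4 per item:

```python
class CountingStack:
    """A stack that counts its push/pop operations."""
    def __init__(self):
        self.items = []
        self.ops = 0

    def push(self, item):
        self.ops += 1
        self.items.append(item)

    def pop(self):
        self.ops += 1
        return self.items.pop()

    def isEmpty(self):
        return self.items == []

def queue_ops(n):
    """Total primitive stack operations for n enqueues + n dequeues."""
    instack, outstack = CountingStack(), CountingStack()
    for i in range(n):                    # n enqueues: one push each
        instack.push(i)
    for _ in range(n):                    # n dequeues
        if outstack.isEmpty():
            while not instack.isEmpty():
                outstack.push(instack.pop())
        outstack.pop()
    return instack.ops + outstack.ops

print(queue_ops(1000) / 1000)  # 4.0 -> constant operations per item
```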

In what way did the concept of ADTs help here? When implementing the queue and analyzing its time complexity, we did not have to think about the implementation of the stack. It simplified the reasoning and implementation, being able to rely on guaranteed O(1) complexity of all stack operations.

Sunday, November 30, 2014

Construct binary search tree from sorted numbers

A not uncommon interview question is to construct a binary search tree from a list of sorted numbers. This is actually very simple. Let us start with the definition of the data structure. The following class Node has a value and a left and right sub-tree.

  
class Node(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

We now can create a tree as follows:

  
tree = Node(1, Node(11, Node(111), Node(112)), Node(12))

To print out a tree we define a function show that recursively traverses the tree nodes and prints out the node values, indented by their tree depth:

  
def show(node, depth=0):
    if not node: return
    print(" " * depth + str(node.value))
    show(node.left, depth + 2)
    show(node.right, depth + 2)

If we run show(tree), we get the following output:

  
1
  11
    111
    112
  12

Now everything is in place. The following function construct takes a list of sorted numbers and recursively creates the binary search tree.

  
def construct(numbers):
    if not numbers: return None
    mid = len(numbers) // 2
    return Node(numbers[mid], 
                construct(numbers[:mid]), 
                construct(numbers[mid+1:]))

The first line checks whether the numbers list is empty. In this case we are done and return None. Otherwise we find the index mid of the element in the middle of the numbers list (this is Python 3 and we perform an integer division with '//'; in Python 2.x a simple '/' would do). Now comes the elegant part.

To construct the tree, we create and return a node that contains the number at the middle position as its value (numbers[mid]). The left sub-tree of the node is created by calling construct for the numbers to the left of the middle element (numbers[:mid]) and the right sub-tree is created by calling construct for the numbers to the right of the middle element (numbers[mid+1:]). That's it! Let's try it out:

  
numbers = [1,5,7,10,40,50]      
show( construct(numbers) )  

10
  5
    1
    7
  50
    40
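A quick sanity check that construct really builds a binary search tree: an in-order traversal must return the numbers in sorted order. The sketch below repeats the definitions from above so it runs on its own:

```python
class Node(object):
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def construct(numbers):
    if not numbers: return None
    mid = len(numbers) // 2
    return Node(numbers[mid],
                construct(numbers[:mid]),
                construct(numbers[mid+1:]))

def inorder(node):
    """Left sub-tree, node value, right sub-tree: sorted for a BST."""
    if not node: return []
    return inorder(node.left) + [node.value] + inorder(node.right)

numbers = [1, 5, 7, 10, 40, 50]
print(inorder(construct(numbers)))  # [1, 5, 7, 10, 40, 50]
```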

Too easy ;)

Saturday, November 29, 2014

Class for nucleotide sequences with compression

DNA sequences are composed of four nucleotides (Adenine = A, Cytosine = C, Thymine = T, Guanine = G). A common representation of a DNA sequence is a simple string, which is fine for short sequences but becomes memory-intensive for complete genomes. For instance, the human genome consists of about 3.2 billion base pairs, which equates to roughly 3GB if each nucleotide is stored as a byte. However, genomes contain various types of repeated subsequences, and repetitive DNA sequences comprise approximately 50% of the human genome.

While there are many ways to take advantage of the redundant nature of genomic sequences, here I want to play with a simple implementation of a DNA sequence class in Scala that aims to store sequence data more efficiently by performing a run-length encoding. Apart from more complex repeats of sub-sequences, DNA sequences also contain many stretches of repeated nucleotide symbols. Here is a fragment of Escherichia coli:

TGGAAAGCAATGCCAGGCAGGGGCAGGTGGCCACCGTCCTCTCTGCCCCCG...

Instead of storing the sequence as a string with repeated characters, a run-length encoding stores each character together with the number of times it appears. This way a sequence such as GGCAGGGG could be represented as (G,2)(C,1)(A,1)(G,4). Since 4 nucleotides can be represented by 2 bits and most stretches of repeated characters are short (a 6-bit number should do), one could pack a run into one byte. But here I want to keep things super simple and represent a run by a case class

  
case class Run(symbol:Char, location:Int)

where symbol is the nucleotide symbol and location is the position of the first character of the run. Why store the location of the nucleotide and not the number of repeats, as suggested above? The reason is that we want random access on the encoded sequence. We need to be able to efficiently retrieve the symbol at a specific location in the genome. A classic run-length encoding would require a linear search over the runs to find the run that contains the location. Here we store the location directly and can infer the length of a run from the difference between the locations of two subsequent runs. To give an example: the sequence

symbols:    GGCAGGGG
locations:  01234567

would be encoded as Run(G,0)Run(C,2)Run(A,3)Run(G,4). Before we look at the Scala code for the encoding, let us add one refinement. You will notice that for the last run we actually don't know the run length, since there is no subsequent run whose start position we could subtract. To avoid this problem we add a terminator symbol '$' to the end of the sequence

symbols:    GGCAGGGG$
locations:  012345678

and the encoding becomes Run(G,0)Run(C,2)Run(A,3)Run(G,4)Run($,8). Here the implementation of the function to encode a DNA sequence:

  
import scala.collection.immutable.Seq

def encode(dna: Iterable[Char]) = {    
  var runs = Seq(Run(dna.head, 0))
  for ((s, p) <- (dna ++ Iterable('$')).zipWithIndex if runs.last.symbol != s) 
    runs :+= (Run(s, p))
  runs
}

We store the collection of runs as an immutable sequence and therefore import immutable.Seq. The encode function takes an Iterable over characters, which allows us to read a large genome from a file without having to read it completely into memory first. The collection of runs is initialized with the first DNA symbol (dna.head) at location zero. Biologists actually use locations starting at 1, but we avoid this additional complexity here.

Before looping over the DNA symbols we append the sequence terminator '$' to the iterable (dna ++ Iterable('$')) and then zip with the symbol locations. In each iteration we test whether the last symbol in the runs collection is different from the current symbol (runs.last.symbol != s) and, if that is the case, add a new run to the collection (runs :+= Run(s, p)).
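The location-based encoding is also easy to prototype in Python, which makes it simple to check that decoding recovers the original sequence. This is just an illustrative sketch alongside the Scala version:

```python
def encode(dna):
    """Store (symbol, start location) per run, plus a '$' terminator."""
    runs = [(dna[0], 0)]
    for p, s in enumerate(dna + "$"):
        if runs[-1][0] != s:
            runs.append((s, p))
    return runs

def decode(runs):
    """Run lengths are differences between consecutive start locations."""
    return "".join(s * (runs[i + 1][1] - runs[i][1])
                   for i, (s, _) in enumerate(runs[:-1]))

print(encode("GGCAGGGG"))
# [('G', 0), ('C', 2), ('A', 3), ('G', 4), ('$', 8)]
print(decode(encode("GGCAGGGG")))  # GGCAGGGG
```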

Using a for-loop to construct the sequence of runs is not really functional style. We could use fold to achieve the same but the code becomes essentially unreadable:

  
def encode(dna: Iterable[Char]) = {    
  (dna ++ Iterable('$')).zipWithIndex.foldLeft((dna.head, 0, Seq(Run(dna.head, 0))))((o, e) => 
     if (o._1 == e._1) o else (e._1, e._2, o._3 :+ Run(e._1, e._2)))._3
}

While this can be decomposed to make it a bit more readable, it remains a challenge to the eye:

  
def encode(dna: Iterable[Char]) = {    
  val runs = Seq(Run(dna.head, 0))
  def acc(o: (Char, Int, Seq[Run]), e: (Char, Int)) = (o, e) match {
     case ((os, ol, or), (s, l)) => if (os == s) o else (s, l, or :+ Run(s, l))      
  }            
  (dna ++ Iterable('$')).zipWithIndex.foldLeft((dna.head, 0, runs))(acc)._3
}

Let us stick with the most readable version and implement a class to create encoded DNA sequences:

  
import scala.collection.immutable.Seq

class DNA(protected val runs: Seq[Run])

object DNA {
  private val End = '$'
    
  def apply(dna: Iterable[Char]) = {    
    var runs = Seq(Run(dna.head, 0))
    for ((s, p) <- (dna ++ Iterable(End)).zipWithIndex if runs.last.symbol != s) 
      runs :+= (Run(s, p))
    new DNA(runs)    
  }
} 

Now we can create a DNA sequence by calling DNA("GGCAGGGG"). In the next step we make the class an Iterable[Char], which will come in handy later. Only the iterator method is strictly required, but we also implement length:

  
class DNA(protected val runs: Seq[Run]) extends Iterable[Char] {
  def length: Int = runs.last.location
  
  override def iterator = 
    for (i <- (0 to runs.length-2).iterator; 
         _ <- runs(i).location until runs(i+1).location) yield runs(i).symbol

}

As you can see, length is easy. We simply take the location of the last run, which is equivalent to the length of the decoded sequence. The iterator is a bit more tricky. We have an outer loop over all runs except the last one and convert to an iterator ((0 to runs.length-2).iterator) because we need to return an iterator. The inner loop returns the repeated symbols and their number is inferred from the location difference of the current and the next run (runs(i).location until runs(i+1).location).

With an iterator over the sequence symbols at hand, implementing the toString method becomes trivial:

  
class DNA(protected val runs: Seq[Run]) extends Iterable[Char] {
  ...
  override def toString() = iterator.mkString
}

We can also very easily concatenate DNA sequences

  
class DNA(protected val runs: Seq[Run]) extends Iterable[Char] {
  ...
  // drop the terminator run of the first sequence and shift the
  // locations of the second sequence by the length of the first
  def ++(seq: DNA) = 
    new DNA(runs.init ++ seq.runs.map(r => Run(r.symbol, r.location + length)))
}

Note that concatenation returns a new DNA sequence and not a string. This is already very nice and we can now create, concatenate and print sequences

  
print(DNA("ACTG") ++ DNA("TTGAGG"))

but one fundamental method is still missing. We want to be able to retrieve the symbol at an arbitrary location of the DNA sequence. We could iterate over the runs, find the run that contains the requested location and return its symbol

  
class DNA(protected val runs: Seq[Run]) extends Iterable[Char] {
  ...
  def apply(location: Int): Char = {
    for (i <- (1 to runs.length-1)) 
      if(runs(i).location > location) 
        return runs(i-1).symbol
    DNA.End 
  }  
}

but it would be of O(n) complexity. However, since the collection of runs is sorted in increasing order of location, we can employ binary search and improve this to O(log n). First we implement a method runIndex that returns the index of the run that contains the location (or of the last run), and with that the implementation of apply is simple:

  
private def runIndex(location: Int) = {
  def bsearch(lo: Int, hi: Int): Int = {
    if (lo > hi) return hi
    val mid = lo + (hi - lo) / 2
    runs(mid) match {
      case Run(_, l) if (l == location) => mid
      case Run(_, l) if (l < location) => bsearch(mid + 1, hi)
      case Run(_, l) if (l > location) => bsearch(lo, mid - 1)
    }
  }
  bsearch(0, runs.length - 1)
}
  
def apply(location: Int): Char = runs(runIndex(location)).symbol
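As an aside, this lookup is exactly what Python's standard bisect module offers: bisect_right on the sorted run start locations returns the insertion point, and the run containing the location sits directly before it. An illustrative Python sketch, separate from the Scala class:

```python
from bisect import bisect_right

# runs for GGCAGGGG$ as (symbol, start location) pairs
runs = [("G", 0), ("C", 2), ("A", 3), ("G", 4), ("$", 8)]
locations = [loc for _, loc in runs]

def symbol_at(location):
    # index of the last run starting at or before location: O(log n)
    return runs[bisect_right(locations, location) - 1][0]

print("".join(symbol_at(i) for i in range(8)))  # GGCAGGGG
```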

Also beautifully simple now is the implementation of a method to retrieve a slice of the DNA sequence:

  
override def slice(start: Int, end: Int) = DNA(start until end map apply)

However, this method is rather inefficient since it performs a binary search for each element of the slice, creates a string, and then converts the string back to runs. It would be more efficient to find the first and the last run of the slice, take the runs in between, and create a new DNA sequence from those. Here we go:

  
override def slice(start: Int, end: Int) = {
  var runsSlice = Seq[Run]()
  // end is exclusive, so the last run is the one containing end - 1
  val (first, last) = (runIndex(start), runIndex(end - 1))
  runsSlice :+= Run(runs(first).symbol, 0)
  for (i <- first + 1 to last; run = runs(i)) 
    runsSlice :+= Run(run.symbol, run.location - start)
  runsSlice :+= Run(DNA.End, end - start)  
  new DNA(runsSlice)
}  

Note that we have to correct the locations of the first and last runs in case the start or end position of the slice does not exactly match a run location. Finally, let us glue it all together into a complete piece of code:

  
import scala.collection.immutable.Seq

case class Run(symbol:Char, location:Int)

class DNA(protected val runs: Seq[Run]) extends Iterable[Char] {
  
  private def runIndex(location: Int) = {
    def bsearch(lo: Int, hi: Int): Int = {
      if (lo > hi) return hi
      val mid = lo + (hi - lo) / 2
      runs(mid) match {
        case Run(_, l) if (l == location) => mid
        case Run(_, l) if (l < location) => bsearch(mid + 1, hi)
        case Run(_, l) if (l > location) => bsearch(lo, mid - 1)
      }
    }
    bsearch(0, runs.length - 1)
  }
  
  def apply(location: Int): Char = runs(runIndex(location)).symbol
  
  def length: Int = runs.last.location
  
  override def iterator = 
    for (i <- (0 to runs.length-2).iterator; 
         _ <- runs(i).location until runs(i+1).location) yield runs(i).symbol
         
  // drop the terminator run of the first sequence and shift the
  // locations of the second sequence by the length of the first
  def ++(seq: DNA) = 
    new DNA(runs.init ++ seq.runs.map(r => Run(r.symbol, r.location + length)))
    
  override def slice(start: Int, end: Int) = {
    var runsSlice = Seq[Run]()
    // end is exclusive, so the last run is the one containing end - 1
    val (first, last) = (runIndex(start), runIndex(end - 1))
    runsSlice :+= Run(runs(first).symbol, 0)
    for (i <- first + 1 to last; run = runs(i)) 
      runsSlice :+= Run(run.symbol, run.location - start)
    runsSlice :+= Run(DNA.End, end - start)  
    new DNA(runsSlice)
  }  
    
  override def toString() = iterator.mkString
}


object DNA {
  private val End = '$'
    
  def apply(dna: Iterable[Char]) = {    
    var runs = Seq(Run(dna.head, 0))
    for ((s, p) <- (dna ++ Iterable(End)).zipWithIndex if runs.last.symbol != s) 
      runs :+= (Run(s, p))
    new DNA(runs)    
  }
}

Is the run-length encoding implemented here actually saving any memory? In most cases probably not! This is a simplistic toy implementation, and the fraction of longer repetitive stretches of DNA symbols is not that large. It is possible to create longer stretches of repeated nucleotides using the Burrows–Wheeler transform, which can be computed in linear time. And should you find the binary search too slow, you could try an interpolation search. Finally, there are more advanced, more efficient, but also more complex compression methods for DNA sequences available (see here).

Thursday, November 27, 2014

Longest word

I recently came across this post, where Joey Gibson presents two implementations of a function that finds the longest word in an array of strings. The procedural implementation, which is often what you start with when converting from Java to Scala, looks like this:

def longestWord(words: Array[String]) = {
  var word = words(0)
  var idx = 0
 
  for (i <- 1 until words.length)
    if (words(i).length > word.length) {
      word = words(i)
      idx = i
    }
 
  (word, idx)
}

It is a bit nicer than a corresponding implementation in Java but doesn't buy you much. The functional version Joey presented is much shorter:

def longestWord(words: Array[String]) =
  (("", -1) /: words.zipWithIndex) {(old, cur) =>
    if (cur._1.length > old._1.length) cur
    else old
  }

It always gives me a kick to see how much shorter and more readable functional Scala code is in comparison to procedural Java code. In this case, however, readers rightfully argued that this version isn't very readable. But luckily we can make it readable and even shorter! Firstly, instead of using a fold left (/:), we can use reduceLeft:

  def longestWord(words: Iterable[String]) = 
    words.zipWithIndex.reduceLeft{(a, b) => 
      if (a._1.length > b._1.length) a else b}

which already is an improvement in readability. Note that I also changed the parameter type from Array[String] to Iterable[String]. This way the function works for lists, arrays, ..., anything Iterable.

Now I am getting greedy. I am not sure if the maxBy function was around in 2010 when Joey wrote his post, but these days we can simply write

  def longestWord(words: Iterable[String]) = 
    words.zipWithIndex.maxBy(_._1.length)

which is super short and very readable. If you don't like the tuple notation then this version is a bit more explicit and the most readable, I believe:

  def longestWord(words: Iterable[String]) = 
    words.zipWithIndex.maxBy{case (word,_) => word.length}

Of course, if you only want the longest word (and not its index) words.maxBy(_.length) will do the job.
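For comparison, the equivalent in Python is just as compact: max with a key function, and enumerate in place of zipWithIndex when the index is needed. Like reduceLeft above, Python's max returns the first maximal element on ties:

```python
words = ["a", "abc", "ab"]

# longest word only
print(max(words, key=len))                            # abc

# longest word together with its index
idx, word = max(enumerate(words), key=lambda p: len(p[1]))
print(word, idx)                                      # abc 1
```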