# Markov chain with extras?

Discussion in 'Python' started by temp@mclift.fsnet.co.uk, May 16, 2005.

1. ### Guest

Hi All,

Could someone show me how to do this?

I want to generate a list using a Markov chain; however, as well as
using the previous two items in the list to decide the current choice,
I want the decision to also be dependent on an item at the current
position in another list.

I hope this explains things clearly enough.

Thanks,

Malcolm

, May 16, 2005

2. ### George Sakkis (Guest)

> Hi All,
>
> Could someone show me how to do this?
>
> I want to generate a list using a Markov chain; however, as well as
> using the previous two items in the list to decide the current choice,
> I want the decision to also be dependent on an item at the current
> position in another list.
>
> I hope this explains things clearly enough.
>
> Thanks,
>
> Malcolm

What's wrong with explicitly keeping the index of the current position,
say j? Then you can index the previous two items as chain[j-1] and
chain[j-2], and the item in the other list as otherchain[j]. If you're
talking about something else, you'll need to clarify the problem further.
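As an illustrative sketch of that indexing (the transition rule here is completely made up; only the j-1, j-2, otherchain[j] indexing pattern is the point):

```python
from random import random

def next_item(prev1, prev2, other):
    # Made-up transition rule: the probability of choosing 1 depends on
    # the previous two items and the item from the other list.
    p1 = 0.25 + 0.2 * prev1 + 0.2 * prev2 + 0.2 * other
    return 1 if random() < p1 else 0

chain = [0, 1]                       # seed with two items
otherchain = [1, 0, 1, 1, 0, 1, 0]   # the list you condition on
for j in range(2, len(otherchain)):
    chain.append(next_item(chain[j - 1], chain[j - 2], otherchain[j]))
print(chain)
```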

George

George Sakkis, May 16, 2005

3. ### Guest

Hi,

I think it's more easily explained as two linked Markov chains: given
one list, the other can be generated.

Thanks,

Malcolm

, May 17, 2005
4. ### tiissa (Guest)

wrote:
> I think it's more easily explained as two linked Markov chains: given
> one list, the other can be generated.

'Given one list' sounds like an observation (and this sounds like an
order-2 HMM).
But I'm not sure exactly what you want to do with your Markov chain. Do
you want the probability distribution at each time step, or some value?
In this case, I'd do something like:

#### hmm.py
from random import random

def St_given_St1_St2_Ot(St1, St2, Ot):
    p0 = 0.5*St1*St2 + 0.25*St1 + 0.125*St2 + 0.0625
    return Ot and p0 or 1 - p0

def decision(p):
    return (random() > p) and 1 or 0

def hmm(LO, LS):
    for ot in LO[2:]:
        p = St_given_St1_St2_Ot(LS[-1], LS[-2], ot)
        LS.append(decision(p))
    return LS

LO1 = (-1, -1, 1, 0, 0, 0, 1, 1)
LS1 = [1, 0]
print ''.join(map(str, hmm(LO1, LS1)))

LO2 = (0,)*50
LS2 = [1, 1]
print ''.join(map(str, hmm(LO2, LS2)))

LO3 = (1,)*50
LS3 = [1, 1]
print ''.join(map(str, hmm(LO3, LS3)))
####

Which gives hours of fun looking at random numbers:

$ python hmm.py
10111101
11111111111111111111111111100000000000001000000000
11011011011011011011010110011011011011011010101101
$ python hmm.py
10101011
11111111111111111111111111111111000000000000000111
11011010011010100011010110110011001101101101011010
$ python hmm.py
10100011
11111111111111111111111111111111111111111111111111
11011010110011001101011011011101010110101101101011
$ python hmm.py
10111101
11101010000000000000000000000000000000000000000001
11011001101101011010110111010101010110110110011101
$ python hmm.py
10100011
11111111111111111111111111111111111111111111111111
11010110110110110011011010110011010011011011011010
$

Instead of generating the whole sequence, you can wrap it in an
iterator. And the observations list can also be an iterator (generated
with another chain if you like).
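As a rough sketch of that iterator version (reusing the same made-up conditional from hmm.py above, with the observations supplied lazily):

```python
from random import random

def St_given_St1_St2_Ot(St1, St2, Ot):
    p0 = 0.5 * St1 * St2 + 0.25 * St1 + 0.125 * St2 + 0.0625
    return Ot and p0 or 1 - p0

def decision(p):
    return (random() > p) and 1 or 0

def hmm_iter(LO, LS):
    # Yield the two seed states, then one new state per observation.
    s2, s1 = LS[0], LS[1]
    yield s2
    yield s1
    for ot in LO:   # LO can be any iterable, even another generator
        s2, s1 = s1, decision(St_given_St1_St2_Ot(s1, s2, ot))
        yield s1

# Observations supplied lazily via a generator expression:
states = list(hmm_iter((0 for _ in range(10)), [1, 0]))
print(''.join(map(str, states)))
```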

HTH

tiissa, May 17, 2005
5. ### Guest

Hi Tiissa,

I want to use it for music. So given list 1 (melody), list 2 (chords)
could be generated by a Markov chain. Also, given the chords the melody
could be generated again by a chain.

I haven't had time to play around with your code, and as I've only been
studying Python for about six months, I don't quite understand what's
going on. It might already do what I want; I just need to think in
terms of notes and chords.

Thanks,

Malcolm

, May 18, 2005
6. ### tiissa (Guest)

wrote:
> I want to use it for music. So given list 1 (melody), list 2 (chords)
> could be generated by a Markov chain. Also, given the chords the melody
> could be generated again by a chain.

So, at each time step you want:
- chord(t) given melody(t-1), chord(t-1) and chord(t-2);
- melody(t) given melody(t-1) and chord(t).
(or the other way round)

Is that correct?
If so, one could write:

#### hmm2.py
from random import random

def decision(p):
    """Return a single value according to a probability distribution."""
    # trivial due to our binary variables
    return (random() > p) and 1 or 0

def ct_given_mt1_ct1_ct2(mt1, ct1, ct2):
    """Chord given past melody and chords."""
    p0 = 0.5*ct1*ct2 + 0.25*ct1 + 0.125*ct2 + 0.0625
    return mt1 and p0 or 1 - p0

def mt_given_mt1_ct(mt1, ct):
    """Melody given past melody and present chord."""
    return 0.1 + 0.5*mt1 + 0.3*ct

def timestep(chord, melody):
    """Choose next chord and melody."""
    chord.append(decision(
        ct_given_mt1_ct1_ct2(melody[-1], chord[-1], chord[-2])))
    melody.append(decision(mt_given_mt1_ct(melody[-1], chord[-1])))

chord = [0, 1]
melody = [0]

for i in range(20):
    timestep(chord, melody)

print "Chord:\t%s" % ''.join(map(str, chord[2:]))
print "Melody:\t%s" % ''.join(map(str, melody[1:]))
####

This generates a 0-1 string for each part according to the above
conditional distributions.
What's needed is to give melody and chord proper variables with proper
domains (instead of only binary values) and proper probability
distributions (and a proper decision method too, although a random draw
is fair enough).

Usually, specifying your distributions is the hard part (some people
actually cheat by having their programs learn these distributions).
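For instance, a non-binary sketch might look like this (the chord names and weights are entirely made up; the point is the weighted draw over an arbitrary finite domain, replacing the binary decision(p) above):

```python
import random

# Hypothetical domain and a made-up conditional distribution:
# weights for the next chord given only the previous chord.
CHORDS = ['C', 'F', 'G', 'Am']
NEXT_CHORD_WEIGHTS = {
    'C':  [1, 4, 4, 2],   # from C, favour F and G
    'F':  [4, 1, 4, 2],
    'G':  [5, 2, 1, 3],
    'Am': [3, 4, 3, 1],
}

def decision(values, weights):
    # Weighted draw over an arbitrary finite domain.
    total = sum(weights)
    r = random.random() * total
    for v, w in zip(values, weights):
        r -= w
        if r < 0:
            return v
    return values[-1]

chords = ['C']
for _ in range(7):
    chords.append(decision(CHORDS, NEXT_CHORD_WEIGHTS[chords[-1]]))
print(' '.join(chords))
```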

> I haven't had time to play around with your code and as I've only been
> studying python for about six months I don't quite understand what's
> going on. This might already do what I want, I just need to think in
> terms of notes and chords.

Doing Bayesian stuff myself, I'd first write mathematical expressions
for what I want; the implementation then follows. I wouldn't rush into
my favorite editor/Python shell before having written some nice
equations down.
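For the chord/melody setup above, those equations would just be the two conditionals already described in the code, something like:

```latex
P(c_t \mid m_{t-1}, c_{t-1}, c_{t-2}),
\qquad
P(m_t \mid m_{t-1}, c_t)
```

with the joint distribution being the product of these over all time steps, times a prior on the seed values.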

tiissa, May 18, 2005
7. ### Max M (Guest)

wrote:
> I want to use it for music. So given list 1 (melody), list 2 (chords)
> could be generated by a Markov chain. Also, given the chords the melody
> could be generated again by a chain.

I have this small module that can be used for Markov chains.

--

hilsen/regards Max M, Denmark

http://www.mxm.dk/

from random import random, randint
from bisect import bisect

class Randomizer:

    "Various stochastic methods"

    def inverse_scale(self, rng):
        "Returns a list of probabilities corresponding to 1/i in range"
        return [1.0 / (i + 1.0) for i in range(rng)]

    def n_scale(self, rng, n=2.0):
        "Returns a list of probabilities corresponding to i/n"
        n_scale = []
        value = 1.0
        for i in range(rng):
            value /= n
            n_scale.append(value)
        return n_scale

class Selector:

    "Returns random elements by probability according to their frequency"

    def __init__(self, frequencies, elements):
        self.sum = sum(frequencies)
        acumulated_frequencies = []
        acumulated_frequency = 0
        for frequency in frequencies:
            acumulated_frequency += frequency
            acumulated_frequencies.append(acumulated_frequency)
        self.decorated = zip(acumulated_frequencies, elements)
        self.decorated.sort()

    def get(self):
        "Randomly returns an element by probability"
        rnd = random() * self.sum
        index = bisect(self.decorated, (rnd, 0))
        return self.decorated[index][-1]

    def get_range(self, n):
        "Randomly returns a range of elements by probability"
        return [self.get() for itm in xrange(n)]

class MarkovIn:

    """
    Implements a Markov chain for arbitrary objects. The objects have
    the same rules as dictionary keys to be valid.
    """

    def __init__(self, key_size=1):
        """
        key_size: how many steps in the chain
        """
        self.key_size = key_size
        self.in_history = []
        self.chain = {}

    def _update_chain(self, obj):
        "Puts the object into the chain"
        # shortcuts
        ch = self.chain
        # update the chain
        key = tuple(self.in_history)
        stat = ch.setdefault(key, {})
        ch[key][obj] = stat.setdefault(obj, 0) + 1

    def _update_history(self, obj):
        his = self.in_history
        his.append(obj)
        if len(his) > self.key_size:
            his.pop(0)

    def add(self, obj):
        "Adds an object to the chain"
        self._update_chain(obj)
        self._update_history(obj)

    def reset(self):
        "Resets the history"
        self.in_history = []

class MarkovOut:

    """
    A Markov chain wrapper.
    Generates a "random walk" from a Markov chain.
    It is a separate object for performance reasons.
    """

    def __init__(self, markov):
        """
        markov: A populated MarkovIn object
        """
        self.markov = markov
        self.key_size = markov.key_size
        self.out_history = []
        # Set up a cache of selector objects
        selectors = {}
        ch = self.markov.chain
        for key in ch.keys():
            m = ch[key]
            selectors[key] = Selector(m.values(), m.keys())
        self.selectors = selectors

    def _update_history(self, obj):
        his = self.out_history
        his.append(obj)
        if len(his) > self.key_size:
            his.pop(0)

    def step(self):
        "A 'random' step from the chain, returns an object"
        his = self.out_history
        key = tuple(his)
        obj = self.selectors[key].get()
        # keep track of history
        self._update_history(obj)
        new_key = tuple(his)
        self.handle_step(obj)
        if not self.selectors.has_key(new_key):
            # reset history and start over
            self.out_history = []
            self.restart()

    def handle_step(self, obj):
        "Handles a single step"
        self.steps.append(obj)

    def do_walk(self, steps=1):
        "Returns a 'probable' walk"
        self.steps = []
        for itm in xrange(steps):
            self.step()

    def get_walk(self):
        "Returns the walk"
        return self.steps

    def _reset(self):
        "Resets the history"
        self.out_history = []

    def restart(self):
        "A hook for when a walk restarts"
        pass

if __name__ == '__main__':

    f = open('input.txt')
    text = f.read()
    lines = text.split('\n')

    mi = MarkovIn(1)
    for line in lines:
        mi.reset()
        words = line.split()
        for word in words:
            mi.add(word)

    class mo(MarkovOut):

        def __init__(self, markov_out):
            MarkovOut.__init__(self, markov_out)

        def restart(self):
            self.steps.append('\n\n')

    m = mo(mi)
    m.do_walk(100)
    print ' '.join(m.get_walk())
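Fed notes instead of words, the same counting-and-weighted-draw idea looks roughly like this (a compact, self-contained sketch with a made-up melody; the restart-on-dead-end behaviour mirrors MarkovOut):

```python
import random

# Count (prev, cur) -> next transitions from a made-up melody,
# then walk the resulting order-2 chain.
melody = ['C', 'E', 'G', 'E', 'C', 'E', 'G', 'C', 'E', 'C']

chain = {}
for a, b, c in zip(melody, melody[1:], melody[2:]):
    chain.setdefault((a, b), {}).setdefault(c, 0)
    chain[(a, b)][c] += 1

def step(key):
    # Weighted draw over the counted successors (like Selector.get()).
    stats = chain[key]
    r = random.random() * sum(stats.values())
    for note, count in stats.items():
        r -= count
        if r < 0:
            return note
    return note

walk = list(melody[:2])
for _ in range(10):
    key = tuple(walk[-2:])
    if key not in chain:          # dead end: restart from the seed
        walk.extend(melody[:2])
        key = tuple(walk[-2:])
    walk.append(step(key))
print(' '.join(walk))
```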

Max M, May 19, 2005
8. ### Guest

Hi Gentlemen,

First off, thanks for the work/time you've put into this - much
appreciated!
Let me play around with the code and I'll get back to you tomorrow.

Malcolm

, May 19, 2005