Optimization: Picking random keys from a dictionary and mutating values

blaine

Hey everyone,
Just a friendly question about an efficient way to do this. I have
a graph with nodes and edges (networkx is an amazing library, check it
out!). I also have a lookup table with weights for each edge. So:

weights[(edge1, edge2)] = .12
weights[(edge2, edge5)] = .53
weights[(edge5, edge1)] = 1.23
weights[(edge3, edge2)] = -2.34
etc.

I would like to take this weight table and subject it to evolutionary
mutations. So given a probability p (mutation rate), mutate edge
weights by some random value. In effect, if p is .25, choose 25% of
the edges at random and mutate each by some amount. So:
1. What's a good way to get the keys? I'm using a loop with
random.choice() at the moment.
2. Any comments on how to get a 'fair' mutation on an existing edge
value?

I'm currently doing something like this, which seems like it leaves
something to be desired.

import random
weights = generateweights()  # generates a table like the one above
p = 0.25
v = random.betavariate(2, 10)
num = int(v * len(weights))  # how many weights should we mutate?
keys = weights.keys()
for i in xrange(num):
    key = random.choice(keys)  # choose a single random key
    weights[key] = weights[key] * (random.random() * 5 - 1)  # is this a 'good' way to 'mutate' the value?

This is an evolutionary search, so 'mutate' in this sense relates to a
genetic algorithm, or perhaps gradient descent?
Thanks!
Blaine
 
Raymond Hettinger

1.  What's a good way to get the keys? I'm using a loop with
random.choice() at the moment.

Why not keep a separate list of keys and use random.sample()?
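
For example, a minimal sketch of that idea, assuming the weights table
and num from the original post (mutate_value is a hypothetical
placeholder for whatever mutation scheme you settle on):

import random

keys = list(weights)                  # keep a separate list of the keys
for key in random.sample(keys, num):  # num distinct keys, no repeats
    weights[key] = mutate_value(weights[key])  # placeholder mutation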


Raymond
 
Carl Banks

Hey everyone,
Just a friendly question about an efficient way to do this.

Friendly advance reminder: in many optimization problems, the
objective function is far more expensive to calculate than the
optimizing procedure. Your time is usually better spent optimizing
the objective function, or tuning the optimizer.

But what the hey.
I have
a graph with nodes and edges (networkx is an amazing library, check it
out!). I also have a lookup table with weights for each edge. So:

weights[(edge1, edge2)] = .12
weights[(edge2, edge5)] = .53
weights[(edge5, edge1)] = 1.23
weights[(edge3, edge2)] = -2.34
etc.

I would like to take this weight table and subject it to evolutionary
mutations. So given a probability p (mutation rate), mutate edge
weights by some random value. In effect, if p is .25, choose 25% of
the edges at random and mutate each by some amount. So:
1. What's a good way to get the keys? I'm using a loop with
random.choice() at the moment.

That's probably the best way.

You might be able to squeeze a little more speed out of it by using a
partial random shuffle as opposed to removing the item from the list.
This will minimize the amount of copying. Here's an example (without
error checking):

import random

def partial_random_shuffle(lst, nshuffled):
    for i in xrange(nshuffled):
        j = random.randrange(i, len(lst))
        t = lst[i]
        lst[i] = lst[j]
        lst[j] = t

The first nshuffled items of lst upon return will be the selected
items. Note: This might also be slower than removing items. It
should scale better, though.
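
A usage sketch, again assuming weights and num from the original post
(mutate_value is a hypothetical placeholder):

keys = list(weights)
partial_random_shuffle(keys, num)
for key in keys[:num]:  # the first num entries are the selected keys
    weights[key] = mutate_value(weights[key])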

2. Any comments on how to get a 'fair' mutation on an existing edge
value?

random.normalvariate is usually a good choice for selecting random
real-valued increments.

val = random.normalvariate(val,sigma)

where sigma, the standard deviation, is the typical size of the change
you want.
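
Putting the sampling and the Gaussian mutation together, a minimal
sketch (sigma = 0.1 is an arbitrary illustrative value, not a
recommendation):

import random

sigma = 0.1  # assumed; tune to the typical step size you want
keys = list(weights)
for key in random.sample(keys, num):
    weights[key] = random.normalvariate(weights[key], sigma)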

I'm currently doing something like this, which seems like it leaves
something to be desired.

import random
weights = generateweights()  # generates a table like the one above
p = 0.25
v = random.betavariate(2, 10)
num = int(v * len(weights))  # how many weights should we mutate?
keys = weights.keys()
for i in xrange(num):
    key = random.choice(keys)  # choose a single random key
    weights[key] = weights[key] * (random.random() * 5 - 1)  # is this a 'good' way to 'mutate' the value?

If you don't remove the chosen key from keys, there's a chance you'll
mutate the same key twice, which I doubt is what you want.
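
One sketch of that fix, replacing the loop above: pop each chosen key
so it can't be drawn again (note this consumes entries from the keys
list):

for i in xrange(num):
    j = random.randrange(len(keys))
    key = keys.pop(j)  # removing it guarantees each key is mutated at most once
    weights[key] = weights[key] * (random.random() * 5 - 1)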

This is an evolutionary search, so 'mutate' in this sense relates to a
genetic algorithm, or perhaps gradient descent?

A gradient descent method is not an evolutionary search and involves
no randomness (unless noise is added to the objective, which is one
possible way to attack a function with unimportant small-scale
features; in that case normalvariate would be the thing to use).
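
For what it's worth, a toy sketch of that noisy-objective idea (f, x,
and the sigma value here are all hypothetical):

import random

def noisy_objective(f, x, sigma=0.01):
    # add Gaussian noise to the objective to wash out unimportant
    # small-scale features; sigma is an illustrative choice
    return f(x) + random.normalvariate(0.0, sigma)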



Carl Banks
 
