Hey everyone,
I was hoping there are some people on the Python list who are
familiar with MDP (Modular Toolkit for Data Processing -
http://mdp-toolkit.sourceforge.net/).
I want to develop a very simple feed-forward network. The
network would consist of a few input neurons, some hidden neurons, and
a few output neurons. There is no learning involved. The network is
being used as a gene-selection network in a genetic simulator where we
are evolving the weights and connectivity.
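To make the setup concrete, here is roughly what the non-learning
forward pass looks like in plain numpy (the layer sizes, the tanh
activation, and the random weights are just placeholders; the real
weights come from the evolver):

import numpy as np

n_in, n_hid, n_out = 4, 6, 2                      # placeholder sizes
W_ih = np.random.uniform(-1, 1, (n_in, n_hid))    # input  -> hidden weights
W_ho = np.random.uniform(-1, 1, (n_hid, n_out))   # hidden -> output weights

def forward(x):
    # one pass through the fixed (non-learning) network
    h = np.tanh(np.dot(x, W_ih))
    return np.tanh(np.dot(h, W_ho))

y = forward(np.array([0.1, 0.5, -0.3, 0.9]))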
There are many different types of nodes listed as being supported,
but I can't figure out the best one to use for this case. In this
situation, we only want to iterate through the network X times. (In
the simplest version, with no cycles, this would mean that once the
output nodes are calculated there would be no additional calculations,
since the system would be stable and non-learning.) Node types are
listed at the bottom here: http://mdp-toolkit.sourceforge.net/tutorial.html#quick-start
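From what I can tell from the tutorial's section on writing your own
nodes, my best guess so far is a non-trainable custom node that just
applies a fixed weight matrix, one node per layer, chained in a Flow.
I'm not sure this is the intended approach; the weight shapes and the
tanh are placeholders:

import mdp
import numpy as np

class FixedWeightNode(mdp.Node):
    # non-trainable node that applies a fixed (evolved) weight matrix
    def __init__(self, weights, dtype=None):
        super(FixedWeightNode, self).__init__(input_dim=weights.shape[0],
                                              output_dim=weights.shape[1],
                                              dtype=dtype)
        self.weights = weights

    def is_trainable(self):
        return False   # nothing to learn, weights come from the evolver

    def _execute(self, x):
        # x is a 2D array with one observation per row
        return np.tanh(np.dot(x, self.weights))

w_ih = np.random.uniform(-1, 1, (4, 6))   # placeholder evolved weights
w_ho = np.random.uniform(-1, 1, (6, 2))
flow = mdp.Flow([FixedWeightNode(w_ih), FixedWeightNode(w_ho)])
out = flow.execute(np.array([[0.1, 0.5, -0.3, 0.9]]))

Does that sound like a reasonable way to use MDP here, or is there a
built-in node type better suited to fixed weights?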
In the more complex version, we would have the same model, but
instead of having straight connectivity all the way through, we would
add a few cycles in the hidden layer so that a few neurons feed
back into themselves on the next time step. This could also be
connected to a 'selector' layer that feeds back onto the hidden layer
as well. Since we are only running this a finite number of times, the
system would not spiral out into instability.
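For that cyclic case, the hand-rolled version I have in mind looks
roughly like this (W_hh holds the hidden-to-hidden cycles; the sizes,
the step count, and the zero initial state are all placeholders). I'm
not sure whether an MDP flow can express the feedback, or whether I
would keep that loop outside the flow:

import numpy as np

W_ih = np.random.uniform(-1, 1, (4, 6))   # input  -> hidden
W_hh = np.random.uniform(-1, 1, (6, 6))   # hidden -> hidden (the cycles)
W_ho = np.random.uniform(-1, 1, (6, 2))   # hidden -> output

def run(x, steps):
    # iterate the network a fixed number of time steps
    h = np.zeros(W_hh.shape[0])
    for _ in range(steps):
        h = np.tanh(np.dot(x, W_ih) + np.dot(h, W_hh))
    return np.tanh(np.dot(h, W_ho))

y = run(np.array([0.1, 0.5, -0.3, 0.9]), steps=5)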
Any suggestions for which node types to use, or possibly what other
libraries would be helpful? I realize that, due to the relative
simplicity of this network, I could hand-code it from scratch. MDP
just looks extremely handy and efficient, and I'd like to use it if
possible.
Simple Network:
(H is interconnected fully with the Input and Output layer)
I -> H -> O
I -> H -> O
I -> H -> O
Cycled Network (Trying to show that the first hidden neuron is
connected back to itself)
     v---|
I -> H -> O
I -> H -> O
I -> H -> O
Complex Network (Trying to show that the first hidden neuron is
connected to another hidden neuron S that connects back to the input
of H. S would be interconnected with the other hidden neurons as
well)
     v--- S <--|
I -> H -> O
I -> H -> O
I -> H -> O
Thanks!
-Blaine