VHDL code

Rash

Hi all,
I am from a CS background, and I am currently doing my project in VHDL. It is
something like designing a face recognition chip using neural networks. I got
some best weights from the MATLAB neural network toolbox. The thing now is to
store these weight values in a RAM.
Can anyone help me with how I can store these weight values in RAM, what size
I need, and how to code it in VHDL?
 
JeppeM53

Hi Rash

I believe you're planning to run the recognition in parallel -
but you must realize that RAM memories normally only allow you to
access one word/byte at a given moment.
Inside an FPGA (especially the Xilinx Spartan and Virtex families) you can
find Block RAMs (each block = 16 kbit).
They can provide data on 1-, 2-, 4-, 9-, 18- and 36-bit-wide data
busses; moreover, they are dual-ported.

If you put all those Block RAMs in parallel, you will be able to process
a quite large portion of your neural network in parallel (I believe).

If you're able to define the RAM as distributed memory (LUT-based), you will
come one step closer to a truly parallel circuit, but the hardware cost will
be huge.
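
To make that a bit more concrete, here is a rough sketch (not a verified design; the 1024 x 8 size, the entity name WEIGHT_RAM and the port names are only placeholders) of a weight RAM with one write port and one independent read port. Because the read is clocked, synthesis tools will normally map it onto a Block RAM:

Code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity WEIGHT_RAM is
	port(clk     : in  std_logic;
	     wr_en   : in  std_logic;
	     wr_addr : in  std_logic_vector(9 downto 0);   -- 1024 weights (placeholder depth)
	     rd_addr : in  std_logic_vector(9 downto 0);
	     di      : in  std_logic_vector(7 downto 0);   -- 8-bit weight (placeholder width)
	     do      : out std_logic_vector(7 downto 0));
end WEIGHT_RAM;

architecture Behave of WEIGHT_RAM is
type ram_type is array(0 to 1023) of std_logic_vector(7 downto 0);
signal RAM_0 : ram_type;
begin

process(clk)
begin
	if rising_edge(clk) then
		if wr_en = '1' then
			RAM_0(conv_integer(unsigned(wr_addr))) <= di;   -- write port
		end if;
		do <= RAM_0(conv_integer(unsigned(rd_addr)));       -- registered (synchronous) read port
	end if;
end process;

end Behave;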

hope you found this useful

Jeppe
 
To infer a LUT-based RAM:
Each slice has two logic cells, and each logic cell has a LUT (a 16-word x 1-bit SRAM).

Code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity LUT_RAM is
	port(wr_en, clk : in  std_logic;
	     addr       : in  std_logic_vector(3 downto 0);  -- 16 entries -> 4 address bits
	     di         : in  std_logic;
	     do         : out std_logic);
end LUT_RAM;

architecture Behave of LUT_RAM is
type bit_array is array(0 to 15) of std_logic;
signal RAM_0 : bit_array;
begin

-- synchronous write
process(clk)
begin
	if rising_edge(clk) then
		if wr_en = '1' then
			RAM_0(conv_integer(unsigned(addr))) <= di;
		end if;
	end if;
end process;

-- asynchronous read: this is what lets the tools map it to distributed (LUT) RAM
do <= RAM_0(conv_integer(unsigned(addr)));

end Behave;

Depending on the RAM size you want, change

Code:
addr: in std_logic_vector(3 downto 0) -> std_logic_vector(...);
di  : in std_logic   -> std_logic_vector ...;
do  : out std_logic  -> std_logic_vector ...
and
Code:
type bit_array is array(0 to 15) of std_logic; -> type bit_array is array(0 to Size-1) of std_logic_vector ...;
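
Putting those changes together, a sketch of the same idea widened to, say, 32 words of 8 bits (the sizes and the entity name LUT_RAM_VEC are only placeholders; pick whatever your weights need):

Code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity LUT_RAM_VEC is
	port(wr_en, clk : in  std_logic;
	     addr       : in  std_logic_vector(4 downto 0);   -- 32 entries (placeholder)
	     di         : in  std_logic_vector(7 downto 0);   -- 8-bit word (placeholder)
	     do         : out std_logic_vector(7 downto 0));
end LUT_RAM_VEC;

architecture Behave of LUT_RAM_VEC is
type word_array is array(0 to 31) of std_logic_vector(7 downto 0);
signal RAM_0 : word_array;
begin

-- synchronous write
process(clk)
begin
	if rising_edge(clk) then
		if wr_en = '1' then
			RAM_0(conv_integer(unsigned(addr))) <= di;
		end if;
	end if;
end process;

-- asynchronous read keeps it in distributed (LUT) RAM
do <= RAM_0(conv_integer(unsigned(addr)));

end Behave;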

If you want to pre-load the RAM:

Code:
signal RAM_0: bit_array := ('0','0','0','0','0','0','0','0','0','0','0','0','0','0','0','0');  -- one value per entry (16 here)
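
Since your weights come out of MATLAB and presumably never change on the chip, another option is to put the values straight into the initial value of the array; most synthesis tools will pre-load block/distributed RAM from it, and if you never write to it you effectively get a ROM. A rough sketch, with made-up hex values standing in for the real weights:

Code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity WEIGHT_ROM is
	port(clk  : in  std_logic;
	     addr : in  std_logic_vector(2 downto 0);   -- only 8 weights, as an illustration
	     do   : out std_logic_vector(7 downto 0));
end WEIGHT_ROM;

architecture Behave of WEIGHT_ROM is
type word_array is array(0 to 7) of std_logic_vector(7 downto 0);
-- initial contents: replace these placeholder hex values with your real weights
signal RAM_0 : word_array := (x"1A", x"05", x"F3", x"7C",
                              x"00", x"B2", x"4E", x"91");
begin

process(clk)
begin
	if rising_edge(clk) then
		do <= RAM_0(conv_integer(unsigned(addr)));   -- registered read; never written, so it is a ROM
	end if;
end process;

end Behave;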
Here is the code that I'm using for a quad-port RAM with independent write and read addresses, built from Block RAM.
Here is the link to the code:
www.velocityreviews.com/forums/t665498-quad-port-ram.htm
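
As a general illustration of that idea (this is my own sketch, not necessarily what the linked code does; names and sizes are placeholders): the usual way to get extra read ports out of Block RAMs is to keep identical copies of the data that are written together but read at independent addresses.

Code:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity MULTI_READ_RAM is
	port(clk                : in  std_logic;
	     wr_en              : in  std_logic;
	     wr_addr            : in  std_logic_vector(9 downto 0);
	     di                 : in  std_logic_vector(7 downto 0);
	     rd_addr0, rd_addr1 : in  std_logic_vector(9 downto 0);
	     do0, do1           : out std_logic_vector(7 downto 0));
end MULTI_READ_RAM;

architecture Behave of MULTI_READ_RAM is
type ram_type is array(0 to 1023) of std_logic_vector(7 downto 0);
signal copy0, copy1 : ram_type;   -- two identical copies of the same data
begin

process(clk)
begin
	if rising_edge(clk) then
		if wr_en = '1' then
			copy0(conv_integer(unsigned(wr_addr))) <= di;   -- both copies written together
			copy1(conv_integer(unsigned(wr_addr))) <= di;
		end if;
		do0 <= copy0(conv_integer(unsigned(rd_addr0)));     -- each copy serves its own read address
		do1 <= copy1(conv_integer(unsigned(rd_addr1)));
	end if;
end process;

end Behave;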
 
Rash

Hi Jeppe,
Thank you for the useful information. Do you have any idea how to
construct a RAM in Simulink?


Thanks
Rash
 
Tricky

In the Altera Stratix families, the RAMs are 512 bit (1- to 18-bit words),
4 kbit (1- to 36-bit words) and 512 kbit (8- to 144-bit words) in size, and you
get all three types on each chip.
 
