valentin tihhomirov
Hello,
What if you clock one flip-flop by a std_logic CLK and another by
to_bit(CLK)? What if another clock is to_stdulogic(to_bit(std))? In that
case we have a std-to-bit-to-std converter on the clock line. Any VHDL
hardware engineer has a feeling that the conversion is redundant and
will be "optimized out" by the synthesizer. In my case the intermediate
value is of a tri-valued type instead of bit, but the idea is the same.
Below is code that converts a std_logic CLK into a multi-valued type and
then makes the inverse conversion. The clock never actually takes the
third value; the conversion is just a convenient way to pass a signal
from a two-valued circuit into a multi-valued one whose clock is also
multi-valued for convenience. During logic optimization the synthesizer
should replace the do-nothing converters with a plain wire, so the
behaviour must be as if there were a single clock net.
To demonstrate the equivalence of the original CLK and the final bitCLK,
I put two registers in a pipeline and expect the following register Q to
reproduce the leading D with a single clock cycle of delay.
Unfortunately, all the RTL simulators I tried (Symphony, Modeltech and
my favourite Active-HDL) agree on something different: Q fetches the
same value as D simultaneously, without the one-cycle shift. That is
because D is updated before the clock event reaches Q (I suspect the
converters delay the evaluation of the clock at Q by two delta cycles).
Nevertheless, the synthesizer does not disappoint me: XST removes the
unnecessary conversion functions and gives the implementation the
desired pipeline behaviour.
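To make my suspicion concrete, here is a sketch (reusing the
declarations from the listing below; the delta counts are my assumption)
of how each concurrent signal assignment adds one delta cycle on the
converted clock path:

```vhdl
-- Sketch: at time T the original CLK rises (delta 0).
-- Each concurrent signal assignment schedules its target one
-- delta cycle later:
triCLK <= BIT_TO_TRIVAL(to_bit(CLK)); -- triCLK rises at T, delta 1
bitCLK <= To_bit(triCLK);             -- bitCLK rises at T, delta 2
-- D is clocked directly by CLK, so SQUARE_GEN updates it at T,
-- delta 1, i.e. before the process waiting on bitCLK resumes at
-- delta 2; that process therefore samples the already-updated D.
```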
The experiment reveals that the simulators 1) refuse to model the
flip-flop behaviour (which requires fetching the value present at the
register input in the moment preceding the CLK rising edge) and 2) do
not aim to predict the behaviour of the synthesized hardware.
Nevertheless, the simulators have no difficulty handling the widely used
std-to-bit conversion:
bitCLK <= std_logic_1164.to_bit(stdlogicCLK);
process begin wait until bitCLK = '1'; ...
It is the deceptive success of this simple sample that misled me in my
design. Can you explain why the simulators balk at my conversion?
I always have trouble navigating the LRM. Does it require that
synchronous clocks be the same wire (not just logically identical)? I
have resolved the issue by balancing the delays (the clock of the first
register is also null-converted), but this looks fragile, and I do not
understand why no balancing is needed in the case of the std-to-bit
conversion. Or am I mistaken and the latter is also unreliable?
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity BIST is
  port (
    CLK : in  std_logic;
    Q   : out std_logic
  );
end entity;

architecture RTL of BIST is
  type TRIVAL is ('U', '0', '1');

  function BIT_TO_TRIVAL(b : bit) return TRIVAL is -- returns the TRIVAL equivalent of a bit
  begin
    if b = '0' then return '0';
    else return '1';
    end if;
  end function;

  signal triCLK : TRIVAL;
  signal D      : std_logic := '0';
begin

  SQUARE_GEN: process
  begin
    wait until CLK = '1';
    D <= not D;
  end process;

  triCLK <= BIT_TO_TRIVAL(to_bit(CLK));

  TRIVAL_CLOCKED: block
    signal bitCLK : bit;
    function To_bit(a : TRIVAL; umap : bit := '0') return bit is
    begin
      case a is
        when '0'    => return '0';
        when '1'    => return '1';
        when others => return umap;
      end case;
    end function;
  begin
    bitCLK <= To_bit(triCLK); -- TRIVAL clock to bit
    process begin
      --wait until CLK = '1';   -- this is ok
      wait until bitCLK = '1';  -- this causes problems
      Q <= D;
    end process;
  end block;

end RTL;
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity BIST_TB is
end entity;

architecture BEH of BIST_TB is
  signal CLK : std_logic := '0';
begin
  process begin
    loop
      CLK <= not CLK; wait for 50 ns;
    end loop;
  end process;

  BIST_U: entity work.BIST
    port map (CLK);
end architecture;
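For reference, the balancing workaround I mentioned looks roughly like
this (a sketch; triCLK2 and balancedCLK are names I made up, and it
assumes the To_bit overload for TRIVAL is moved into the architecture
declarative region). The point is that the delta delay depends on the
number of signal assignments in the chain, not on how many conversion
functions are nested, so the first register's clock must also pass
through two assignments:

```vhdl
-- In architecture RTL: route SQUARE_GEN's clock through the same
-- two-assignment conversion chain, so D and Q see clock edges on
-- the same delta cycle as bitCLK.

-- declarations:
signal triCLK2     : TRIVAL;
signal balancedCLK : bit;

-- concurrent statements:
triCLK2     <= BIT_TO_TRIVAL(to_bit(CLK)); -- delta 1, like triCLK
balancedCLK <= To_bit(triCLK2);            -- delta 2, like bitCLK

SQUARE_GEN: process
begin
  wait until balancedCLK = '1'; -- now aligned with bitCLK
  D <= not D;
end process;
```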
Thank you for participating.