Site Home   Archive Home   FAQ Home   How to search the Archive   How to Navigate the Archive   
Compare FPGA features and resources   

Threads starting:
1994JulAugSepOctNovDec1994
1995JanFebMarAprMayJunJulAugSepOctNovDec1995
1996JanFebMarAprMayJunJulAugSepOctNovDec1996
1997JanFebMarAprMayJunJulAugSepOctNovDec1997
1998JanFebMarAprMayJunJulAugSepOctNovDec1998
1999JanFebMarAprMayJunJulAugSepOctNovDec1999
2000JanFebMarAprMayJunJulAugSepOctNovDec2000
2001JanFebMarAprMayJunJulAugSepOctNovDec2001
2002JanFebMarAprMayJunJulAugSepOctNovDec2002
2003JanFebMarAprMayJunJulAugSepOctNovDec2003
2004JanFebMarAprMayJunJulAugSepOctNovDec2004
2005JanFebMarAprMayJunJulAugSepOctNovDec2005
2006JanFebMarAprMayJunJulAugSepOctNovDec2006
2007JanFebMarAprMayJunJulAugSepOctNovDec2007
2008JanFebMarAprMayJunJulAugSepOctNovDec2008
2009JanFebMarAprMayJunJulAugSepOctNovDec2009
2010JanFebMarAprMayJunJulAugSepOctNovDec2010
2011JanFebMarAprMayJunJulAugSepOctNovDec2011
2012JanFebMarAprMayJunJulAugSepOctNovDec2012
2013JanFebMarAprMayJunJulAugSepOctNovDec2013
2014JanFebMarAprMayJunJulAugSepOctNovDec2014
2015JanFebMarAprMayJunJulAugSepOctNovDec2015
2016JanFebMarAprMayJunJulAugSepOctNovDec2016
2017JanFebMarAprMayJunJulAugSepOctNovDec2017
2018JanFebMarAprMayJunJulAugSepOctNovDec2018
2019JanFebMarAprMayJunJulAugSepOctNovDec2019
2020JanFebMarAprMay2020

Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

Custom Search

Messages from 77125

Article: 77125
Subject: Using EDK libraries in ISE
From: "Harish" <harish.vutukuru@gmail.com>
Date: 23 Dec 2004 17:40:38 -0800
Links: << >>  << T >>  << A >>
Hello all,

How can we simulate the EDK IP cores in ISE? I created a new project in
ISE, copied in the VHDL files that come with EDK, and tried to synthesize
them. However, synthesis failed because the library referred to in the
design was not visible to ISE. I'm trying to synthesize an OPB ZBT
controller. Can anyone tell me how to overcome this?

Thanks


Article: 77126
Subject: Re: Using low-core-voltage devices in industrial applications
From: Jim George <"jimgeorge at softhome dot net">
Date: Thu, 23 Dec 2004 19:49:02 -0700
Links: << >>  << T >>  << A >>
eliben@gmail.com wrote:
> Hello,
> 
> We're employing FPGAs in industrial conditions
> (wide temperature range, noise). Currently we
> are using ACEX1K. Some people are reluctant to
> move to the new technologies (Stratix, Stratix II)
> because of their low core voltage.
> 
> Are there any articles about the reliability of
> various FPGAs in industrial applications? Any
> information about the lower reliability of 1.8
> and 1.5 volt core devices?
> 
> Tx
> 

I think this question is more about the worse noise margins of 
low-voltage devices than their MTBF. I've heard the same question 
asked by designers of military products about 1.5V FPGAs like the V2Pro. 
Any info?

-Jim

Article: 77127
Subject: Re: Virtex II Pro Memory Questions
From: "Sandeep Kulkarni" <sandeep@insight.memec.co.in>
Date: Fri, 24 Dec 2004 09:06:41 +0530
Links: << >>  << T >>  << A >>
Hello Voxer,
As suggested by Andi, the DSOCM would be ideal for you. You might want to
take a look at the UltraController reference design to see how this bus has
been cleverly used.
You can implement a dual-port memory with one port connected to the DSOCM
and the other to your logic.

Sandeep
"Andi" <00andi@web.de> wrote in message news:ee8abbb.0@webx.sUN8CHnE...
> Yes, you can use BRAM for both the FPGA logic and the PPC. You can connect
one port to a process outside the EDK PPC system and the other port to the
PLB or OPB bus on the PPC system. If you want the PPC to work fast on that
data, then use the DSOCM interface to connect a BRAM to the PPC. That
interface is quite fast and you do not need the PLB or OPB bus, so no bus
traffic.



Article: 77128
Subject: mb-gcc bug ?
From: Rudolf Usselmann <russelmann@hotmail.com>
Date: Fri, 24 Dec 2004 14:15:51 +0700
Links: << >>  << T >>  << A >>


Looks like mb-gcc doesn't handle overflows correctly:

Xuint32 mask, bits;
mask = 0xffffffff << bits;

The above code works fine for 'bits' values 0..31.  When bits
is larger than 31, e.g. 32, I would expect 'mask' to be
0x00000000.  BUT, I get 0xffffffff.

There is an easy workaround - if you know about the problem.
Or is my expectation wrong?

rudi               
=============================================================
Rudolf Usselmann,  ASICS World Services,  http://www.asics.ws
Your Partner for IP Cores, Design, Verification and Synthesis

Article: 77129
Subject: Timing simulation : BRAM simulation proble
From: fwj_733 <fwj@nmrs.ac.cn>
Date: Fri, 24 Dec 2004 01:08:11 -0800
Links: << >>  << T >>  << A >>
I use a block RAM in my FPGA design. When I use ISE 5.2 to generate the
post-route simulation model and ModelSim SE 5.6 to simulate, it works well.
But after upgrading to ISE 6.2 there are a lot of problems. ModelSim issues
many warnings like these:

** Warning: /X_RAMB4_S16 HOLD High VIOLATION ON ADDR(2) WITH RESPECT TO CLK;
   Expected := 352578022931546.37 ns; Observed := 7.894 ns; At : 11.564 ns
   Time: 11564 ps Iteration: 1 Instance: /synth_demux_synth_tb_vhd_tb/uut/fir_i_b6096
** Warning: /X_RAMB4_S16 HOLD High VIOLATION ON DI(3) WITH RESPECT TO CLK;
   Expected := 352578950644482.37 ns; Observed := 11.919 ns; At : 1095.589 ns
   Time: 1095432 ps Iteration: 1 Instance: /synth_demux_synth_tb_vhd_tb/uut/fir_i_b6096

The warnings are very confusing: how can a signal be expected to become valid
only after 352578950644482.37 ns? All the block RAM outputs turn to "X". I
checked the SDF file generated by ISE 6.2; its setup/hold entry is
(SETUPHOLD (posedge DI[4]) (posedge CLK) (1042:1042:1042) (0:0:0))(4135:4135:4135)).
I think the expected hold time should be HOLD high or HOLD low. ModelSim must
have calculated it wrong, but as it works well with ISE 5.2 output, it seems
to have something to do with ISE 6.2. Could anyone provide help? Thank you
all. Best regards.

Article: 77130
Subject: Re: PCI doubt
From: "Vitus" <vitus.maps@inbox.ru>
Date: Fri, 24 Dec 2004 13:42:30 +0300
Links: << >>  << T >>  << A >>

> - nearly all the pci cards that an end-user like me uses, are target
> only devices. (there is another category called bus-maters, which i was
> unaware of.)
>
> so naturally i thought - if all devices are targets then the master
> must be the arbitar; and as it initiates the transaction, i could call
> it as an initiator!
>
> the right thing is (plz verify) -
> 1. nearly all devices are target-only.

I think there exist as many masters as slaves! In any case, how can you - an
end user - tell whether a given card (of all the cards in the world) is a
master or a slave?

> 2. bus-maters are also pci cards that can initiate transaction.

Yes. And the arbiter grants the bus to the master, or does not.





Article: 77131
Subject: Re: mb-gcc bug ?
From: "Jon Beniston" <jon@beniston.com>
Date: 24 Dec 2004 04:05:23 -0800
Links: << >>  << T >>  << A >>
Hi Rudi,

I'm afraid your expectations are wrong. This is undefined behaviour in C. For
example, if you try:

int d;
d << 33;

The compiler will correctly give you the warning:
warning: left shift count >= width of type

Cheers,
JonB


Article: 77132
Subject: Re: PCI doubt
From: "Shreyas Kulkarni" <shyran@gmail.com>
Date: 24 Dec 2004 16:19:39 -0800
Links: << >>  << T >>  << A >>
I was told that cards with bus-mastering support mention it as a feature in
their manuals. As far as I'm concerned, I haven't come across any card that
has this type of feature listed, except that I have read somewhere that some
SCSI cards have bus-mastering capability.

That's why I think many PCI devices are target-only.

Will somebody shed some light on this?

Also, if that is wrong, then will somebody please mention currently available
bus-master devices which are used by end users?


Article: 77133
Subject: Re: PCI doubt
From: pdstroud@gmail.com
Date: 24 Dec 2004 17:36:06 -0800
Links: << >>  << T >>  << A >>
Before ethernet was embedded in the chipset, NICs were PCI cards. The
good ones were PCI bus masters. That's one example. Others include
bridges and modems. Any application that would prefer to move data
without bothering the CPU (that is, DMA) could be a master.

Target devices are generally more prevalent in the PC but PCI is used
for much more than a system 'local bus'.
http://www.osronline.com/lists_archive/ntdev/thread2488.html


Article: 77134
Subject: Re: XST Question
From: Sriraman <sriraman_sri@rediffmail.com>
Date: Fri, 24 Dec 2004 21:12:32 -0800
Links: << >>  << T >>  << A >>
We are using a 20-input (each 8 bits wide) CORE Generator-instantiated MUX operating at 160 MHz on a Virtex-E device. When placed and routed in ISE 6.2i, we get a timing error reporting that the mux select lines have a fanout problem. We tried duplicating the net source, but the problem persists. Any help please?

Article: 77135
Subject: Re: EDK Bug ?
From: "avrbasic" <avrbasic@hotmail.com>
Date: Sat, 25 Dec 2004 11:49:55 +0100
Links: << >>  << T >>  << A >>
Dear Mr Usselmann,

the IP core you included in your posting is originally from the uClinux MB
Vanilla distribution from John Williams. This IP core is definitely usable
in EDK 6.2 at least. EDK 6.3 has changed the .MPD parsing rules a little, so
for 6.3 it should be modified slightly. However, I am 100% positive that
John Williams has updated MBvanilla and the related IP cores for 6.3, as I
know that he works tightly with the Xilinx EDK group in order to work out
the remaining issues that until today still prevent the Xilinx EDK GCC
tools from successfully compiling uClinux.

As for the problem with missing generics, here I can only suggest one
generic rule for EDK - you simply have to find a way to make EDK happy. One
way would be to add a dummy generic, let's say C_FAMILY, to the core that
causes the problem with the empty generic section.

hope it helps,
Antti Lukats


"Rudolf Usselmann" <russelmann@hotmail.com> wrote in message
news:cqe4d5$mkf$1@nobel.pacific.net.sg...
>
>
> When creating custom IP Cores for inclusion in to EDK projects, I stumbled
> across the following problem:
>
> If a custom core (in VHDL) does not have a generic section
> as for example:
>
> -----------------------------------------------------------
> library IEEE;
> use IEEE.STD_LOGIC_1164.ALL;
> use IEEE.STD_LOGIC_ARITH.ALL;
> use IEEE.STD_LOGIC_UNSIGNED.ALL;
> library UNISIM;
> use UNISIM.VComponents.all;
>
> entity my_inverter is
> port (  I       : in STD_LOGIC;
>         O       : out STD_LOGIC
>      );
> end my_inverter;
>
> architecture arch_my_inverter of my_inverter is
>
> begin
>         O <= not I;
> end arch_my_inverter;
> -----------------------------------------------------------
>
> EDK generates a wrapper WITH an EMPTY generic section:
>
> -----------------------------------------------------------
> library IEEE;
> use IEEE.STD_LOGIC_1164.ALL;
>
> library UNISIM;
> use UNISIM.VCOMPONENTS.ALL;
>
> library my_inverter;
> use my_inverter.all;
>
> entity rst_inverter_wrapper is
>   port (
>     I : in std_logic;
>     O : out std_logic
>   );
> end rst_inverter_wrapper;
>
> architecture STRUCTURE of rst_inverter_wrapper is
>
>   component my_inverter is
>     generic (
>
>     );
>     port (
>       I : in std_logic;
>       O : out std_logic
>     );
>   end component;
>
> begin
>
>   rst_inverter : my_inverter
>     generic map (
>
>     )
>     port map (
>       I => I,
>       O => O
>     );
>
> end architecture STRUCTURE;
> -----------------------------------------------------------
>
> And then complains when compiling the code it just generated:
>
> Compiling vhdl file
> /home/rudi/projects/system/synthesis/hdl/rst_inverter_wrapper.vhd in
> Library work.
> ERROR:HDLParsers:164 -
> /home/rudi/projects/system/synthesis/hdl/rst_inverter_wrapper.vhd Line 25.
> parse error, unexpected CLOSEPAR, expecting IDENTIFIER
> -->
>
>
> Is this a bug or am I doing something wrong ?
>
> This occurs with EDK 6.2 and EDK 6.3 (both latest patch levels)
> on a linux system.
>
> Thanks !
> rudi
> =============================================================
> Rudolf Usselmann,  ASICS World Services,  http://www.asics.ws
> Your Partner for IP Cores, Design, Verification and Synthesis



Article: 77136
Subject: Re: mb-gcc bug ?
From: "avrbasic" <avrbasic@hotmail.com>
Date: Sat, 25 Dec 2004 11:57:53 +0100
Links: << >>  << T >>  << A >>
Dear Mr Usselmann,

the behaviour you describe is the expected behaviour, for the following
reasons:

if you compile something like

    var = var1 << var2;

with a hardware barrel shifter, then the compiler generates (must generate!)
hardware barrel-shift instructions.

As per the MicroBlaze ISA reference, the barrel-shift instructions only use
the lower 5 bits of the shift amount, truncating the upper bits. Thus a
shift of 32 would be a 0-bit shift if executed with a hardware shift.

If you compile with no hardware barrel shifter, then the generated code
should be 100% functionally the same as with hardware shifts, and if the
compiler interprets it that way then it is correct behaviour.

Antti Lukats



"Rudolf Usselmann" <russelmann@hotmail.com> wrote in message
news:cqgfog$dtc$1@nobel.pacific.net.sg...
>
>
> Looks like mb-gcc doesn't handle overflows correctly:
>
> Xuint32 mask, bits;
> mask = 0xffffffff << bits;
>
> The above code works fine for 'bits' value 0..31.  When bits
> is larger than 31, e.g. 32, I would expect for 'mask' to be
> 0x00000000. BUT, I get 0xffffffff.
>
> There is an easy work around - if you know about the problem.
> Or is my expectation wrong ?
>
> rudi
> =============================================================
> Rudolf Usselmann,  ASICS World Services,  http://www.asics.ws
> Your Partner for IP Cores, Design, Verification and Synthesis



Article: 77137
Subject: Re: timer-interrupt not recognized
From: Andi <00andi@web.de>
Date: Sat, 25 Dec 2004 03:03:46 -0800
Links: << >>  << T >>  << A >>
Did you try to use the ppc internal timer?

Article: 77138
Subject: Re: Xilinx Christmas present: EDK 6.3 !
From: "avrbasic" <avrbasic@hotmail.com>
Date: Sat, 25 Dec 2004 13:49:25 +0100
Links: << >>  << T >>  << A >>
ChipScope doesn't attach to the LMB; a workaround is possible but too time
consuming. No idea what the processor was doing during the idle cycles, but
it cannot be software overhead, for sure.

Anyway, I finally have all EDK systems up and running again, so it isn't a
problem for me any more. But if such problems happen with every minor EDK
update, then it's a headache of course.

Antti Lukats

"Matthew Ouellette" <nobody@nobody.com> wrote in message
news:cpqga4$bsn5@xco-news.xilinx.com...
> Antti,
>
> Can you add the LMB signals to your ChipScope trace?  What is the
> processor doing during these 512 cycles of inactivity in between GPIO
> writes?  One way to help find this out is to cross-reference the objdump
> output of the ELF file with the I-LMB to find out where in your code the
> processor is.
>
> One thing you may want to try is to replace the GPIO drivers
> with simple XIo_Out32 commands that may take fewer processor clock
> cycles to execute.   It's possible that during this 512-clock-cycle OPB
> inactivity, MicroBlaze is actually executing the rest of the GPIO driver
> code.
>
> Matt
>
> Antti Lukats wrote:
> > Hi all,
> >
> > Christmas is closing so everybody is making presents. So is Xilinx. I
just
> > got mine! Read the story below:
> > ***********************************************************************
> > ISE/EDK/ChipScope update to 6.3
> >
> > In attempt to get our EDK based SoC systems up and running again in EDK
6.3
> > I ended up creating simplest possible SoC using BSB (because none of
working
> > EDK 6.2 projects worked after update no matter any attempts to get them
> > working). Attempted to debug in XMD: fatal disaster BRAM can be loaded
from
> > elf, can also look at the disassembly all is ok. Any attempt to trace or
> > execute simplest program and all goes pasta, not any more possible to
view
> > (or write) even to LMB RAM ! Then I added ChipScope bus analyzer core to
OPB
> > bus. And simplified the test application. Here is the source code:
> >
> > -----------------------------------------------------------
> > // Xilinx Christmas Lights application ver 1.0
> > while (1) {
> >    i++;
> >    WriteToGPOutput(XPAR_LED_7SEGMENT_BASEADDR, i);
> >  }
> > -----------------------------------------------------------
> > This is running in MicroBlaze SoC at system frequency 50MHz. Pretty much
all
> > leds should be lit, right? Or?
> >
> > It looks like (due to Christmas feeling !?) Xilinx tools have decided
for me
> > that my application should be "Christmas Lights" - because that how it
> > works! The LEDS are blinking in fancy true random fashion at "human"
blink
> > rate, ie very slowly. The visual effects are pretty cool, really!
> >
> > When looking in ChipScope (OPB bus analyzer) I see writes to the GPIO
port:
> > 0x00800000, 0x01000000,0x02000000,0xFFFFFFE,0xFFFFFFC... Those are 5
example
> > sequential writes to GPIO port (from the above program!), notice that
> > between the GPIO writes there is always more than 512 OPB clocks of no
OPB
> > bus activity.
> >
> > And yes, I did UNINSTALL ALL ISE/EDK/ChipScope before update, then
installed
> > all the new versions plus service packs, etc..
> >
> > ***********************************************************************
> >
> > Antti
> >
> >



Article: 77139
Subject: Re: Clock Synchronization
From: Klaus Schleisiek <Klaus.Schleisiek@spambin.com>
Date: Sat, 25 Dec 2004 13:59:03 +0100
Links: << >>  << T >>  << A >>
Neil schrieb:
> I am looking for some material about the various clock synchronization
> techniques, their advantages etc.

The following code works pretty reliably, no matter whether the input 
pulse is shorter or longer than the output-domain clock CLK. On each 
input "event", the output produces a pulse one CLK cycle long.

----------------------------------------------------------------
-- metastable safe spike detector
----------------------------------------------------------------

Library IEEE;
USE IEEE.STD_LOGIC_1164.ALL;

ENTITY spike IS
      PORT (reset   : IN  STD_LOGIC;
            clk     : IN  STD_LOGIC;
            i       : IN  STD_LOGIC;
            o       : OUT STD_LOGIC);
END spike;

ARCHITECTURE rtl OF spike IS

ATTRIBUTE init  : STRING;

SIGNAL hold     : STD_LOGIC := '0'; ATTRIBUTE init OF hold : SIGNAL IS "0";
SIGNAL edge_d   : STD_LOGIC_VECTOR(1 DOWNTO 0);
SIGNAL edge     : STD_LOGIC;
SIGNAL edge_set : STD_LOGIC;

BEGIN

o <= edge;

edge_set <= i AND NOT (edge OR hold OR edge_d(1));

async_edge : PROCESS (edge_set, clk)
BEGIN
   IF  edge_set='1'  THEN
      edge_d <= "11";
   ELSIF  rising_edge(clk)   THEN
      edge_d <= edge_d(0) & '0';
      edge <= '0';
      IF  edge_d="10"  THEN
        edge <= '1';
      END IF;
      IF  reset='1'  THEN
         edge_d <= "00";
      END IF;
   END IF;
END PROCESS async_edge;

pulse_holdoff : PROCESS (edge_set, i)
BEGIN
   IF  edge_set='1'  THEN
      hold <= '1';
   ELSIF  falling_edge(i)  THEN
      hold <= '0';
   END IF;
END PROCESS pulse_holdoff;

END rtl;

Klaus Schleisiek

kschleisiek AT XYfreenet.de
If you want to send me an e-mail, use above address and remove XY

Article: 77140
Subject: Synchronous design and power consumption
From: Klaus Schleisiek <Klaus.Schleisiek@spambin.com>
Date: Sat, 25 Dec 2004 14:12:04 +0100
Links: << >>  << T >>  << A >>
Can anybody give me hard facts on the power consumption ramifications 
for the following two design styles:

a) Fully synchronous design with appropriate clock enable signals for 
"slower" clock domain areas of the design.

b) Asynchronous design generating slower gated clock signals for those 
slow clock domain areas of the design.

In a), each flip-flop has to load the clock input capacitors on each 
clock transition, even if the clock enable signal is false and that will 
consume energy. But how much?

In b), we are sure to conserve energy, but at the cost of a dramatic 
increase in design complexity, because we have to use signal 
synchronisation contraptions whenever we go from one clock domain to 
another clock domain.

Is the added complexity of approach b) really worth the power savings I 
get out of it?

:)

Klaus Schleisiek

kschleisiek AT XYfreenet.de
If you want to send me an e-mail, use above address and remove XY

Article: 77141
Subject: Re: Clock Synchronization
From: Elder Costa <elder.costa@terra.com.br>
Date: Sat, 25 Dec 2004 12:17:13 -0200
Links: << >>  << T >>  << A >>
Neil wrote:

> Not PLL, I am looking at various types of synchronization techniques
> for signals crossing clock boundaries.
> 
> - Neil
> 

Is this the kind of stuff you are after?

http://www.cadence.com/whitepapers/cdc_wp.pdf
http://www.chipdesignmag.com/display.php?articleId=32&issueId=5
http://www.edn.com/article/CA310388.html

Also
http://www.google.com/search?q=crossing+clock+domains

HTH.

Regards.

Elder.

Article: 77142
Subject: Re: Using EDK libraries in ISE
From: "avrbasic" <avrbasic@hotmail.com>
Date: Sat, 25 Dec 2004 16:41:52 +0100
Links: << >>  << T >>  << A >>
Set up compile scripts and run the tools from a shell or the command line.
Attempting to load them into Project Navigator is likely to cause problems.
A similar thing exists with the LEON3 system:

it compiles with no problem from a script, but it is not possible to load it
into Project Navigator; PN simply messes up the libraries.

Antti


"Harish" <harish.vutukuru@gmail.com> wrote in message
news:1103852073.061564.154760@f14g2000cwb.googlegroups.com...
> Hello all,
>
> How can we simulate the EDK IP cores in ISE? I created a new project in
> ISE and  copied the vhdl files that comes with EDK onto a new file and
> tried to synthesize it. However the synthesis failed as the library
> referred to in the design was not seen by ISE. Can anyone tell me how
> to overcome this?
>
> Thanks
>



Article: 77143
Subject: Re: edk-chipscope 6.2 to 6.3 update
From: "avrbasic" <avrbasic@hotmail.com>
Date: Sat, 25 Dec 2004 16:53:30 +0100
Links: << >>  << T >>  << A >>

"Bob Perlman" <bobsrefusebin@hotmail.com> wrote in message
news:vuces0ppfo7jake9u20r25um83r24ebutn@4ax.com...
> On Mon, 20 Dec 2004 10:36:56 -0800, "Symon" <symon_brewer@hotmail.com>
> wrote:
>
> >Except that it's not good enough. The storage qualifier should be a clock
> >enable for the whole unit. Otherwise, P&R can stop because it's
impossible
> >to meet the clock timing recommendations, even though it'd be OK if the
> >qualification was done with an enable. When you select a clock for the
ILA,
> >you should also be able to select an enable to associate with that clock.
> >The enable should connect to the CE pins of all the storage devices. You
can
> >use an AND gate for the WE on the SRLUTs, I suppose. C'mon Xilinx, FIX
IT!
> >Or release the HDL for ChipScope so we can fix it for you. Don't make me
> >reverse engineer it! ;-)
> >Cheers, Syms.
>
> Yes!  I agree 470%!  (My approval percentages are arrived at with the
> same methods FPGA manufacturers use to come up with gate counts.)
>
> Bob Perlman
> Cambrian Design Works

The storage qualifier is definitely useful in some cases. But yes, a more
open on-chip instrumentation would be nice. It looks like none of the FPGA
vendors is interested in it :(
RE of ChipScope isn't so complex, but it doesn't make much sense; it would
be better to design one from the ground up!

ChipScope ICON is actually a JTAG "hub" that creates 15 virtual JTAG chains
with 2 pairs of 16 update signals per port.

control[35:0] actually is
TDI
TDO
TCK
update_lo[15:0]
update_hi[15:0]

update_lo[0] is used by all cores as the enable for a serial ROM that holds
the core ID and parameters. This serial ROM is scanned when the ChipScope
analyzer connects.

Altium LiveDesign uses a different approach: a separate JTAG TAP instance is
added for each core, and all of those are added to a secondary "soft" JTAG
chain that is controlled over a modified Xilinx Cable III where additional
pins are allocated to the secondary soft chain.

Altera SignalTap is, I think, more similar to ChipScope, in that the "hub"
is attached to the FPGA-internal TAP access primitive.

I do have some simple IP cores (in Verilog) that can be connected to ICON
and controlled by the ChipScope analyzer, some JAM scripts that can work
with ChipScope cores, and a Windows application that can also trigger and
read from an ILA core.

Please email me in private in case of interest.

Antti Lukats



Article: 77144
Subject: Re: Synchronous design and power consumption
From: "RobJ" <rsefton@abc.net>
Date: Sat, 25 Dec 2004 10:52:20 -0800
Links: << >>  << T >>  << A >>
Klaus Schleisiek wrote:
> Can anybody give me hard facts on the power consumption ramifications
> for the following two design styles:
>
> a) Fully synchronous design with appropriate clock enable signals for
> "slower" clock domain areas of the design.
>
> b) Asynchronous design generating slower gated clock signals for those
> slow clock domain areas of the design.
>
> In a), each flip-flop has to load the clock input capacitors on each
> clock transition, even if the clock enable signal is false and that
> will consume energy. But how much?
>
> In b), we are sure to conserve energy, but at the cost of a dramatic
> increase in design complexity, because we have to use signal
> synchronisation contraptions whenever we go from one clock domain to
> another clock domain.
>
> Is the added complexity of approach b) really worth the power savings
> I get out of it?
>

Klaus -

I can't give you hard numbers, but in an FPGA I think the power savings of 
b) vs. a) would be extremely small, possibly not even measurable. In a 
custom ASIC, where the clock trees themselves can consume a lot of power, it 
may be worth the effort. In an FPGA, where clock routing is a dedicated 
resource, that is not a factor. Also, flip-flops generate switching currents 
when their output state changes, not when their clock input toggles. I 
wouldn't even try to quantify the power difference between a) and b). For 
several reasons, go with a) and don't look back!

Rob 



Article: 77145
Subject: Re: PCI doubt
From: "Purvesh" <purveshkhona@yahoo.com>
Date: 25 Dec 2004 10:54:45 -0800
Links: << >>  << T >>  << A >>
Hi All,
I have been designing PCI/PCI-X cores for many, many years - here is a
small note on masters/initiators/targets/arbiters/completers etc.

PCI
===

Masters:
--------
Masters are the devices that are capable of starting a transaction on the
bus. They control how long the transaction remains active, unless they are
pre-empted by other masters (with the help of the arbiter) or premature
transaction termination occurs.

Slaves or Targets:
------------------
These are the devices that respond to transactions initiated by a master.
They have the ability to break the transaction prematurely.

Arbiters:
---------
Since PCI is a bus-based system, there can be numerous masters on the same
bus asking for the bus resource, hence you need an arbiter which determines
which master can access the bus (based on an internal algorithm not defined
in the spec). Generally arbiters are on system motherboards in a chip called
the central resource. However, for embedded systems the arbiter can reside
in any one of the masters. Also, lots of CPUs have a built-in PCI arbiter.
You can have only one arbiter per bus, unless a distributed arbitration
scheme is designed for (very hard).

PCI-X
=====

Initiators:
-----------
Initiators are similar to the masters of PCI.

Completers:
-----------
Completers are similar to the targets of PCI, with the added capability of
becoming bus master to complete a pending split transaction. Split
transactions are normal transactions started by an initiator, where the
completer realises that it cannot complete the transaction at that
particular time, so it tells the initiator to back off; when it has the read
data available, it becomes bus master and provides the initiator with the
data. Note that only read transactions can be split by a completer, and an
initiator cannot re-split a split completion transaction.

For a system you need both masters/initiators as well as slaves/completers.
Hope this clears some air. If not, send me a personal email.

-Purvesh


Article: 77146
Subject: Re: Synchronous design and power consumption
From: "RobJ" <rsefton@abc.net>
Date: Sat, 25 Dec 2004 11:03:31 -0800
Links: << >>  << T >>  << A >>
Klaus Schleisiek wrote:
> I'm talking battery operated missions here. Yes, I am interested in 10
> mWatts savings.
>

I think your best bet then is to get an FPGA eval board and run some 
experiments. Not with your real design. Just build a design with a bunch of 
counters running at different rates. Then try a) vs. b) and measure the 
actual core current consumption. You've got me curious now. If you do it 
please post your results.

Rob 



Article: 77147
Subject: PS: Synchronous design and power consumption
From: Klaus Schleisiek <Klaus.Schleisiek@spambin.com>
Date: Sat, 25 Dec 2004 20:05:37 +0100
Links: << >>  << T >>  << A >>
I'm talking battery operated missions here. Yes, I am interested in 10 
mWatts savings.

Klaus Schleisiek schrieb:
> Can anybody give me hard facts on the power consumption ramifications 
> for the following two design styles:
> 
> a) Fully synchronous design with appropriate clock enable signals for 
> "slower" clock domain areas of the design.
> 
> b) Asynchronous design generating slower gated clock signals for those 
> slow clock domain areas of the design.
> 
> In a), each flip-flop has to load the clock input capacitors on each 
> clock transition, even if the clock enable signal is false and that will 
> consume energy. But how much?
> 
> In b), we are sure to conserve energy, but at the cost of a dramatic 
> increase in design complexity, because we have to use signal 
> synchronisation contraptions whenever we go from one clock domain to 
> another clock domain.
> 
> Is the added complexity of approach b) really worth the power savings I 
> get out of it?
> 
> :)
> 
> Klaus Schleisiek
> 
> kschleisiek AT XYfreenet.de
> If you want to send me an e-mail, use above address and remove XY


Article: 77148
Subject: Re: PS: Synchronous design and power consumption
From: "Purvesh" <purveshkhona@yahoo.com>
Date: 25 Dec 2004 11:07:39 -0800
Links: << >>  << T >>  << A >>
Hi,

I don't have hard numbers either, but a clock running all the time is
definitely going to consume the most power. As far as clock gating is
concerned, it is definitely worth the effort if you want battery operation
with FPGAs, but it means you won't be able to use the dedicated clock
routing resources, since you will be generating a gated clock. Be extremely
careful in this case - I would recommend running a formal verification tool
and also running gate-level simulations with SDF.

If you don't mind waiting, I believe Stratix II or Virtex-4 is going to have
true gated-clock support. If you use clock enables, you are only saving flop
switching power.

-Purvesh


Article: 77149
Subject: SATA/SAS designs with FPGA
From: "Purvesh" <purveshkhona@yahoo.com>
Date: 25 Dec 2004 11:28:18 -0800
Links: << >>  << T >>  << A >>
Hi All,

Has anyone implemented SATA/SAS with FPGAs? It seems that neither RocketIO
nor the MGTs in Stratix GX are capable of handling the OOB signalling of
SATA/SAS.

My question is: which SerDes did you use to work around the OOB problem?

-Purvesh



