Messages from 114125

Article: 114125
Subject: ISE 8.2sp3 clobbering source file timestamps?
From: "Brian Davis" <brimdavis@aol.com>
Date: 4 Jan 2007 19:58:24 -0800
 Against my better judgment, I tried using 8.2sp3 again tonight
by converting and building an older, working 6.3 project.

 I created a new build directory, copied in the old .npl and .ucf
files, pointed 8.2sp3 at the old .npl project file, said yes to the
.npl format translation messages, then did a bitstream build.

 When I went back to my external text editor, I was greeted by many
warnings that my open source files { stored outside the project
directory, not 'local copies' } had been updated outside the editor.

 Checking further, the file contents matched my backups, but all of
the source file timestamps had suspiciously been changed to the
exact same time/date as the creation of the .zip file used by ISE
for the old project backup, as though ISE had decided to "touch"
all the source files ( see directory listings below ).

Has anyone else seen this behavior?

Does this only happen during the import of an old .npl file, or does
the ISE 8.2 project manager decide to clobber the file timestamps
of all user sources on other special occasions?

Brian

.\syn_xst_8v2
  01/04/2007  09:57 PM             2,158 evb_ise7_bak.zip

.\cores\xc_uart

  01/04/2007  09:57 PM            11,705 kcuart_tx.vhd
  01/04/2007  09:57 PM            10,376 kcuart_rx.vhd
  01/04/2007  09:57 PM             8,392 bbfifo_16x8.vhd
  01/04/2007  09:57 PM             5,120 uart_rx.vhd
  01/04/2007  09:57 PM             5,192 uart_tx.vhd

.\vhdl
  01/04/2007  09:57 PM             3,302 flip.vhd
  01/04/2007  09:57 PM            17,751 evb.vhd
  01/04/2007  09:57 PM             3,190 rf.vhd
  01/04/2007  09:57 PM            14,408 rom_out.vhd
  01/04/2007  09:57 PM             3,882 rstack.vhd
  01/04/2007  09:57 PM            12,260 block_ram.vhd
  01/04/2007  09:57 PM             1,870 pw2_rom.vhd
  01/04/2007  09:57 PM             3,234 ffb.vhd
  01/04/2007  09:57 PM             2,610 bitcnt.vhd
  01/04/2007  09:57 PM             2,779 y1_config.vhd
  01/04/2007  09:57 PM            14,536 y1_constants.vhd
  01/04/2007  09:57 PM            52,444 y1_core.vhd


Article: 114126
Subject: Re: DC timing violation, what to do first?
From: "Thomas Stanka" <usenet_10@stanka-web.de>
Date: 5 Jan 2007 00:04:59 -0800
Hi,

Davy wrote:
> I am new to Synopsys DC. And I have a basic problem. When I find timing
> violation in DC report, what shall I do first?
>
> 1. Shall I change the script of DC? To let the tools do something like
> retiming?
> 2. Shall I change the RTL code? To pipeline the comb logic manually?
> 3. Other choice, please recommend.

I guess you are new to synthesis, too.
First, don't worry: Synopsys DC doesn't know your timing, it only
estimates it. The real timing is available after layout.
If Synopsys DC estimates only a small slack violation, I would try to
go ahead. Otherwise you need more substantial changes to the RTL code.

If you want to meet timing during synthesis, you need to learn
something about how Synopsys estimates whether you meet timing or not
(see the documentation for a first step).

bye Thomas


Article: 114127
Subject: Altera Cyclone II die revision?
From: "Manfred Balik" <manfred.balik@tuwien.ac.at>
Date: Fri, 5 Jan 2007 09:09:45 +0100
from the Cyclone II errata sheet:
The die revision is identified by the alphanumeric character (Z) before the 
fab code (first two alphanumeric characters) in the date code printed on the 
top side of the device.
A X?Z  ##   ####
         ^ Die Revision

My EP2C20F484C8N is labelled with (it looks a little bit different :-( ):
K CAA9T0619A
Is the die revision A?
I need to know this because I use dual-clock FIFOs. (Do I need the
workaround for the "M4K block write operations may fail ..." erratum in
revision A?)

Thanks, Manfred 



Article: 114128
Subject: Re: SUNDANCE FPGA CONFIGURATION
From: "Pablo" <pbantunez@gmail.com>
Date: 5 Jan 2007 00:39:40 -0800

Martin Thompson wrote:
> Hi Pablo,
>
> "Pablo" <pbantunez@gmail.com> writes:
>
> > Hi, has anybody worked with a fgpa/dsp board solution from "sundance"?
> > I need some information about the configuration and "bitstream
> > download" in those boards. In the web, they say that you need Code
> > Composer Studio for the DSP and 3L Diamond for FPGA but I want to know
> > if some other tools like ISE, XPS and IMPACT can be used for that.
> >
> > The reference is
> > http://www.sundance.com/web/files/productpage.asp?STRFilter=SMT761Q.
> >
>
> Yes, you can use Impact to configure the FPGA via a header on the
> board (well, you can on the SMT374 - I assume the ithers can as well).
>
> If you want a standalone boot, you need to program the DSP to
> configure the FPGA by bitbanging.
>
> You don't *need* the 3L Diamond stuff.  You do need CCS if you want to
> write code for the DSP.
>
> We did this - we use the SMT374 for an embedded automotive image
> processing platform - we use none of Sundance's stuff at all!
>
> Cheers,
> Martin
>
> --
> martin.j.thompson@trw.com
> TRW Conekt - Consultancy in Engineering, Knowledge and Technology
> http://www.conekt.net/electronics.html

Thank you. I don't know that board in particular, but it is a DSP-FPGA
hybrid. I have some practice with a Xilinx kit (Spartan-3E Starter Kit)
and I hoped that it could serve me.

With respect to the DSP, I have CCS 3.1, and I only need some practice
programming the FPGA. Do you know of any book or manual about this?

Thanks, again.

Pablo


Article: 114129
Subject: Re: ERROR:NgdBuild:604
From: "Guru" <ales.gorkic@email.si>
Date: 5 Jan 2007 00:59:56 -0800
Maybe the wizard did it for you.
I was talking about the manual peripheral creation (i.e. without
wizard).

Cheers,

Guru


Venu wrote:
> Thanks Guru,
>
> I too was able to resolve the problem that I was facing ... I did as
> follows:
>
> 1)I copied the file custom_bram.edn into the pcores/<core>/netlist
> directory.
> 2)In the import peripheral wizard, I ticked the option to use netlist
> files also as source files for the peripheral , and entered the path of
> the netlist file specified above.
>
> Now I am not getting any errors on bitstream generation.... :)
>
> My solutions seems to be similar to yours except for one difference
> ..... in the directory created by Xilinx Core Lib , I DO NOT see any
> file custom_bram.bbd ...
>
> The *.bbd has been discussed in another posts also , but I am yet to
> come across it in any of my designs... I am using EDK8.02.02i and
> ISE8.02.03i.
>
> "bbd" stands for black box description ... when does this concept come
> into the design flow?
> 
> Thanks Again
> Venu


Article: 114130
Subject: Re: measure setup and hold time
From: "Lars" <larthe@gmail.com>
Date: 5 Jan 2007 01:20:37 -0800

axr0284 wrote:
> Hi I am currently working on a RGMII interface using the SPARTAN 3E
> FPGA.
>
> I have 1 clock pin (phy_rx_clk) feeding into a DCM and 2 DCM output
> clk0 and clk180 being used in my design.
> There is also an external module which I have no control over that will
> be sending DDR data and clock with the data have a minimum setup time
> of 1.4 ns and minimum hold time of 1.2 ns.
>
> I need to measure the setup time of the data when it reaches the first
> flip flop of the DDR which is found in the IOB itself.
>
> So I setup the constraint to have 2 ns setup time wrt the input clock
> called phy_rx_clk
> Now the timing analyzer tells me that it actually needs a setup time of
> 3.9 ns and I am wondering why it needs such a long setup time.
>
> Wouldn't the DCM introduce some delay in the clock line wrt to the data
> line thus reducing the setup time.
>
> Is there anyway to decrease this setup time to what I need.
>
>
> ------------------------------------------------------------------------------------------------------
> * COMP "rgmii_rx_ctrl" OFFSET = IN 2 ns BEFORE COMP "phy_rx_clk" HIGH
>  | Requested  | Actual       | Logic  | Absolute   |Number of Levels
>  | 2.000ns       | 3.928ns    | 0        | -1.928ns   | 2
> ------------------------------------------------------------------------------------------------------
>
> Thanks for any answer.
> Amish

If your timing report states a negative value for hold time (and
assuming that this is not needed), you can trade (at least some) of
this time for less setup time.

Most likely the tools have inserted some input delay in the IO block to
assert a negative hold time for your input. This is normally not needed
when working with DCMs. Check IBUF_DELAY_VALUE and IFD_DELAY_VALUE in
the constraints guide. A quick check is to open the design in
FPGA Editor and have a look inside the IO block to see what values have
been assigned to these parameters.

/Lars


Article: 114131
Subject: Spartan3E minimum clock-to-output (hold time)
From: "Lars" <larthe@gmail.com>
Date: 5 Jan 2007 01:24:57 -0800
I've been looking all over but have been unable to find any numbers for
the minimum clock-to-output propagation delay of the Spartan-3E. Anyone
have a clue as to what can be expected? The design is very
straightforward: a clock input to a GCLK pin, and a BUFG clock driver
clocking an IO-block data and output-enable flip-flop. The maximum
clock-to-output delay is reported at some 6.8 ns. This matches the data
in the Spartan-3E data sheet tables 85, 90 and 93 well (5.51 ns +
0.43 ns + 0.70 ns for an LVTTL clock input and an LVTTL 12 mA SLOW
output).

But how can I find the minimum delay to satisfy the external
component's hold time requirement of 1.5 ns? My reasoning is that the
path clock pad -> clock driver -> output FF -> output pad "ought" to be
more than 1.5 ns under all conditions, but it would sure feel better if
I could have it in writing!

/Lars


Article: 114132
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: "Lars" <larthe@gmail.com>
Date: 5 Jan 2007 01:56:03 -0800

Lars wrote:
> I've been looking all over but have been unable to find any numbers for
> minimum clock-to-output propagation delay for Spartan3E. Anyone have a
> clue as to what can be expected? The design is very straight forward: A
> clock inut to a GCLK pin, a BUFG clock driver clocking an IO-block data
> and output enable flip-flop. Maximum clock-to-output delay is reported
> at some 6.8 ns. This matches fine the data in the Spartan3E data sheet
> tables 85, 90 and 93 (5.51 ns + 0.43 ns + 0.70 ns for LVTTL clock input
> and LVTTL 12 mA SLOW output).
>
> But how can I find the minimum delay to satisfy the external component
> hold time requirement of 1.5 ns? My reasoning that the fetch loop
> clock-pad -> clock driver -> output ff -> output pad "ought" to be more
> than 1.5 ns under all conditions, but it would shure feel better if I
> could have it in writing!
>
> /Lars

It seems that I (once again) didn't do enough STFW before I posted.
There is a thread (Best Case Timing Parameters) covering this topic.
The conclusion seems to be to use 1/4 of the max value, except when
using DCMs, where this will be too optimistic. In my case, I seem to be
OK following this rule.

I still think there ought to be some data in the specs about this, as
many external components (SDRAMs, DSPs etc.) require a non-zero hold
time.

/Lars
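
A rough sanity check of that rule of thumb against the numbers in the
original post, as a minimal Python sketch (the 1/4-of-max figure is only
a rule of thumb from this thread, and later replies note that it
strictly applies to the max of the fastest speed grade, so treat the
result as optimistic rather than guaranteed):

    # Rough hold-margin check using the 1/4-of-max rule of thumb.
    tco_max_ns  = 6.8             # max clock-to-output reported by the tools
    tco_min_ns  = tco_max_ns / 4  # rule-of-thumb estimate of the minimum
    hold_req_ns = 1.5             # external component's hold-time requirement

    margin_ns = tco_min_ns - hold_req_ns
    print("estimated min Tco = %.2f ns, hold margin = %.2f ns"
          % (tco_min_ns, margin_ns))
    # -> estimated min Tco = 1.70 ns, hold margin = 0.20 ns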


Article: 114133
Subject: Re: SUNDANCE FPGA CONFIGURATION
From: Martin Thompson <martin.j.thompson@trw.com>
Date: 05 Jan 2007 11:22:25 +0000
"Pablo" <pbantunez@gmail.com> writes:

> Martin Thompson wrote:
> > "Pablo" <pbantunez@gmail.com> writes:

> > If you want a standalone boot, you need to program the DSP to
> > configure the FPGA by bitbanging.
> >

> With respect to DSP I have CCS 3.1 and I only have to take some
> practice to program the FGPA. Do you know any book or manual about
> this?.
>

If you mean the bitbanging part - Sundance have sample code IIRC and
it was fairly simple.

You have to get the bitstream into your application somehow - we have
a python script which takes the bitstream and creates a .c file with a
big array full of numbers!

HTH!

Martin

-- 
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html
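
A minimal Python sketch of the kind of script Martin describes,
embedding a configuration file byte-for-byte as a C array (the file
names, array name and row width are arbitrary; note that a Xilinx .bit
file also carries a header in front of the raw configuration data,
which would need to be stripped or skipped by the DSP code):

    # bit2c.py - dump a configuration bitstream into a C file as a byte array
    import sys

    def bit_to_c(bit_path, c_path, array_name="fpga_bitstream"):
        with open(bit_path, "rb") as f:
            data = f.read()
        with open(c_path, "w") as out:
            out.write("/* generated from %s - do not edit */\n" % bit_path)
            out.write("const unsigned char %s[%d] = {\n" % (array_name, len(data)))
            for i in range(0, len(data), 12):
                row = ", ".join("0x%02x" % b for b in data[i:i + 12])
                out.write("    %s,\n" % row)
            out.write("};\n")
            out.write("const unsigned long %s_len = %dUL;\n" % (array_name, len(data)))

    if __name__ == "__main__":
        bit_to_c(sys.argv[1], sys.argv[2])

The generated .c file is then compiled into the DSP application, and
the bitbanging code clocks the array out to the FPGA's configuration
pins.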

   

Article: 114134
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: "Symon" <symon_brewer@hotmail.com>
Date: Fri, 5 Jan 2007 11:22:42 -0000
"Lars" <larthe@gmail.com> wrote in message 
news:1167990963.608936.301440@42g2000cwt.googlegroups.com...
>
>
> I still think there ought to be some data in the specs about this, as
> many external components (SDRAMs, DSPs etc.) require a non-zero hold
> time.
>
Hi Lars,
I'm sure you know that you can use the DCM to choose whatever phase shift of 
your clock you need. (Provided you meet certain frequency constraints)
HTH, Syms. 



Article: 114135
Subject: Re: lead free bga pads
From: "Symon" <symon_brewer@hotmail.com>
Date: Fri, 5 Jan 2007 11:50:26 -0000
"PeteS" <PeterSmith1954@googlemail.com> wrote in message 
news:1167756610.915575.324880@42g2000cwt.googlegroups.com...
>
> I'm facing that issue right now for a FT256 package, and I'll do what
> I've done with all the other RoHS BGA parts I've had done leadfree (and
> that's quite a few) - use non-SMD pads with the recommended sizes from
> the leaded versions.
>
> Cheers
>
> PeteS
>
Hi Pete,
I read somewhere* that engineers used solder mask defined (SMD) pads for 
early BGAs because the parts were made of ceramic and could pull the pads 
off the board during vibration testing. With a SMD pad I guess the pad area 
can be larger to stop this. When lighter packages came along, the SMD pads 
were no longer needed, but folks carried on with SMD pads maybe because** 
"that's the way we've always done it". However, it turns out that SMD pads 
have a proclivity for microvoids*** at the pad/solder boundary.
So, I wonder, now that Xilinx BGAs have metal heat spreaders built in, 
whether their mass has increased enough to make the SMD pad necessary once 
again. What do you think?

Cheers, Syms.

* http://www.smta.org/knowledge/proceedings_abstract.cfm?PROC_ID=252
** http://www.ojar.com/view_2857.htm
*** http://www.microvoids.org/ 



Article: 114136
Subject: Re: Surface mount ic's
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Fri, 05 Jan 2007 12:25:02 +0000
On 4 Jan 2007 11:46:15 -0800, "Andy Peters" <google@latke.net> wrote:

>Dave Pollum wrote:
>> Bob Perlman wrote:
>> > On Tue, 2 Jan 2007 10:14:21 -0000, "Symon" <symon_brewer@hotmail.com>
>> > wrote:
>> >
>> > >p.s. On the subject of US vs. UK toasting techniques, IMO the problem in the
>> > >US is not the toaster machines, it's the post-toasting technology. The toast
>> > >just gets piled up on a plate. The US seems to be a veritable toast rack
>> > >desert, so the toast always ends up soggy. I guess that's why it's IHOP and
>> > >not IHOT! :-)
>> >
>>  Bob Perlman said:
>>  We are an odd country: as you've suggested, we have very poor
>>  post-toasting technology, but we do have a cereal called Post
>>  Toasties.  No one can explain this.
>>
>> And we drive on parkways and park on driveways.  And we "pre-drill" and
>> "pre-heat". ;)
>> -Dave Pollum
>
>And special people get to "pre-board" an airplane!

then - scary stuff -
 
the pilot announces the plane "will be in the air momentarily"

(yikes! let me off now!)

http://www.askoxford.com/concise_oed/momentarily?view=uk

- Brian

Article: 114137
Subject: Re: ISE 8.2sp3 clobbering source file timestamps?
From: "Lars" <larthe@gmail.com>
Date: 5 Jan 2007 04:44:00 -0800

Brian Davis wrote:
> Against my better judgment, I tried using 8.2sp3 again tonight
> by converting and building an older, working 6.3 project.
>
>  I created a new build directory, copied in the old .npl and .ucf
> files, pointed 8.2sp3 at the old .npl project file, said yes to the
> .npl format translation messages, then did a bitstream build.
>
>  When I went back to my external text editor, I was greeted by many
> warnings that my open source files { stored outside the project
> directory, not 'local copies' } had been updated outside the editor.
>
>  Checking further, the file contents matched my backups, but all of
> the source file timestamps had suspiciously been changed to the
> exact same time/date as the creation of the .zip file used by ISE
> for the old project backup, as though ISE had decided to "touch"
> all the source files ( see directory listings below ).
>
> Has anyone else seen this behavior ?
>
> Does this only happen during the import of an old .npl file, or does
> the ISE8.2 project manager decide to clobber the file timestamps
> of all user sources on other special occasions ?
>
> Brian
>
> .\syn_xst_8v2
>   01/04/2007  09:57 PM             2,158 evb_ise7_bak.zip
>
> .\cores\xc_uart
>
>   01/04/2007  09:57 PM            11,705 kcuart_tx.vhd
>   01/04/2007  09:57 PM            10,376 kcuart_rx.vhd
>   01/04/2007  09:57 PM             8,392 bbfifo_16x8.vhd
>   01/04/2007  09:57 PM             5,120 uart_rx.vhd
>   01/04/2007  09:57 PM             5,192 uart_tx.vhd
>
> .\vhdl
>   01/04/2007  09:57 PM             3,302 flip.vhd
>   01/04/2007  09:57 PM            17,751 evb.vhd
>   01/04/2007  09:57 PM             3,190 rf.vhd
>   01/04/2007  09:57 PM            14,408 rom_out.vhd
>   01/04/2007  09:57 PM             3,882 rstack.vhd
>   01/04/2007  09:57 PM            12,260 block_ram.vhd
>   01/04/2007  09:57 PM             1,870 pw2_rom.vhd
>   01/04/2007  09:57 PM             3,234 ffb.vhd
>   01/04/2007  09:57 PM             2,610 bitcnt.vhd
>   01/04/2007  09:57 PM             2,779 y1_config.vhd
>   01/04/2007  09:57 PM            14,536 y1_constants.vhd
>   01/04/2007  09:57 PM            52,444 y1_core.vhd

Maybe a little off topic (I do not know why ISE "touches" the source
files), but in my opinion Xilinx plays tricks on us users in the way
source files are referenced in the project file. I do not have any
experience with 6.3, but at least in 8.1 you get the option to display
source file paths as absolute or relative. That sure looks fine, but
internally all files are referenced with absolute paths! So copying a
project will result in the copy still referring to all the original
source files. I like to keep my own versions of entire designs, trying
out different code fixes in different projects, but the only way to do
this is to open the copy of the project, delete all source file
references and add the new ones from scratch. This may be fixed in 8.2;
I haven't had that much experience with it yet, but it sure makes
moving or copying projects to a new location a hassle...

In your case, I believe ISE 6.3 still had an ASCII project file, so
making a copy, manually editing the source file references and then
opening that copy in 8.2 ought to do the trick! Later versions are
hampered by the binary-format project file, cursed by so many users.

/Lars


Article: 114138
Subject: Re: PPC cache errata
From: "cpmetz@googlemail.com" <cpmetz@googlemail.com>
Date: 5 Jan 2007 04:53:30 -0800
Well, I can only speak for myself, and I'm no hardware engineer. Most
of what I do is software, and in my job we had the question: are FPGAs
an easy way to build and evaluate different SoC designs, to support
some speedup (e.g. TCP offloading, pattern recognition on the fly,
etc.), and is there a stable process for producing these designs? Our
first guess was the EDK/SOPC/whatever way of doing things. We found out
very quickly that the included IP doesn't cover all of our needs. Then
we wanted to know how complex it is to build our own design, something
which stresses some hardware and is easily explained to the higher
ranks. The easiest one (from my perspective) is an adapted GSRD
example: most boards have Ethernet on board, stacks are freely
available, etc. We added some RGMII support for our ML410 and got it
running (using lwIP, TCP offloading etc., except when using the
ICache...).

I'm sure it is possible to build highly efficient designs with respect
to latency and throughput on the busses, but these designs need some
major experience (like you seem to have :-) ) and don't allow using
common off-the-shelf IP. I don't want to have to care about slow IPIF
implementations or special access patterns/scheduling of functions to
get the most out of some hardware, but I know this won't happen for
some time... Nevertheless I think the EDK goes in the right direction,
allowing non-EEs like me to use FPGAs for system evaluation.



> The last thing I wonder about is this: Why the fascination with the
> GSRD design as a starting point?  The power of this design comes from
> the fact that it allows a system with a memory bank that is capable of
> providing multiples of the easy to get to 800MB/sec performance of a
> PLB bus, to multiple such sources.  With a x16 DDR configuration, and
> 400Mb/pin, you would only be capable of filling a single PLB bus.  PLB
> bus masters can be extremely compact and easy to implement.  And 70%+
> utilization of the PLB bus is quite easy to achieve with
> quad-doubleword transfers.  Double the burst length and your idle time
> on the bus will drop 2x.


Article: 114139
Subject: Re: PPC cache errata
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Fri, 05 Jan 2007 12:54:47 +0000
On 4 Jan 2007 10:53:02 -0800, "Erik Widding" <widding@birger.com> wrote:

>Guru wrote:
...

>> Frustration is what you get when you use PPC. If you want to have
>> little more performance than MB then this route is inevitable.
>
>Respectfully, I suggest you read all of the documentation regarding the
>cache, specifically the initial conditions (i.e. power up, and post
>reset state), and the setup and initialization procedures.

If you're suggesting the OP do this, it would probably be helpful (it
would certainly be helpful to me!) to name or even link the most useful
document(s) or chapter(s). 
It is all too easy to get lost in the documentation - either not even
knowing which one to start with, or thinking "I saw that somewhere,
three months ago ... but where?"

>The last thing I wonder about is this: Why the fascination with the
>GSRD design as a starting point?  
[...]
>If we were to release an application note (or XCell Journal article)
>that described how to architect an efficient coreconnect based system
>with very high throughput, that also included basic VHDL behavioral
>code that implemented the following most basic cores:
>   OPB Slave that can be 32bit read/written
>   OPB Slave (for setup) / PLB Master that does quad-doubleword reads

I would add to this list, the most basic OPB master capable of bursting
data to/from another slave. (Along the lines of the "initiator" in the
PCI ping example)

>Would that help with all of this silliness of new users being drawn to
>these overly complicated reference designs? 

VERY VERY much so!

I spent several unproductive weeks trying to figure out where to start.
(It didn't help that I started from a vendor-specific example design
which didn't port to the EDK version I was using, but that's another
story)

I just assumed from the sheer volume of ready-made interface designs
that there must be some non-obvious difficulty with interfacing directly
to the buses, so the best place to start was one of the ready-made
designs (OPB_IPIF in my case). A bare-metal example proving otherwise
would have been a goldmine.

Even then, deciding which version to use, I picked the "wrong" one.
Newer = better, right? So choose version 3.01 over the (obsolescent?)
version 2.xx. But 2.xx supports bus mastering (but not slave burst
transfers) and 3.01 supports burst transfers but not bus mastering.

As a new user, it was a maze, and there really wasn't the time to get to
grips with it all.

A year later, I will have to re-visit the design sometime, to clear up
the mess (it works, but performance suffers) so even now I'd find this
appnote extremely useful.

I'm going to guess, approx half the difficulty will be in creating the
.mpd, .pao etc hooks to enable its use in EDK?

One vote for the article...

- Brian

Article: 114140
Subject: Re: measure setup and hold time
From: "axr0284" <axr0284@yahoo.com>
Date: 5 Jan 2007 05:17:46 -0800
It seems that for the Spartan-3E, you cannot change the delay in the
input block.
Amish

Lars wrote:
> axr0284 wrote:
> > Hi I am currently working on a RGMII interface using the SPARTAN 3E
> > FPGA.
> >
> > I have 1 clock pin (phy_rx_clk) feeding into a DCM and 2 DCM output
> > clk0 and clk180 being used in my design.
> > There is also an external module which I have no control over that will
> > be sending DDR data and clock with the data have a minimum setup time
> > of 1.4 ns and minimum hold time of 1.2 ns.
> >
> > I need to measure the setup time of the data when it reaches the first
> > flip flop of the DDR which is found in the IOB itself.
> >
> > So I setup the constraint to have 2 ns setup time wrt the input clock
> > called phy_rx_clk
> > Now the timing analyzer tells me that it actually needs a setup time of
> > 3.9 ns and I am wondering why it needs such a long setup time.
> >
> > Wouldn't the DCM introduce some delay in the clock line wrt to the data
> > line thus reducing the setup time.
> >
> > Is there anyway to decrease this setup time to what I need.
> >
> >
> > ------------------------------------------------------------------------------------------------------
> > * COMP "rgmii_rx_ctrl" OFFSET = IN 2 ns BEFORE COMP "phy_rx_clk" HIGH
> >  | Requested  | Actual       | Logic  | Absolute   |Number of Levels
> >  | 2.000ns       | 3.928ns    | 0        | -1.928ns   | 2
> > ------------------------------------------------------------------------------------------------------
> >
> > Thanks for any answer.
> > Amish
>
> If your timing report states a negative value for hold time (and
> assuming that this is not needed), you can trade (at least some) of
> this time for less setup time.
>
> Most likely the tools have inserted some input delay in the IO block to
> assert a negative hold time for your input. This is normally not needed
> when working with DCMs. Check IBUF_DELAY_VALUE and IFD_DELAY_VALUE in
> the constraints guide. A quick check is to open the design in
> FPGA Editor and have a look inside the IO block to see what values have
> been assigned to these parameters.
> 
> /Lars


Article: 114141
Subject: Re: How to deal with the negative value
From: "Tom J" <tj@pallassystems.com>
Date: 5 Jan 2007 06:13:52 -0800

Tom J wrote:

> If you're looking to end up with 8-bit 2's complement integers, then
> you should add 2^7 not 2^8.

Sorry, I was confusing the integer value of the sign bit
post-conversion with the value of the constant added during the
conversion.  2^8 is right.

As for the rest, if you are starting with integer values, just send the
bits and interpret them as you need.  So I'll assume that you are
starting with floating point values.

Assuming you are using Matlab, I recommend using the fix() function.
This converts float to integer (rounding to zero).

If not, and you have to build your own conversion routine:
First, you must restrict your range to -1<= x <1, not |x|<=1.  Subtle
but possibly important.

Then if xi is the N-bit 2's complement value:
     if (x>=0)   xi = floor( x*2^(N-1) )
     if (x<0)   xi = floor( x*2^(N-1) ) + 2^N

Alternatively, without the conditional (2's complement = offset binary
with MSB flipped)
   xi = bitxor(  2^(N-1),  floor( (x+1) * 2^(N-1) )   );

For improved accuracy, you can round rather than truncate by adding 0.5
to the floor() argument, but then you have to capture the possible
positive result of 2^(N-1) and saturate it to 2^(N-1)-1.  (Note, this
scheme is "rounding to negative infinity".  You can also "round to
zero" or "round to positive infinity".)

Finally, I think I understand your loop to the extent you are trying to
strip off the bits, perhaps to send them serially; however, you seem to
be working with the floating point value, x.  I would recommend
converting to the integer value, xi, first, and then stripping off the
bits.

   for k=1:N
       b(k) = bitand(xi,1);
       xi = bitshift(xi,-1);
   end

Tom
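
The same recipe as a minimal, self-contained Python sketch, for anyone
without MATLAB at hand (this uses the first, conditional form of the
conversion with plain truncation; N and the example value are
arbitrary):

    # Convert x in [-1.0, 1.0) to an N-bit two's complement code, then
    # strip the bits off least-significant first, as described above.
    import math

    def float_to_twos(x, n=8):
        if not -1.0 <= x < 1.0:
            raise ValueError("x must lie in [-1.0, 1.0)")
        xi = math.floor(x * 2 ** (n - 1))   # truncate toward negative infinity
        if xi < 0:
            xi += 2 ** n                    # wrap negatives into the upper half
        return xi

    def bits_lsb_first(xi, n=8):
        return [(xi >> k) & 1 for k in range(n)]

    code = float_to_twos(-0.5, 8)
    print(hex(code), bits_lsb_first(code, 8))
    # -> 0xc0 [0, 0, 0, 0, 0, 0, 1, 1]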


Article: 114142
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: "Bob" <nimby_NEEDSPAM@roadrunner.com>
Date: Fri, 5 Jan 2007 07:13:13 -0800

"Lars" <larthe@gmail.com> wrote in message 
news:1167990963.608936.301440@42g2000cwt.googlegroups.com...
>
> Lars wrote:
>> I've been looking all over but have been unable to find any numbers for
>> minimum clock-to-output propagation delay for Spartan3E. Anyone have a
>> clue as to what can be expected? The design is very straight forward: A
>> clock inut to a GCLK pin, a BUFG clock driver clocking an IO-block data
>> and output enable flip-flop. Maximum clock-to-output delay is reported
>> at some 6.8 ns. This matches fine the data in the Spartan3E data sheet
>> tables 85, 90 and 93 (5.51 ns + 0.43 ns + 0.70 ns for LVTTL clock input
>> and LVTTL 12 mA SLOW output).
>>
>> But how can I find the minimum delay to satisfy the external component
>> hold time requirement of 1.5 ns? My reasoning that the fetch loop
>> clock-pad -> clock driver -> output ff -> output pad "ought" to be more
>> than 1.5 ns under all conditions, but it would shure feel better if I
>> could have it in writing!
>>
>> /Lars
>
> It seems that i (once again) didn't do enough of STFW before I posted.
> There is a thread (Best Case Timing Parameters) covering this topic.
> The conclusion seems to be to use 1/4 of the max value, except when
> using DCMs, where this will be too optimistic. In my case, I seem to be
> OK following this rule.
>
> I still think there ought to be some data in the specs about this, as
> many external components (SDRAMs, DSPs etc.) require a non-zero hold
> time.
>
> /Lars
>

Lars,

We had been given a similar number by Xilinx. We, too, thought that they 
meant that the minimum clock-to-out would be 25% of the maximum. However, 
when we were discussing this recently with them they said, "No, we mean a 
25% decrease in clock-to-out  with respect to the maximum."

So, now I'm confused, again.

Bob



Article: 114143
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: "Lars" <larthe@gmail.com>
Date: 5 Jan 2007 07:56:03 -0800

Bob wrote:
> "Lars" <larthe@gmail.com> wrote in message
> news:1167990963.608936.301440@42g2000cwt.googlegroups.com...
> >
> > Lars wrote:
> >> I've been looking all over but have been unable to find any numbers for
> >> minimum clock-to-output propagation delay for Spartan3E. Anyone have a
> >> clue as to what can be expected? The design is very straight forward: A
> >> clock inut to a GCLK pin, a BUFG clock driver clocking an IO-block data
> >> and output enable flip-flop. Maximum clock-to-output delay is reported
> >> at some 6.8 ns. This matches fine the data in the Spartan3E data sheet
> >> tables 85, 90 and 93 (5.51 ns + 0.43 ns + 0.70 ns for LVTTL clock input
> >> and LVTTL 12 mA SLOW output).
> >>
> >> But how can I find the minimum delay to satisfy the external component
> >> hold time requirement of 1.5 ns? My reasoning that the fetch loop
> >> clock-pad -> clock driver -> output ff -> output pad "ought" to be more
> >> than 1.5 ns under all conditions, but it would shure feel better if I
> >> could have it in writing!
> >>
> >> /Lars
> >
> > It seems that i (once again) didn't do enough of STFW before I posted.
> > There is a thread (Best Case Timing Parameters) covering this topic.
> > The conclusion seems to be to use 1/4 of the max value, except when
> > using DCMs, where this will be too optimistic. In my case, I seem to be
> > OK following this rule.
> >
> > I still think there ought to be some data in the specs about this, as
> > many external components (SDRAMs, DSPs etc.) require a non-zero hold
> > time.
> >
> > /Lars
> >
>
> Lars,
>
> We had been given a similar number by Xilinx. We, too, thought that they
> meant that the minimum clock-to-out would be 25% of the maximum. However,
> when we were discussing this recently with them they said, "No, we mean a
> 25% decrease in clock-to-out  with respect to the maximum."
>
> So, now I'm confused, again.
>
> Bob

Well, anyone stating that the delay would only vary between 75% and
100% of the maximum over process, temperature and voltage would make me
suspicious... From my previous work with ASICs I recall a huge spectrum
of delay from min to max, so 25%-100% seems much more plausible to me.
That also seems to be the conclusion of the other post I found, and it
fits my original "gut feeling" guess, so I believe I will stick with
that.

/Lars


Article: 114144
Subject: Re: ISE 8.2sp3 clobbering source file timestamps?
From: "Brian Davis" <brimdavis@aol.com>
Date: 5 Jan 2007 08:04:57 -0800
Lars wrote:
>
> in 8.1, you get the option to display source file paths as absolute or relative.
> That sure looks fine, but internally all files are referenced with absolute paths!
> So copying a project will result in the copy still referring to all the original source files.
>
 Yes, those absolute paths are a pain!
 IIRC, there are similar problems with coregen project paths.

 There's a description of the changes made to relative/absolute path
behavior for project files in Answer Record 23415.

  If I'm reading #23415 correctly:

   - 8.2 always stores absolute paths, but if the project directory
     has changed it first looks for the files in the same relative
     location to the project file, then checks the complete absolute
     path second

   - 8.1sp03 checks the absolute path first when the project moves

 I'd expect that allowing relative paths to be stored in the
project file, as in the older versions of ISE, would be far
simpler and safer!!!

>
> In your case, I believe ISE 6.3 still had an ASCII project file,
> so making a copy and manually edit the source file references
> and then open that copy in 8.2 ought to do the trick!
>
 My 6.3 .npl file used relative paths, and it imported the sources
into the project OK when I moved the .npl to a new build directory;
but ISE touched all the file timestamps in the process.

Brian


Article: 114145
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: "John_H" <newsgroup@johnhandwork.com>
Date: Fri, 5 Jan 2007 08:17:03 -0800
"Lars" <larthe@gmail.com> wrote in message 
news:1168012563.296810.141590@q40g2000cwq.googlegroups.com...
>
<snip>
>
> Well, anyone stating that the delay would only vary between 75% and
> 100% of the maximum over process, temperature and voltage would get me
> suspicious... From my previous work with ASICs i recall a huge spectrum
> of delay from min to max, so 25%-100% seems much more plausible to me.
> That also seems to be the conclusion of the other post i found, and
> fits my original "gut feeling" guess, so I beleive I will stick with
> that.
>
> /Lars

Additionally, the 25% quoted value has been offered as 25% of the fastest 
speed grade, not necessarily the speed grade used.  I've seen 40% used 
elsewhere for other things.

Also, for some families - and I think the Spartan-3E got this status 
within the last several months - there's an option in Timing Analyzer 
for a speed grade of "minimum", which will give the guaranteed minimum 
propagation 
delay values for the device.  I'd suggest running Timing Analyzer on your 
design with the "minimum" speed grade option to see if the numbers come up 
with the appropriate values.  From here you could decide if the DCM fixed 
phase shift is warranted.

- John_H 



Article: 114146
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: nico@puntnl.niks (Nico Coesel)
Date: Fri, 05 Jan 2007 17:19:14 GMT
"Lars" <larthe@gmail.com> wrote:

>I've been looking all over but have been unable to find any numbers for
>minimum clock-to-output propagation delay for Spartan3E. Anyone have a
>clue as to what can be expected. The design is very straight forward: A
>clock inut to a GCLK pin, a BUFG clock driver clocking an IO-block data
>and output enable flip-flop. Maximum clock-to-output delay is reported
>at some 6.8 ns. This matches fine the data in the Spartan3E data sheet
>tables 85, 90 and 93 (5.51 ns + 0.43 ns + 0.70 ns for LVTTL clock input
>and LVTTL 12 mA SLOW output).
>
>But how can I find the minimum delay to satisfy the external component
>hold time requirement of 1.5 ns? My reasoning that the fetch loop
>clock-pad -> clock driver -> output ff -> output pad "ought" to be more
>than 1.5 ns under all conditions, but it would shure feel better if I
>could have it in writing!

I have been looking for this also. Best thing to do is use the timing
analyzer on an existing design. The minimum clock to output delay is
used to calculate (IIRC) the hold timing.

-- 
Reply to nico@nctdevpuntnl (punt=.)
Bedrijven en winkels vindt U op www.adresboekje.nl

Article: 114147
Subject: Re: DC timing violation, what to do first?
From: "Mike Lewis" <someone@micrsoft.com>
Date: Fri, 5 Jan 2007 12:58:53 -0500
The tool doesn't know about multi-cycle or false paths... you have to
tell it through your constraints.

Mike

"Davy" <zhushenli@gmail.com> wrote in message 
news:1167965760.337415.269270@i15g2000cwa.googlegroups.com...
Hi Mike,

Thanks a lot!
I am also confused by your explanation. What causes the tool to think
there is a false path or multi-cycle path? IMHO, aren't the tools the
first thing to believe?

Best regards,
Shenli

Mike Lewis wrote:
> A false path is not a real path ... the tool does not know what are real
> paths and what are not .. it just reports the longest path.
>
> A multi-cycle path is a path than can take more than one clock. The design
> may be such that a signal is not going to be used for a few clocks after 
> it
> comes out of a FF. You can then tell the tool that this signal has more 
> time
> to propagate using a multi-cycle constraint.
>
> Mike
>
>
>
> "Davy" <zhushenli@gmail.com> wrote in message
> news:1167909771.705282.273860@11g2000cwr.googlegroups.com...
> Hi Jerome,
>
> Thanks a lot!
> Can you tell me what's false path and multicycle path mean?
>
> Best regards,
> Shenli
>
> Jerome wrote:
> > Hi,
> >
> > i will just describe what i do:
> >
> > First, try to understand the violation:
> >   is this a real one?
> >     if not, add either a false path or a multicycle path or redefine
> > the timing definition in order to remove this false violation
> > if yes, change the RTL
> > regards.
> >
> >
> > Davy wrote:
> > > Hi all,
> > >
> > > I am new to Synopsys DC. And I have a basic problem. When I find 
> > > timing
> > > violation in DC report, what shall I do first?
> > >
> > > 1. Shall I change the script of DC? To let the tools do something like
> > > retiming?
> > > 2. Shall I change the RTL code? To pipeline the comb logic manually?
> > > 3. Other choice, please recommend.
> > >
> > > What circumstance to do "1" or "2" or "1 and 2 at the same time"?
> > >
> > > Best regards,
> > > Shenli
> > >



Article: 114148
Subject: Re: Virtex 4 FIFO question
From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
Date: Fri, 5 Jan 2007 10:08:41 -0800
That's not my recollection.  It seems to me as soon as you do your
first write enable, data will be available at the output some
clock cycles later. It's better to think of the RD_EN as a GET_NEXT
signal, rather than a READ.

Brad Smallridge
AiVision

"John" <null@null.com> wrote in message news:eea1578.-1@webx.sUN8CHnE...
> The read and write signals have to be set one clock cycle before the 
> action right? If I do the following, will it read the first FIFO byte?
>
> ... RD_EN <= '1'; TEST <= FIFO_OUT; ... 



Article: 114149
Subject: Re: Spartan3E minimum clock-to-output (hold time)
From: "Peter Alfke" <peter@xilinx.com>
Date: 5 Jan 2007 10:57:52 -0800
The accepted definition is: 25% of the max value as it is specified for
the fastest speed grade.
(If the max delay for the fastest speed grade is 4 ns, then the min
value of that parameter for all speed grades is "no shorter than
1.0 ns".)
Assuming min to be "75% of max" is utter nonsense.
Why the wide range?
It covers temperature changes, Vcc changes, and processing variations,
and also testing guardbands. All of this in the worst-case direction.
Also, while max delays are tested, min delays are usually not testable,
and must be "guaranteed by design, or by characterization".
Luckily, a synchronous design is not concerned about min delays inside
the chip.
Memory interfaces often are sensitive, and ask for careful and creative
design methodologies.
Peter Alfke, Xilinx Applications


On Jan 5, 9:19 am, n...@puntnl.niks (Nico Coesel) wrote:
> "Lars" <lar...@gmail.com> wrote:
> >I've been looking all over but have been unable to find any numbers for
> >minimum clock-to-output propagation delay for Spartan3E. Anyone have a
> >clue as to what can be expected. The design is very straight forward: A
> >clock inut to a GCLK pin, a BUFG clock driver clocking an IO-block data
> >and output enable flip-flop. Maximum clock-to-output delay is reported
> >at some 6.8 ns. This matches fine the data in the Spartan3E data sheet
> >tables 85, 90 and 93 (5.51 ns + 0.43 ns + 0.70 ns for LVTTL clock input
> >and LVTTL 12 mA SLOW output).
>
> >But how can I find the minimum delay to satisfy the external component
> >hold time requirement of 1.5 ns? My reasoning that the fetch loop
> >clock-pad -> clock driver -> output ff -> output pad "ought" to be more
> >than 1.5 ns under all conditions, but it would shure feel better if I
> could have it in writing!
>
> I have been looking for this also. Best thing to do is use the timing
> analyzer on an existing design. The minimum clock to output delay is
> used to calculate (IIRC) the hold timing.
>
> --
> Reply to nico@nctdevpuntnl (punt=.)
> Bedrijven en winkels vindt U op www.adresboekje.nl



