Messages from 119225

Article: 119225
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Sean Durkin <news_may07@durkin.de>
Date: Tue, 15 May 2007 15:49:20 +0200
Links: << >>  << T >>  << A >>
Jonathan Bromley wrote:
> Is there a memory expert in the house? :-)
Not exactly an expert, but I'm trying to figure out the same thing at
the moment.

This is what Micron is saying on the subject on their website
(http://www.micron.com/support/designsupport/faq/ddr2):

"Q: Will the device run at a slow clock (well under the slowest data
sheet speed)?
A:  For a READ operation, the DRAM edge-aligns the strobe(s) with the
data. Most controllers sense the strobe to determine where the data
window is positioned. This fine strobe/data alignment requires that each
DRAM have an internal DLL. The DLL is tuned to operate for a finite
frequency range, which is identified in each DRAM data sheet. Running
the DRAM outside these specified limits may cause the DLL to become
unpredictable.

The DRAM is tested to operate within the data sheet limits. Micron does
not suggest or guarantee DRAM operation outside these predefined limits."

So the problem is the DLL inside the DRAM chips. It depends on how they
implemented their DLL. If you have slow clock speeds, you might not care
about the phase relationship, so you could just turn off the DLL. That's
covered in the FAQ as well:

"Q: Should the DLL be disabled?
A: Although in some cases the DRAM may work with the DLL off, this mode
of operation is not documented nor supported by JEDEC. Therefore, each
DRAM design may behave differently when configured to run with the DLL
disabled. Micron does not support or guarantee operation with the DLL
disabled. Running the DRAM with the DLL disabled may cause the device to
malfunction and/or violate some DRAM output timing specifications."

So: It might work, or it might not...

I've had *VERY* bad experiences with using DRAM chips out of spec.
Samsung is especially annoying about this. You might run the chips a
tiny little bit out of spec for some reason, and everything works just
fine for years. Then suddenly a new die revision appears, and from one
shipment to another your design stops working. And you can't even blame
them, because it was never specified for what you were doing.

Or they just change the spec a little bit. Last year I had to fix a
6-year old design because Samsung decided to drop support for full page
burst reads from their SDR SDRAM chips in the latest die revision. So
the latest shipment of the product simply didn't work anymore.
Took me a while to find a copy of ISE4 to even get that old design to
synthesize...

So if you're designing a product that might be around for a while, I
strongly suggest sticking *very* closely to all the DRAM specs. Or test
with *one* type of chip and buy a whole bunch of those to support the
product's entire lifetime.

-- 
My email address is only valid until the end of the month.
Try figuring out what the address is going to be after that...

Article: 119226
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Tim <tim@nooospam.roockyloogic.com>
Date: Tue, 15 May 2007 14:49:31 +0100
Links: << >>  << T >>  << A >>
Sean Durkin wrote:
> 125MHz is the lowest specified clock rate for DDR2 SDRAM, not 100MHz.
> 
> The problem is that inside the DRAM chip there is a DLL, that makes sure
> the data is output edge aligned with the data strobe. This DLL only
> works for a specific frequency range, usually down to 125MHz. In the
> JEDEC spec it is specified that the DLL must work down to 125MHz, but
> does not need to work for frequencies below that.

There are readers of c.a.f who have designed a DLL; I'm not one of them.

But if the problem is the limited length of the DLL delay chain, could 
one run the memory flat out for a while until it is hot and everything 
slows down?

--
Tim

Article: 119227
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: johnp <johnp3+nospam@probo.com>
Date: 15 May 2007 06:50:20 -0700
Links: << >>  << T >>  << A >>
On May 15, 6:28 am, Sean Durkin <news_ma...@durkin.de> wrote:
> Amit wrote:
> > Hello,
>
> > I have a DDR2 Controller ASIC rtl, which i need to put on FPGA and
> > validate it. The problem is, i am not able to get this controller run
> > on more than 50-60 Mhz on Virtex4 FPGA. Now, as everyone says minimum
> > clock frequency for DDR2 devices is 100Mhz, i am simply not able to
> > get DDR2 interface working on FPGA.
>
> > I want to know if the DDR2 Devices can work on the clock frequencies
> > which are much lower than 100Mhz. Has anyone tried it and got any
> > success?
>
> 125MHz is the lowest specified clock rate for DDR2 SDRAM, not 100MHz.
>
> The problem is that inside the DRAM chip there is a DLL, that makes sure
> the data is output edge aligned with the data strobe. This DLL only
> works for a specific frequency range, usually down to 125MHz. In the
> JEDEC spec it is specified that the DLL must work down to 125MHz, but
> does not need to work for frequencies below that.
>
> So basically this means that maybe it works, maybe it doesn't. You could
> be lucky and have DRAM chips that support it, but you can't count on it.
> It might work with one shipment of chips, but might not with another. It
> may vary from manufacturer to manufacturer, and from die revision to die
> revision. So even though technically slower clock speeds should be
> possible, this is just something that is out of spec, and even if it
> happens to work the functionality might go away at any time.
>
> So in your case if you try it and it doesn't work, you can never be sure
> if the problem is with the IP core or the DRAM chip...
>
> --
> My email address is only valid until the end of the month.
> Try figuring out what the address is going to be after that...


I haven't looked at DDR2 in a while, but I seem to recall there is a
way to turn off the DLL if you are willing to run at a much lower
frequency than would normally be used.  I don't know if I've ever seen
what "much lower" actually is, but the 50MHz might be in that range.

John Providenza


Article: 119228
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Antti <Antti.Lukats@googlemail.com>
Date: 15 May 2007 06:57:52 -0700
Links: << >>  << T >>  << A >>
On 15 Mai, 15:50, johnp <johnp3+nos...@probo.com> wrote:
> On May 15, 6:28 am, Sean Durkin <news_ma...@durkin.de> wrote:
>
>
>
>
>
> > Amit wrote:
> > > Hello,
>
> > > I have a DDR2 Controller ASIC rtl, which i need to put on FPGA and
> > > validate it. The problem is, i am not able to get this controller run
> > > on more than 50-60 Mhz on Virtex4 FPGA. Now, as everyone says minimum
> > > clock frequency for DDR2 devices is 100Mhz, i am simply not able to
> > > get DDR2 interface working on FPGA.
>
> > > I want to know if the DDR2 Devices can work on the clock frequencies
> > > which are much lower than 100Mhz. Has anyone tried it and got any
> > > success?
>
> > 125MHz is the lowest specified clock rate for DDR2 SDRAM, not 100MHz.
>
> > The problem is that inside the DRAM chip there is a DLL, that makes sure
> > the data is output edge aligned with the data strobe. This DLL only
> > works for a specific frequency range, usually down to 125MHz. In the
> > JEDEC spec it is specified that the DLL must work down to 125MHz, but
> > does not need to work for frequencies below that.
>
> > So basically this means that maybe it works, maybe it doesn't. You could
> > be lucky and have DRAM chips that support it, but you can't count on it.
> > It might work with one shipment of chips, but might not with another. It
> > may vary from manufacturer to manufacturer, and from die revision to die
> > revision. So even though technically slower clock speeds should be
> > possible, this is just something that is out of spec, and even if it
> > happens to work the functionality might go away at any time.
>
> > So in your case if you try it and it doesn't work, you can never be sure
> > if the problem is with the IP core or the DRAM chip...
>
> > --
> > My email address is only valid until the end of the month.
> > Try figuring out what the address is going to be after that...
>
> I haven't looked at DDR2 in a while, but I seem to recall there is a
> way to turn off the DLL if you are willing to run at a much lower
> frequency than would normally be used.  I don't know if I've ever seen
> what "much lower" actually is, but the 50MHz might be in that range.
>
> John Providenza

if you turn the DLL off then you can't really validate the DDR2 IP-core
functionality under normal conditions
(DLL OFF is not a normal operating mode).
Thus, for the ASIC core validation, the test FPGA must
run at a clock rate the DDR2 supports.

Antti







Article: 119229
Subject: Re: reading IDCODE from parallel bus?
From: Gabor <gabor@alacron.com>
Date: 15 May 2007 06:57:59 -0700
Links: << >>  << T >>  << A >>
On May 15, 6:07 am, "Morten Leikvoll" <mleik...@yahoo.nospam> wrote:
> Is there any way to read IDCODE (and execute other jtag commands) using the
> parallel config bus?
> I can't find any information on this, mostly because of polluted results.
>
> Thanks


You didn't mention which chip you are using, but I would
be surprised if this function is available.  Generally JTAG
can access the configuration logic of the FPGA device,
but not the other way around.  What are you trying to do,
and why not just use the JTAG pins to do it?


Article: 119230
Subject: Re: reading IDCODE from parallel bus?
From: Antti <Antti.Lukats@googlemail.com>
Date: 15 May 2007 07:02:51 -0700
Links: << >>  << T >>  << A >>
On 15 Mai, 15:57, Gabor <g...@alacron.com> wrote:
> On May 15, 6:07 am, "Morten Leikvoll" <mleik...@yahoo.nospam> wrote:
>
> > Is there any way to read IDCODE (and execute other jtag commands) using the
> > parallel config bus?
> > I can't find any information on this, mostly because of polluted results.
>
> > Thanks
>
> You didn't mention which chip you are using, but I would
> be surprised if this function is available.  Generally JTAG
> can access the configuration logic of the FPGA device,
> but not the other way around.  What are you trying to do,
> and why not just use the JTAG pins to do it?

Actually the function is available via Xilinx slave SelectMAP:
you can read and write configuration frames over the parallel interface.
It's not the same as JTAG, and you can't issue arbitrary JTAG commands,
but the configuration registers are accessible. Just look in the Xilinx
documentation for more information.

Antti


Article: 119231
Subject: Re: coregen -> simulation error in modelsim
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Tue, 15 May 2007 15:14:00 +0100
Links: << >>  << T >>  << A >>
On 15 May 2007 04:06:17 -0700, kislo <kislo02@student.sdu.dk> wrote:

>When i try to simulate a coregen generated single port ram, i get a
>error from modelsim :
>
># -- Loading package blkmemsp_pkg_v6_2
># -- Loading entity blkmemsp_v6_2
># ** Error: ram.vhd(112): Internal error: ../../../src/vcom/
>genexpr.c(5483)
># ** Error: ram.vhd(112): VHDL Compiler exiting

>from google search i found a guy having the same problem with another
>coregen component:
>http://www.mikrocontroller.net/topic/68567
>he says:
>
>"jedenfalls war das problem, dass die generics nur im mapping
>aufgeführt
>waren und nicht im deklarationsteil der architecture des von coregen
>generierten files. das sollte, nein das muss man von hand ändern und
>alles ist gut :o)"
>
>what is it exatly i am supposed to do to get it to work ?

Not exactly but approximately...

"In every case the problem was, that the generics were only <expressed?>
in the mapping, and not in the declaration part of the architecture in
the files generated by Coregen. That should be, no must be, altered by
hand, and all is well"

So look for missing generics in the Coregen wrapper files, as a starting
point. If there are discrepancies between them and the mapping
(component instantiation?), fix by hand.
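
For illustration only (the generic and port names below are invented, they
are not the real blkmemsp_v6_2 interface): the point is that the component
declaration in the generated wrapper must list the same generics that appear
in the generic map of the instantiation. A fixed wrapper would look roughly
like this:

  library ieee;
  use ieee.std_logic_1164.all;

  entity ram_wrapper is
    port (
      clk  : in  std_logic;
      we   : in  std_logic;
      addr : in  std_logic_vector(9 downto 0);
      din  : in  std_logic_vector(7 downto 0);
      dout : out std_logic_vector(7 downto 0)
    );
  end entity ram_wrapper;

  architecture xilinx of ram_wrapper is
    -- The generics are declared here as well as being passed in the
    -- generic map below; older Coregen output sometimes only had them
    -- in the generic map, which is what trips up the ModelSim compiler.
    component my_blockram   -- invented name, stands in for blkmemsp_v6_2
      generic (
        c_width : integer := 8;
        c_depth : integer := 1024
      );
      port (
        clk  : in  std_logic;
        we   : in  std_logic;
        addr : in  std_logic_vector(9 downto 0);
        din  : in  std_logic_vector(c_width-1 downto 0);
        dout : out std_logic_vector(c_width-1 downto 0)
      );
    end component;
  begin
    u_ram : my_blockram
      generic map (c_width => 8, c_depth => 1024)
      port map (clk => clk, we => we, addr => addr, din => din, dout => dout);
  end architecture xilinx;

So the fix is literally to copy whatever generics appear in the generic map
up into the component declaration of the generated file.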

- Brian


Article: 119232
Subject: Re: Xilinx EDK: Slow OPB write speeds
From: Antti <Antti.Lukats@googlemail.com>
Date: 15 May 2007 07:15:04 -0700
Links: << >>  << T >>  << A >>
On 15 Mai, 15:04, Andrew Greensted <ajg...@ohm.york.ac.uk> wrote:
> It struck me that part of the speed problem with the PPC based system
> was having the main system memory on the PLB bus. By using the OCM
> interface for data and instructions things got slightly faster:
>
> Virtex2Pro + PPC
> cpu Freq: 100Mhz, bus Freq: 100Mhz
> memory write freq about 2.941MHz
> 1 write / 340ns
>
> However, this still seems very slow for a 100MHz bus.
>
> Andy

The Xilinx memory IP core's random access delay may easily be up to 20
clock cycles. On a 100MHz bus that would be 5MHz; you are seeing
2.9MHz (340ns is about 34 bus clocks), which is a bit lower, but add
some GCC compiler overhead and we come down to the speed you are
seeing.

It all depends on how efficient the memory controller and bus
arbitration really are. In some cases the memory performance can be
rather low.

Antti


Article: 119233
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Sean Durkin <news_may07@durkin.de>
Date: Tue, 15 May 2007 16:17:57 +0200
Links: << >>  << T >>  << A >>
johnp wrote:
> I haven't looked at DDR2 in a while, but I seem to recall there is a
> way to turn off the DLL if you are willing to run at a much lower
> frequency than would normally be used.  I don't know if I've ever seen
> what "much lower" actually is, but the 50MHz might be in that range.
Yes, you can turn it off, it's just a bit in the mode register. But, as
I said in another message in this thread, this mode is not officially
supported by any of the DRAM vendors. So maybe it works, maybe it doesn't.
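
For reference, and from memory rather than from a data sheet (so double-check
against the JEDEC spec and your vendor's documentation): the bit in question
sits in Extended Mode Register EMR(1) and is driven on the address bus during
the load-mode (EMRS) command. A minimal VHDL sketch, with invented names and
every other EMR(1) field simply left at zero:

  library ieee;
  use ieee.std_logic_1164.all;

  package ddr2_emr_sketch is
    -- Value driven on A[12:0] during the EMRS(1) load-mode command.
    -- A0 = DLL enable: '0' = DLL enabled (normal), '1' = DLL disabled.
    -- All other fields (ODT, drive strength, additive latency, OCD, ...)
    -- are left at '0' here, which is almost certainly not what a real
    -- init sequence wants.
    constant EMR1_DLL_ENABLED  : std_logic_vector(12 downto 0) :=
      (others => '0');
    constant EMR1_DLL_DISABLED : std_logic_vector(12 downto 0) :=
      (0 => '1', others => '0');
  end package ddr2_emr_sketch;

And even with that bit set, as the Micron FAQ quoted earlier in the thread
says, nobody guarantees DLL-off operation.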

In any case it means running the DRAM out of spec, and that's not really
a good way to validate a DRAM controller core, as Antti said.

-- 
My email address is only valid until the end of the month.
Try figuring out what the address is going to be after that...

Article: 119234
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Tue, 15 May 2007 15:22:32 +0100
Links: << >>  << T >>  << A >>
On 15 May 2007 04:34:28 -0700, Amit <amit3007@gmail.com> wrote:

>Hello,
>
>I have a DDR2 Controller ASIC rtl, which i need to put on FPGA and
>validate it. The problem is, i am not able to get this controller run
>on more than 50-60 Mhz on Virtex4 FPGA. Now, as everyone says minimum
>clock frequency for DDR2 devices is 100Mhz, i am simply not able to
>get DDR2 interface working on FPGA.
>
>I want to know if the DDR2 Devices can work on the clock frequencies
>which are much lower than 100Mhz. Has anyone tried it and got any
>success?
>
>please put your insights.

Look for a thread here a few months ago about DDR or DDR2 "DLL
frequency"; there was some discussion;  apparently there is a DLL (for
tracking strobe timings) in the DDR[2] memory which runs out of
adjustment range at low frequencies. 

It is apparently possible (by changing the mode register initialisation
parameters) to turn the DLL OFF, and run the memory without it. 

In which case you presumably have to take more care of the strobe
timings yourself, (e.g. provide phase shift adjustment in the DCM) but
if you can live with this, it may let you work outside the normal
frequency range.

And no, I haven't actually tried it...
- Brian


Article: 119235
Subject: Re: Power Consumption Estimation for PCI card, any advice?
From: austin <austin@xilinx.com>
Date: Tue, 15 May 2007 07:50:44 -0700
Links: << >>  << T >>  << A >>
Paul,

Yawn.

I am not playing your game.  OK?  You already lost.  No 65nm for ONE
YEAR.  Now that you are (?about) to roll out S3, we see it has triple
oxide, and all of the neat things we did to save power (plus a few more
of dubious value).

And, with all of that?  It is certainly better than S2 (funny how to
sell S3 you have to admit how S2 was so bad).

But, it is not better than V5 (at least, one can't make that claim until
you actually have one to test, and you know what your process variations
are, and you are ONE YEAR behind, and counting!).

And, did I mention that you are ONE YEAR behind in 65nm?

Thanks, we really appreciate having had no competition for a year (and
counting).

Keep up the good work.

I apologize to the newsgroup, but even when faced with the facts, I hope
it is not lost on anyone how the story on S2 changed suddenly from
"great low power, excellent estimator" to "it is twice as bad as the S3,
because it doesn't have triple oxide, etc."

Oh, and the constant boring claims that our estimator is somehow flawed,
and theirs is so much better.  Better at what?

Austin

Paul Leventis wrote:
> Hi Austin,
> 
> I'm sure the readers of this newsgroup are shocked to hear that
> companies try to draw attention to their own strengths and point out
> competitor's weaknesses :-). Yes, we focus a lot on power, power
> analysis, and power estimation for the reason you suggest -- we
> believe we have clear advantages in this area, and we think power is
> an important criteria of our customers when selecting devices.
> 
>> draw attention away from the areas where they do not
>> excel, and into an area where no one can prove anything!
> 
> Open up your Virtex-5 estimator.  Type in 20000 FFs.  Set the toggle
> rate to 10% and clock to 200 Mhz.  Then change the toggle rate to
> 20%.  No change in power despite a change in switching activity?  For
> that matter, where is the clock tree power?  I guess bugs are expected
> in software, but these are fairly egregious omissions.  Perhaps the
> tool is accurate for designs without FFs and clocks, but somehow I
> don't think that's a lot of your target market ;-)
> 
>> So, is Altera's power estimator that accurate?
> 
> Yes.  All your deflections aside, I have yet to see anything from you
> to refute the accuracy of our estimators.
> 
>> I would give the estimate a 20% bump for what it might actually be in
>> practice.  Any one unit will be under the estimate.  Only a fast corner
>> processed part which is shipped to fill the order will come in at the
>> high end of the estimate.  Since you can not only order "typical" parts,
>> the additional margin is absolutely necessary.
> 
> This is a fairly confusing statement.  I'm not sure whether you are
> talking about dynamic or static power or both.
> 
> If the power tool a customer is using does not provide "Maximum"
> silicon characteristics, then yes, they need to bump up the *static
> power* portion of the estimate.  How much depends on many factors, but
> I believe Austin's own advice in the past has been 2X from typical to
> maximum, and in the absence of any other information, 2X is actually not
> a bad guess.  However, if the tool they are using has maximum or worst-
> case specs available, then there is no need to guardband -- the
> estimates already reflect the fastest they will receive.  Of course,
> the junction temperature must be representative of the hottest
> conditions the chip will be operating in, since temperature also has a
> large impact on static power consumption.
> 
> When it comes to dynamic power, process variation does not have a
> strong impact.  When you select Maximum characteristics in the EPE/
> Quartus, we do adjust dynamic power slightly to account for worst-case
> dynamic power we see on corner units.  But this variation is small (a
> few percent).  Metal cap can vary, but it varies independently between
> metal layers (resulting in an averaging effect).  Faster transistors
> have very little impact on dynamic power, since the capacitance that
> needs to be charged is still the same, and the short-circuit current
> becomes shorter (transistor switches faster) but more intense
> (transistor pull current faster), resulting in little change in
> dynamic power for most circuits.
> 
>> We also
>> demonstrated how the same design in both chips led to a 15 to 20 degree
>> C power savings in V4.
> 
> I am not familiar with the details of this particular design.  Is it
> the same one you were showing customers that had some of our I/Os
> shorted to power?  That was a nifty trick.  Very dramatic demo.
> 
> Regardless, there will always be designs that work well on one chip
> vs. the other.  We also have demonstrated a number of designs to
> customers and in various NetSeminars.  Short of making our HDL &
> designs open to one another for critique, I doubt we will ever get
> agreement (or complete buy-in from our audiences) on our dynamic power
> demonstrations.
> 
> But what is more important in these demos is how good is estimator
> accuracy?  At end of the day, its not only important which device
> consumes lower power (I think ours do, you think yours do), but can
> the customer figure out what that power will be early in the design
> cycle?  Can they measure it or profile it accurately during design?
> Can they optimize it with the push of a button in the CAD tools as
> they can with Quartus?
> 
> Paul
> 

Article: 119236
Subject: Re: Timing constraint question
From: Dima <Dmitriy.Bekker@gmail.com>
Date: 15 May 2007 08:14:28 -0700
Links: << >>  << T >>  << A >>
Veeresh,

> I suppose you have two outputs  a clk out, and one more signal i.e,
> control signal. And the edge on control signal has to be kept at a
> time gap from rising edge of the clock. If control signal is generated
> w.r.t same clock, clk to output delay constraint can be used.
> Otherwise sample this signal again with the same clock, and use clk to
> o/p delay constraint.

Yes, that is what my circuit looks like. Can you show me how I would
specify this constraint? I googled it but didn't find particular
information on it.
Right now I have this in my UCF. I constrained the latch delay to 4.9
ns, just a bit under 5 ns clock (200 MHz).

######
Net "*/my_core/CLK" TNM_NET = dp_clk;
TIMEGRP "RISING_DP_CLK" = RISING dp_clk;
TIMEGRP "DP_LATCH" = LATCHES("*/my_core/my_core/accumulate<*>");
TIMESPEC TS_DP_LATCH = FROM "RISING_DP_CLK" TO "DP_LATCH" 4.9 ns
DATAPATHONLY;
######

Thanks

Dmitriy


Article: 119237
Subject: Re: Xilinx EDK: Slow OPB write speeds
From: Andrew Greensted <ajg112@ohm.york.ac.uk>
Date: Tue, 15 May 2007 16:19:01 +0100
Links: << >>  << T >>  << A >>
Antti wrote:

> it all depends how efficient the memory controller and bus arbitration
> really is.
> in some cases the memory performance can be rather low.
> 
> Antti
> 

Antti, thanks for the input. I realise this is a bit difficult to answer
without some real specific peripheral info, but can you suggest a faster
method of interfacing a peripheral?

I guess PLB is faster, but this will limit it to PPC use, or microblaze
via a bus bridge, but I guess that will be slower still.

Is some kind of DMA approach the only way to improve transfer rates?

Thanks
Andy

Article: 119238
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: "ALuPin@web.de" <ALuPin@web.de>
Date: 15 May 2007 08:19:03 -0700
Links: << >>  << T >>  << A >>
Theoretically you can change a bit in the mode register to turn
off the DLL.

But who guarantees that the mode register
accesses will be successful when running the DDR2
memory at speeds lower than 125 MHz?

Did you find out what is holding your Fmax down
in your FPGA?


Rgds
Andre


Article: 119239
Subject: Re: Power Consumption Estimation for PCI card, any advice?
From: "John_H" <newsgroup@johnhandwork.com>
Date: Tue, 15 May 2007 08:38:40 -0700
Links: << >>  << T >>  << A >>
Yawn.


"austin" <austin@xilinx.com> wrote in message 
news:f2chc4$ba41@cnn.xsj.xilinx.com...
> Paul,
>
> Yawn.
>
> I am not playing your game.  OK?  You already lost.  No 65nm for ONE
> YEAR.  Now that you are (?about) to roll out S3, we see it has triple
> oxide, and all of the neat things we did to save power (plus a few more
> of dubious value).
>
> And, with all of that?  It is certainly better than S2 (funny how to
> sell S3 you have to admit how S2 was so bad).
>
> But, it is not better than V5 (at least, one can't make that claim until
> you actually have one to test, and you know what your process variations
> are, and you are ONE YEAR behind, and counting!).
>
> And, did I mention that you are ONE YEAR behind in 65nm?
>
> Thanks, we really appreciate having had no competition for a year (and
> counting).
>
> Keep up the good work.
>
> I apologize to the newsgroup, but even when faced with the facts, I hope
> it is not lost on anyone how the story on S2 changed suddenly from
> "great low power, excellent estimator" to "it is twice as bad as the S3,
> because it doesn't have triple oxide, etc."
>
> Oh, and the constant boring claims that our estimator is somehow flawed,
> and theirs is so much better.  Better at what?
>
> Austin
>
> Paul Leventis wrote:
>> Hi Austin,
>>
>> I'm sure the readers of this newsgroup are shocked to hear that
>> companies try to draw attention to their own strengths and point out
>> competitor's weaknesses :-). Yes, we focus a lot on power, power
>> analysis, and power estimation for the reason you suggest -- we
>> believe we have clear advantages in this area, and we think power is
>> an important criteria of our customers when selecting devices.
>>
>>> draw attention away from the areas where they do not
>>> excel, and into an area where no one can prove anything!
>>
>> Open up your Virtex-5 estimator.  Type in 20000 FFs.  Set the toggle
>> rate to 10% and clock to 200 Mhz.  Then change the toggle rate to
>> 20%.  No change in power despite a change in switching activity?  For
>> that matter, where is the clock tree power?  I guess bugs are expected
>> in software, but these are fairly egregious omissions.  Perhaps the
>> tool is accurate for designs without FFs and clocks, but somehow I
>> don't think that's a lot of your target market ;-)
>>
>>> So, is Altera's power estimator that accurate?
>>
>> Yes.  All your deflections aside, I have yet to see anything from you
>> to refute the accuracy of our estimators.
>>
>>> I would give the estimate a 20% bump for what it might actually be in
>>> practice.  Any one unit will be under the estimate.  Only a fast corner
>>> processed part which is shipped to fill the order will come in at the
>>> high end of the estimate.  Since you can not only order "typical" parts,
>>> the additional margin is absolutely necessary.
>>
>> This is a fairly confusing statement.  I'm not sure whether you are
>> talking about dynamic or static power or both.
>>
>> If the power tool a customer is using does not provide "Maximum"
>> silicon characteristics, then yes, they need to bump up the *static
>> power* portion of the estimate.  How much depends on many factors, but
>> I believe Austin's own advice in the past has been 2X from typical to
>> maximum, and in the absence of any other information, 2X is actually not
>> a bad guess.  However, if the tool they are using has maximum or worst-
>> case specs available, then there is no need to guardband -- the
>> estimates already reflect the fastest they will receive.  Of course,
>> the junction temperature must be representative of the hottest
>> conditions the chip will be operating in, since temperature also has a
>> large impact on static power consumption.
>>
>> When it comes to dynamic power, process variation does not have a
>> strong impact.  When you select Maximum characteristics in the EPE/
>> Quartus, we do adjust dynamic power slightly to account for worst-case
>> dynamic power we see on corner units.  But this variation is small (a
>> few percent).  Metal cap can vary, but it varies independently between
>> metal layers (resulting in an averaging effect).  Faster transistors
>> have very little impact on dynamic power, since the capacitance that
>> needs to be charged is still the same, and the short-circuit current
>> becomes shorter (transistor switches faster) but more intense
>> (transistor pull current faster), resulting in little change in
>> dynamic power for most circuits.
>>
>>> We also
>>> demonstrated how the same design in both chips led to a 15 to 20 degree
>>> C power savings in V4.
>>
>> I am not familiar with the details of this particular design.  Is it
>> the same one you were showing customers that had some of our I/Os
>> shorted to power?  That was a nifty trick.  Very dramatic demo.
>>
>> Regardless, there will always be designs that work well on one chip
>> vs. the other.  We also have demonstrated a number of designs to
>> customers and in various NetSeminars.  Short of making our HDL &
>> designs open to one another for critique, I doubt we will ever get
>> agreement (or complete buy-in from our audiences) on our dynamic power
>> demonstrations.
>>
>> But what is more important in these demos is how good is estimator
>> accuracy?  At end of the day, its not only important which device
>> consumes lower power (I think ours do, you think yours do), but can
>> the customer figure out what that power will be early in the design
>> cycle?  Can they measure it or profile it accurately during design?
>> Can they optimize it with the push of a button in the CAD tools as
>> they can with Quartus?
>>
>> Paul
>> 



Article: 119240
Subject: Re: Power Consumption Estimation for PCI card, any advice?
From: Paul Leventis <paul.leventis@gmail.com>
Date: 15 May 2007 08:51:24 -0700
Links: << >>  << T >>  << A >>
Hi Austin,

I'm sure everyone else in this newsgroup is tired of our endless
jabbing, but I'm not, so here I go...

> Now that you are (?about) to roll out S3, we see it has triple
> oxide, and all of the neat things we did to save power (plus a few more
> of dubious value).

Multiple gate oxides is one of many standard CMOS tricks that can be
used in circuits to improve power.  We intentionally did not use it at
90 nm for a variety of reasons.  At 65 nm, it made sense to use.  So
we used it.  I could spew some crap about "Wow, Xilinx finally figured
out how to use a low-k dielectric at 65 nm".  But I won't -- I'm sure
you had your reasons for not using it at 90 nm.

As for the "dubious value" techniques you refer to, I imagine you're
refering to our "Programable Power Technology" feature?  Being able to
trade-off power for performance on a fine-grained basis in a chip
seems pretty powerful to me, but what do I know.  Traditionally, the
way we (Altera, Xilinx) must design the FPGA is to pick spot on the
performance vs. static power trade-off curve.  By just playing with
the transistor threshold voltage, you have a knob that directly trades
off these two quantities.  At 90 nm, we each picked different points
on the curve -- Stratix II had somewhat higher static power (but lower
dynamic power), but also kicked butt on performance.  At 65 nm, Altera
decided to get off that curve.  Rather than picking between "slow and
low power" and "fast and higher power", we picked both.  Our customers
get increased performance only in those circuits that need it, and
really low static power everywhere else.

And is 1.0V operation the other feature of dubious value?  It seems
that giving our customers yet another big knob -- 1.2V vs. 1.0V -- to
control performance vs. static & dynamic power, we're providing a lot
of value.  I certainly hope we are, since it takes a lot of good
engineering to design a chip to operate well over a larger voltage
range.

> But, it is not better than V5 (at least, one can't make that claim until
> you actually have one to test, and you know what your process variations
> are, and you are ONE YEAR behind, and counting!).

"65 nm" doesn't define a chip.  Its features and performance do.  Does
Virtex-5 have DDR3 support?  Does it have high performance?  Great
power dissipation?  Superior logic density?  No.  So congratulations
-- you got a 65 nm chip out that was marginally better than your 90 nm
offering, before we could get out a 65 nm chip that will be
significantly better than what's out there.

> And, did I mention that you are ONE YEAR behind in 65nm?

Yes, you did.  How's your low-cost 65 nm offering coming along?
Anything out to compete with Cyclone III yet?  Actually, have anything
that comes close to Cyclone II in performance, power or cost yet?

> I apologize to the newsgroup, but even when faced with the facts, I hope
> it is not lost on anyone how the story on S2 changed suddenly from
> "great low power, excellent estimator" to "it is twice as bad as the S3,
> because it doesn't have triple oxide, etc."

First, you are trying to apply logic to marketing taglines -- often a
fruitless exercise.  Second, your logic abilities appear to be
severely underdeveloped.

A) "Stratix II has lower total power than Virtex-4.  Stratix II has
great power estimation."
B) "Stratix III is a kick-ass device.  Our new power features reduce
static power by over 50% from Stratix II at equivalent densities,
while improving performance by 25%."

How exactly does statement B) in any way modify or invalidate any
aspect of statement A)?

Remember, you moved from 90nm to 65nm, dropped the voltage, and yet
increased the static power relative to Virtex-4.  I'm sure you wish
you could be claiming Virtex-5 had 50% the static power of Virtex-4...

> Oh, and the constant boring claims that our estimator is somehow flawed,
> and theirs is so much better.  Better at what?

Better at predicting power; that's what we design ours for at least.

Have you tried out the FF example I have suggested twice?  Any answer
yet on why you don't count clock power anywhere?  My theory: the XPE
is as much a sales tool as an engineering tool.  If you can pretend
your chip doesn't have clock power, then customers will think your
chips have lower power than competitive offerings, and might just buy
Brand X as a result.

Cheers,

Paul Leventis
Altera Corp.


Article: 119241
Subject: Xilinx SD-RAM-Controller (Xilinx EDK 8.2)--problems with xil_printf reading from memory
From: rmeiche <rmeiche@gmx.de>
Date: 15 May 2007 09:16:10 -0700
Links: << >>  << T >>  << A >>
Hello,
I've some problems with reading from my sd-ram. On my FPGA is a Xilinx
Virtex 2 XC2V1000 chip and I want to use the ram for greater software-
applications. The first problem was, that the fpga has only 2 pins for
the SD-RAM DataMask but this was solved with a little glue logic. Now
if I write a little test-application into the RAM  (this "application"
only instances several pointer (for 8,16 an 32 Bit writing) assigning
them different values) the xil_printf function don't display the value
or the address of the pointer.
For example if it is defined like this: xil_printf("Value of pointer:
%d at address: %08X\r\n", *pointer, pointer);
I get the following output on my Terminal: Value of pointer:
address:
If I check the addresses with the debugger then it points out that the
software only writes 32Bit or 8Bit NOT 16Bit words.
But if I write 8,16 or 32Bit words with the debugger and then read
them everything functions very well.
I tried many things but didn't find a solution.
It would be nice if anyone could help me..

Thanks


Article: 119242
Subject: Re: Xilinx ISE 9.1 Simulator does not work with glibc 2.5
From: Thomas Feller <tmueller@rbg.informatik.tu-darmstadt.de>
Date: Tue, 15 May 2007 18:19:25 +0200
Links: << >>  << T >>  << A >>
Colin Paul Gloster wrote:
> Hello,
> 
[..]
> 
> I would suggest that if you are not using a compatible operating
> system for software you are trying to run, that you try using a
> compatible operating system. If you do not want to replace Gentoo, you
> could emulate a different operating system in something such as QEMU (
> WWW.QEMU.org ).
> 
> It is quite possible that the incompatibility you suffer is to do with
> GLibC 2.5 and that not all of ISE 9.1 is incompatible. I do not
> know. Do you have a reason to particularly suspect GLibC 2.5?
> 
> If, and this is a very big if, the only thing you need to add is X
> Windows compiled in a manner which is not incompatible, then you could
> compile X Windows with a suitable version of GLibC with crosstool (
> WWW.Kegel.com/crosstool/
> ) which is a very convenient tool which I have used when I basically
> needed to ridiculously run third party GLibC 2.3.x code with third
> party GLibC 2.2.y code on the same machine. Unfortunately for you, it
> might not be so simple as the third party closed source program from
> Xilinx which is calling X Windows quite possibly requires the same
> version of GLibC as what it requires X Windows to use.

I thought of the crosstool approach myself, but I was searching for a
much simpler and smaller (in terms of file size) solution, as I'm using a
laptop here. As you already know, space is limited on those tiny devices,
so the QEMU solution is not the way I wanted to go.

I'm quite sure that it is glibc, as someone on this list stated that he
successfully got it working on a Gentoo system with glibc 2.4.
It might not be glibc itself but some library linked against it, especially
the one needed for displaying the simulation window, which is the part
that isn't working.

Thanks for your answer
	Thomas

Article: 119243
Subject: Using dynamic reconfiguration ports of DCMs on Virtex 4
From: emotuk@gmail.com
Date: 15 May 2007 09:24:37 -0700
Links: << >>  << T >>  << A >>
Hello,

I've been reading the posts in this forum for a long time. This is my
first time asking a question here. I'm working on possible dynamic
reconfiguration of DCMs to change the PHASE_SHIFT on the fly. In the
Virtex-4 user guide it says that by using the DRP port, the
PHASE_SHIFT value can be changed when the PHASE_SHIFT mode is either
fixed, variable or direct. However, in the configuration documentation
it only mentions a way to change the PHASE_SHIFT value in direct mode.
Does anybody know if it is possible to change the PHASE_SHIFT value by
DRP in either fixed or variable modes?
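
For orientation only, here is roughly what bringing the DRP of a Virtex-4
DCM_ADV out to user logic looks like in VHDL. The generic values are
placeholders, the port list is written from memory, and which DRP registers
hold the phase-shift value in the different modes is exactly the open
question above, so treat this as a sketch rather than an answer (UG070 and
the libraries guide are the references to check):

  library ieee;
  use ieee.std_logic_1164.all;
  library unisim;
  use unisim.vcomponents.all;

  entity dcm_drp_example is
    port (
      clkin    : in  std_logic;
      rst      : in  std_logic;
      clk0_out : out std_logic;
      locked   : out std_logic;
      -- DRP interface, driven by user logic or a processor
      drp_clk  : in  std_logic;
      drp_addr : in  std_logic_vector(6 downto 0);
      drp_en   : in  std_logic;
      drp_we   : in  std_logic;
      drp_di   : in  std_logic_vector(15 downto 0);
      drp_do   : out std_logic_vector(15 downto 0);
      drp_rdy  : out std_logic
    );
  end entity dcm_drp_example;

  architecture rtl of dcm_drp_example is
    signal clk0_int, clk0_bufg : std_logic;
  begin
    bufg_i : BUFG port map (I => clk0_int, O => clk0_bufg);
    clk0_out <= clk0_bufg;

    dcm_i : DCM_ADV
      generic map (
        CLKIN_PERIOD       => 10.0,     -- placeholder: 100 MHz input
        CLKOUT_PHASE_SHIFT => "FIXED",  -- other modes exist, see UG070
        PHASE_SHIFT        => 0         -- initial value only
      )
      port map (
        CLKIN    => clkin,
        CLKFB    => clk0_bufg,
        RST      => rst,
        CLK0     => clk0_int,
        LOCKED   => locked,
        -- dynamic reconfiguration port: assert DEN (plus DWE for a
        -- write) for one DCLK cycle, then wait for DRDY
        DCLK     => drp_clk,
        DADDR    => drp_addr,
        DEN      => drp_en,
        DWE      => drp_we,
        DI       => drp_di,
        DO       => drp_do,
        DRDY     => drp_rdy,
        -- variable phase-shift port, unused in this sketch
        PSCLK    => '0',
        PSEN     => '0',
        PSINCDEC => '0'
      );
  end architecture rtl;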


Article: 119244
Subject: Re: Xilinx SD-RAM-Controller (Xilinx EDK 8.2)--problems with xil_printf reading from memory
From: Alan Nishioka <alan@nishioka.com>
Date: 15 May 2007 09:47:45 -0700
Links: << >>  << T >>  << A >>
On May 15, 9:16 am, rmeiche <rmei...@gmx.de> wrote:
> I've some problems with reading from my sd-ram. On my FPGA is a Xilinx
> Virtex 2 XC2V1000 chip and I want to use the ram for greater software-
> applications. The first problem was, that the fpga has only 2 pins for
> the SD-RAM DataMask but this was solved with a little glue logic.

You have a big problem.  You need all the data mask pins so you can
write only certain bytes in a word without changing the other bytes in
the word.


> If I check the addresses with the debugger then it points out that the
> software only writes 32Bit or 8Bit NOT 16Bit words.
> But if I write 8,16 or 32Bit words with the debugger and then read
> them everything functions very well.

I am guessing the debugger is doing a read-modify-write of the entire
word.  You could make the hardware also do this, but that would be
difficult.

You could also write software that only accesses 32 bit words.  I
think microblaze (You didn't say what processor) only accesses 32 bit
words.

Alan Nishioka


Article: 119245
Subject: ise project navigator can't dereference edk pcores from XilinxProcessorIPLib
From: "L. Schreiber" <l.s.rockfan@web.de>
Date: Tue, 15 May 2007 19:00:05 +0200
Links: << >>  << T >>  << A >>
First of all, thanks for the bus macro advice.


The second problem still remains. After creating a system with the Base
System Builder wizard from EDK's XPS and generating its netlist(s), I
wanted to add the VHDL files from the hdl directory to a new ISE
project. Unfortunately, ISE doesn't know anything about the imported
library modules that the "system" modules refer to. These library
modules can be found inside a subdirectory of the EDK installation
directory (.../edk/hw/XilinxProcessorIPLib/pcores).

How can I tell ISE where it should look for those "mysterious"
;-) unknown VHDL modules?

Can I set something like a PATH variable for such libraries?

I'm working under ISE and EDK version 7.

thx & bye

Article: 119246
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Tue, 15 May 2007 18:17:05 +0100
Links: << >>  << T >>  << A >>
On Tue, 15 May 2007 15:49:20 +0200, Sean 
Durkin <news_may07@durkin.de> wrote:

>This is what Micron is saying on the subject on their website
>(http://www.micron.com/support/designsupport/faq/ddr2):
>
>"Q: Will the device run at a slow clock (well under the slowest data
>sheet speed)?
>A:  For a READ operation, the DRAM edge-aligns the strobe(s) with the
>data. Most controllers sense the strobe to determine where the data
>window is positioned. This fine strobe/data alignment requires that each
>DRAM have an internal DLL. The DLL is tuned to operate for a finite
>frequency range, which is identified in each DRAM data sheet. Running
>the DRAM outside these specified limits may cause the DLL to become
>unpredictable.
>
>The DRAM is tested to operate within the data sheet limits. Micron does
>not suggest or guarantee DRAM operation outside these predefined limits."

Sounds like it might be possible to run the DDR2 controller
at (let's say) 50MHz, then put a rather thin wrapper around it
clocked at (let's say) 200MHz that could oversample the read
data and therefore guarantee to capture read data at a time
when it's known to be stable.  After all, there's no point in
trying to test *timing* of the DDR2 controller in the FPGA
prototype; it's only *functionality* that's being validated.
So an oversampled wrapper that made the timing irrelevant
might be useful.

Note that I'm NOT proposing that the SDRAM itself be clocked
faster than the FPGA.  Obviously you want the SDRAM clock to
be the same as the main clock for the DDR2 controller logic,
to ensure that all the pipelining and so forth works the 
same way as in the real thing.  I'm just talking about using
an oversampled clock to interpolate timings around the SDRAM
pins, so that the rest of the controller is immune to 
data skew weirdnesses introduced by the out-of-spec 
slow clock.

This might turn out to be a bad idea - someone needs to work
all the consequences through.

Tricky problem.
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 119247
Subject: Re: First MicroBlaze demo design for Spartan-3A Starterkit
From: Ben Popoola <ben.popoola@REMOVE.recontech.co.uk>
Date: Tue, 15 May 2007 18:09:13 GMT
Links: << >>  << T >>  << A >>
Antti wrote:
> On 9 Mai, 13:24, Brian Drummond <brian_drumm...@btconnect.com> wrote:
>> On Tue, 8 May 2007 08:37:46 -0700, "John_H" <newsgr...@johnhandwork.com>
>> wrote:
>>
>>> "Brian Drummond" <brian_drumm...@btconnect.com> wrote in message
>>> news:dot043d1oc23r5mjr4uco4j24h5n7nnqn5@4ax.com...
>>>> order, asking only a mere $75 for shipping, on what, a 1kg product?
>>>> We'll see how long it takes...
>>>> Hey Xilinx, I'd suggest there's a little room for improvement here.
>>>> - Brian
>>> If you sign up for X-Fest in Manchester, you might be offered a seminar
>>> discount for development boards such as the Spartan-3A.  Heck, the X-Fest I
>>> attended even gave one away.  There are a couple X-Fests in the UK,
>>> Manchester is just the first of the two at the end of May.
>> I wonder if you actually have to attend the X-Fest, or would signing up
>> qualify? Manchester would be about a 22 hour round trip, or about 10
>> hours if I fly.
>>
>> An expensive way to save $26 + import duty + shipping!
>>
>> - Brian
> 
> you only save 26, duty and shipping you have to pay anyway.
> but thats not the only special, so its possible to save more (if you
> spend more)
> 
> Antti
> 
> 
> 
> 
> 
I  have had my kit since early March and I am trying to keep a blog of 
the work I am doing here: http://bpopoola.blogspot.com/
Ben

Article: 119248
Subject: Re: coregen -> simulation error in modelsim
From: Newman <newman5382@yahoo.com>
Date: 15 May 2007 11:30:49 -0700
Links: << >>  << T >>  << A >>
On May 15, 10:14 am, Brian Drummond <brian_drumm...@btconnect.com>
wrote:
> On 15 May 2007 04:06:17 -0700, kislo <kisl...@student.sdu.dk> wrote:
>
>
>
>
>
> >When i try to simulate a coregen generated single port ram, i get a
> >error from modelsim :
>
> ># -- Loading package blkmemsp_pkg_v6_2
> ># -- Loading entity blkmemsp_v6_2
> ># ** Error: ram.vhd(112): Internal error: ../../../src/vcom/
> >genexpr.c(5483)
> ># ** Error: ram.vhd(112): VHDL Compiler exiting
> >from google search i found a guy having the same problem with another
> >coregen component:
> >http://www.mikrocontroller.net/topic/68567
> >he says:
>
> >"jedenfalls war das problem, dass die generics nur im mapping
> >aufgef=FChrt
> >waren und nicht im deklarationsteil der architecture des von coregen
> >generierten files. das sollte, nein das muss man von hand =E4ndern und
> >alles ist gut :o)"
>
> >what is it exatly i am supposed to do to get it to work ?
>
> Not exactly but approximately...
>
> "In every case the problem was, that the generics were only <expressed?>
> in the mapping, and not in the declaration part of the architecture in
> the files generated by Coregen. That should be, no must be, altered by
> hand, and all is well"
>
> So look for missing generics in the Coregen wrapper files, as a starting
> point. If there are discrepancies between them and the mapping
> (component instantiation?), fix by hand.
>
> - Brian

Check out Xilinx Answer Record # 24819 for more information.
http://www.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=24819

-Newman


Article: 119249
Subject: LF VHDL to FSM bubble diagram translator
From: "MM" <mbmsv@yahoo.com>
Date: Tue, 15 May 2007 14:56:38 -0400
Links: << >>  << T >>  << A >>
Hi all,

I was wondering if there is such a thing available? I found a few free and 
commercial tools to edit/create  bubble diagrams, which would then generate 
VHDL code, but I would like to be able to generate a FSM diagram from the 
code...

Thanks,
/Mikhail 




