Messages from 54225

Article: 54225
Subject: Re: Xilinx announces 90nm sampling today!
From: eternal_nan@yahoo.com (Ljubisa Bajic)
Date: 4 Apr 2003 19:27:42 -0800
Links: << >>  << T >>  << A >>
Hello Peter, 

First of all, I would like to say that my intention was not to insult
Austin, and if that was the effect of my posting, I apologize (and you
can pass my apology to him, or forward him this e-mail). After reading
my usenet posting again, I realized it does seem a little bit more
'poisonous' than I intended. Sorry. I have seen enough of Austin's
work and opinions to develop respect for his technical capabilities. I
also have the utmost respect for Xilinx as a company, and have chosen
Xilinx FPGAs whenever I was faced with a choice. I will probably do it
again :)

Now to the subject matter. I do not dispute the facts that you listed
below; however, I must point out that they are not the same 'facts'
that Austin listed yesterday.

Specifically (quotes from Austin's postings are set off in quotation
marks; my comments follow each):

1. "ASICs are all but dead except for those really big jobs that can
afford the $80M++ price tag to develop them.  Or those jobs where low
current is required (ie cell-phones)."  I have yet to hear of an ASIC
that cost $80M to develop; maybe high-end CPUs and other large
full-custom jobs approach this price tag, but definitely not any ASIC
I can think of. Do you disagree with this point?

2. "Even televisions don't sell enough to afford some of the new ASIC
pricetags.  Think about it.  An 'appliance' doesn't sell in large
enough volume to have its own ASIC."  Having just taped out a large
DTV chip in 0.13um, I can assure you that almost all of the 'core'
digital functionality of modern television sets is implemented using
standard cell ASICs. You will see some CPLDs used for 'glue', but
FPGAs are relatively rare and almost exclusively used to patch up
'bugs' in the ASICs.

3. "So 'cheap' ASICs are stuck at 180nm (and above)."  How do you
gauge the cheapness of an ASIC? The FPGA vs. ASIC economic argument is
almost the same today as it was 10 years ago. It all comes down to
volume. I am aware of a large number of applications where volume is
large enough for cutting-edge feature size ASICs to pay off. Austin's
statement above is, in my view, at best incomplete, at worst plain
wrong.

And finally, from another posting in the same thread:

4. "Still have a job, too.  How many positions are open for ASIC
designers?  Are you one of the very lucky, very few, still employed?"
Talk about poison. It sounds almost as if Austin derives joy from
pointing out that many of our colleagues out there are jobless. I
think that is just not nice, and it is mainly seeing this posting that
motivated me to write my 'poisonous' posting. Just for the record, I
know many people (ex-coworkers and friends) in all facets of
electronics who have lost their jobs recently. I do not think ASIC
designers are any more affected than anyone else (except, perhaps,
experienced analog IC designers). In either case, as you said, there
is no reason to insult each other.
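The volume argument in point 3 can be made concrete with a quick break-even calculation. A minimal sketch (the NRE and unit prices below are illustrative assumptions, not figures from this thread):

```python
def break_even_volume(asic_nre, asic_unit_cost, fpga_unit_cost):
    """Unit volume above which paying the ASIC's NRE up front is
    cheaper overall than buying FPGAs at a higher per-unit price."""
    if fpga_unit_cost <= asic_unit_cost:
        return float("inf")  # on these numbers the FPGA never loses
    return asic_nre / (fpga_unit_cost - asic_unit_cost)

# Hypothetical numbers: $1M NRE, $8/unit ASIC, $40/unit FPGA
print(round(break_even_volume(1_000_000, 8.0, 40.0)))  # 31250
```

Below that volume the FPGA wins on total cost; above it the ASIC's lower unit price amortizes the NRE, which is exactly the "it all comes down to volume" point.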

As far as your observation about the reduced number of ASIC starts,
wouldn't you say it is plausible that many companies are implementing
anything they can in FPGAs due to the economic uncertainty and unclear
market outlook? I would expect a large number of these to be converted
to ASICs when the outlook becomes clearer, and at that point Xilinx
and Altera will lose some sales. Of course, I could be wrong.

Anyhow, it is clear to me that both FPGAs and standard cell ASICs have
well established niches and applications. I think both are necessary
and neither will be going away any time soon.

Anyhow, I hope this clears up my point of view and, once again, I
apologize for any perceived insult.

Best Regards,

Ljubisa Bajic


Peter Alfke <peter@xilinx.com> wrote in message news:<3E8E12A3.7CA77738@xilinx.com>...
> Wow, why all this poison ?
> There can be no debate about the well-known fact that the number of new
> ASIC designs is decreasing, while FPGA starts are increasing. And this
> has to do with the very high NRE cost for state-of-the-art ASICs.
> $30,000 per mask, and 30 masks for a state-of-the-art process. Including
> extreme levels of verification, this ends up as several million dollars.
> Plus risk, plus inflexibility...
> So if you want the highest ASIC performance and the lowest chip price,
> you have to pay a lot of money up-front, even after you have completed
> your design and done all the verification. We at Xilinx know that,
> because that's what we have to do and pay to get our FPGAs into
> production. Remember, from our point of view, we make custom chips !
> Our advantage is that we can amortize the NRE over millions of devices.
> 
> The trend is clear, and the wind is in favor of the FPGA.
> That does not mean that ASICs will die immediately. But the argument in
> favor of ASICs gets more and more difficult to make.
> ASICs are for extremely high volume, or extreme speed, or extremely low power.
> More and more of the other designs will be implemented in FPGAs, which
> are getting bigger, faster, and cheaper every day. 
> We obviously like this trend, others may dislike it, but let's not
> insult each other.
> Facts are facts.
> 
> Peter Alfke, Xilinx Applications
> ===================
> Ljubisa Bajic wrote:
> > 
> > I absolutely agree with Rudi.
> > As someone who has done high-speed board design, fpga based logic design
> > and full custom ic design, I must say that most of what you wrote in the above
> > article strikes me as bordering on nonsense.
> > I have enjoyed your tutorials on signal integrity and find most
> > of your postings in this group very useful, so I am amazed at how
> > willing you are to abandon reason and truth in favour of Xilinx marketing.
> > I hope they pay you REALLY well ...
> > 
> > Ljubisa Bajic,
> > VLSI Design Engineer,
> > Oak Technology, Teralogic Group
> > 
> > --------------My opinions do not reflect those of my employer.-------------
> > 
> > Austin Lesea <Austin.Lesea@xilinx.com> wrote in message news:<3E887532.31FE90B4@xilinx.com>...
> > > Nicholas,
> > >
> > > The original question was "why would anyone spend $4,000."
> > >
> > > Good question.  No one does.  Well almost no one.  I suppose the 'monster'
> > > FPGAs (like the 2V8000, or the 2VP100) will always command a premium until
> > > they, too, are mainstream - just a question of demand).
> > >
> > > 1M+ gates up until now has certainly been much less than $4,000 (even in small
> > > quantities).
> > >
> > > Now we are talking about even less money for 1M+ gates in 90 nm.
> > >
> > > ASICs are all but dead except for those really big jobs that can afford the
> > > $80M++ price tag to develop them.  Or those jobs where low current is required
> > > (ie cell-phones).
> > >
> > > Even televisions don't sell enough to afford some of the new ASIC pricetags.
> > > Think about it.  An "appliance" doesn't sell in large enough volume to have
> > > its own ASIC.
> > >
> > > The recent EETimes article on IP at these geometries was especially telling.
> > > Integration of IP at 130 nm and 90 nm is a nightmare......etc. etc. etc.  The
> > > 80M$ figure above was from that article.
> > >
> > > So 'cheap' ASICs are stuck at 180nm (and above).  But with 90nm FPGAs we are
> > > three or more technology steps ahead (.15, .13, .09), and that makes us a
> > > better deal.
> > >
> > > Austin
> > >
> > > "Nicholas C. Weaver" wrote:
> > >
> > > > In article <3E886139.96955371@xilinx.com>,
> > > > Austin Lesea  <Austin.Lesea@xilinx.com> wrote:
> > > > >Really?
> > > > >
> > > > >Have just announced 90nm shipped samples.
> > > > >
> > > > >http://biz.yahoo.com/prnews/030331/sfm087_1.html
> > > > >
> > > > >so I would suspect that you might want to get in touch with another
> > > > >distributor....
> > > > >
> > > > >Might find 1+ million gates for a whole lot less....
> > > >
> > > > That's "250,000 quantities at (the end of?) 2004".  :)
> > > >
> > > > --
> > > > Nicholas C. Weaver                                 nweaver@cs.berkeley.edu

Article: 54226
Subject: Re: Xilinx announces 90nm sampling today!
From: ldoolitt@recycle.lbl.gov (Larry Doolittle)
Date: Sat, 5 Apr 2003 03:55:05 +0000 (UTC)
On Sat, 5 Apr 2003 14:33:43 +1200, Simon <mischevious1_nz@yahoo.co.nz> wrote:
>"Larry Doolittle" <ldoolitt@recycle.lbl.gov> wrote in message
>> Europeans,
>> Japanese, and Americans probably are close to their limit for how
>> many dollars per year they spend on gadgets.  After India and the
>> Pacific Rim get there too (around 2005), our R&D budgets will
>> necessarily flatten out, and we won't be able to afford the next
>> big push to smaller geometries.
>
>Isn't that what China's for?

If China gets its economic and social act together enough to
add substantially to the world market for semiconductors by 2005,
I will postpone my prediction for the "end of Moore's law" to 2006.
My current guess is that China won't, so my 2005 predicted end holds.

     - Larry

Article: 54227
Subject: Re: Matrix multiply in FPGA
From: "Roger Green" <rgreen@bfsystems.com>
Date: Fri, 4 Apr 2003 21:10:31 -0700
I'm currently finishing up a "matrix multiply" co-processor design that
does 16-bit fixed-point arrays programmable up to 256 x 256 in size.

Sounds like you're on the right track. The use of fully pipelined MACs
gives you vector dot products in 216 cycles, plus the latency of the
mults, adds, and data selects between array values. So it sounds to me
like you could do a 6x6 op in around 250 cycles with a single MAC
without too much sweat.  If you take the additional parallel approach
and put six MAC units to work on the problem, divide that time by six.

I agree that the data "store and retrieve" functions required for
generic matrix multiply operations (with no constant DA advantages)
tend to be a bigger problem than the actual math execution.  In
Virtex, with ample block RAMs available for row and column data
buffers, this isn't too bad.  Also, the transpose arrays can be
handled as the array data is loaded into memory, prior to the actual
math operations, so that piece is essentially free in terms of
computational time.

Roger Green
www.bfsystems.com
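Roger's cycle estimate can be sanity-checked with a small counting model (a sketch only; the pipeline latencies are assumptions, not taken from his design):

```python
def matmul_cycles(n, mac_units=1, mult_latency=4, add_latency=4):
    """Rough cycle count for an n x n fixed-point matrix multiply built
    from fully pipelined MACs: n*n dot products of length n, with one
    MAC operation issued per unit per cycle, plus the pipeline drain."""
    mac_ops = n * n * n
    return mac_ops // mac_units + mult_latency + add_latency

print(matmul_cycles(6))               # 224 cycles with a single MAC
print(matmul_cycles(6, mac_units=6))  # 44 cycles with six MACs
```

This matches the figures in the thread: 216 MAC operations for a 6x6 multiply, roughly 250 cycles with one MAC once data-select overhead is included, divided by six when six MACs run in parallel.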


"jerry1111" <jerry1111@wp.pl> wrote in message
news:b6f2ol$826$1@atlantis.news.tpi.pl...
> > You didn't say how fast it has to run.
>
> Sorry - I was so absorbed with this problem that I forgot to write it:
>
> 10us with 40MHz clock => 400 cycles would be perfect, but it's almost
> impossible - at least from my current point of view.
> I'll be happy getting any reasonable speed of course, but this timing
> gives an idea of how it should be....
>
> I have to do 6 muls and accumulate them (6 adds) for each element of the
> result matrix. The matrix is 6x6, so 36 elements gives 216 muls and 216 adds....
>
> Now I'm thinking about some sort of parallel operations, but it's not so
> simple because of storing data in RAM. The best would be to store each row
> from A in a separate block, the columns from B in another 6 blocks,
> multiply with 6 parallel logic pieces, and feed the results to a FIFO.
> Each row/column is 6x36 bits - maybe it would be better to do some
> pipelining...
>
> Now I have 10 sheets of paper with various solutions, but I'd like to
> hear opinions from 'another point of view'....
>
> Selected device is EP1C6 from Altera.
>
> PS: Sorry for my bad English, but I read more than I write.
>
> --
> jerry
>
> "The day Microsoft makes something that doesn't suck is probably
> the day they start making vacuum cleaners." - Ernst Jan Plugge
>
>



Article: 54228
Subject: Re: Xilinx announces 90nm sampling today!
From: hmurray@suespammers.org (Hal Murray)
Date: Sat, 05 Apr 2003 04:20:15 -0000
>This works as long as funding continues to increase.  My point
>(perhaps poorly stated) is that funding can not continue to increase,
>as the world market for electronics becomes saturated.  ...

Is the cost of fab lines following Moore's law?  I suspect so, but
I haven't seen any good data.  If so, then we can predict serious
troubles when the cost of a fab line crosses the GDP.
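That question can be turned into a back-of-the-envelope model. Assuming fab-line cost doubles every three years (a Rock's-law-style assumption, not measured data) while GDP grows a few percent a year:

```python
import math

def years_until_fab_cost_crosses_gdp(fab_cost, gdp,
                                     fab_doubling_years=3.0,
                                     gdp_growth=0.03):
    """Years until an exponentially growing fab-line cost overtakes
    an exponentially growing GDP (both growth rates are assumptions)."""
    fab_rate = math.log(2) / fab_doubling_years
    gdp_rate = math.log(1 + gdp_growth)
    return math.log(gdp / fab_cost) / (fab_rate - gdp_rate)

# Hypothetical 2003-era numbers: a $3B fab line, a $10T economy
print(round(years_until_fab_cost_crosses_gdp(3e9, 1e13)))  # about 40
```

On those assumptions the literal crossing point is about four decades out.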

-- 
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's.  I hate spam.


Article: 54229
Subject: Re: Cyclone power up problem - 'Engineerus Emptor'
From: Eric Smith <eric-no-spam-for-me@brouhaha.com>
Date: 04 Apr 2003 20:38:29 -0800
Austin Lesea <Austin.Lesea@xilinx.com> writes:
> Like a bad cold?  As you know, we designed out the power up issues in
> VII and subsequent products, and improved it substantially in 300mm
> SIIE.
> 
> One caution, and I will raise it again: we have built 'zillions of
> Spartan II, IIE now (and IIE is all on 300 mm fab now).
> 
> What that means is that we KNOW what it does, and we KNOW the abs max,
> and abs min numbers over PVT.

Why, then, is there no maximum specification for Iccpo in the data sheet?
This has been a major issue for me in trying to use a switching regulator
for Vccint, because switching regulators basically shut down when their
current limit is exceeded, rather than going into a current-limiting mode.


Article: 54230
Subject: Re: Altera not supplying Leonardo any more
From: Kevin Brace <kev0inbrac1eusen2et@ho3tmail.c4om>
Date: Fri, 04 Apr 2003 23:10:17 -0600
Paul,

When I saw Altera start using a new synthesis tool, bought from Verific
Design Automation (http://www.verific.com), in Quartus II 2.1, I
wondered when Altera would drop the free license for LeonardoSpectrum.
I don't mean to start an A vs. X argument here, but if you don't like
what Altera did to you, why not switch to Xilinx?
Personally, I am fed up with Altera's backend solution because it has a
fatal flaw of renaming LUTs, which makes the floorplanner useless (for
example, a LUT called ix_8160 gets renamed by the fitter to ix_8160~0,
and a floorplan location assigned to ix_8160 will not apply to the
renamed LUT ix_8160~0).
If someone knows how I can prevent the fitter from renaming the LUT, I
would like to hear it, but so far I have tried a lot of things, like
turning off various fitter options, and nothing has worked.
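One possible workaround (a hypothetical post-processing sketch, not an Altera/Quartus feature) is to re-apply each location assignment to whatever renamed LUT shares its base name:

```python
import re

def remap_locations(assignments, fitter_names):
    """Re-attach floorplan locations to LUTs the fitter renamed by
    appending a '~N' suffix (e.g. ix_8160 -> ix_8160~0)."""
    remapped = {}
    for name in fitter_names:
        base = re.sub(r"~\d+$", "", name)  # strip the fitter's suffix
        if base in assignments:
            remapped[name] = assignments[base]
    return remapped

print(remap_locations({"ix_8160": "LC_X10_Y20"}, ["ix_8160~0"]))
# {'ix_8160~0': 'LC_X10_Y20'}
```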
Anyhow, going back to the LeonardoSpectrum license issue, I personally
never liked that tool because of its buggy and hard-to-use GUI, but at
least it was able to generate an EDIF netlist.


Kevin Brace (If someone wants to respond to what I wrote, I prefer if
you will do so within the newsgroup.)

Article: 54231
Subject: Re: Xilinx V2.1i Licensing
From: Kevin Brace <kev0inbrac1eusen2et@ho3tmail.c4om>
Date: Fri, 04 Apr 2003 23:16:33 -0600
Scott,

If you already have a separate synthesis tool or a schematic tool, you
can use ISE Classic, which is a backend-only tool (it doesn't come with
a synthesis tool, which might be a problem) that supports most XC4000
series FPGAs.


Kevin Brace (If someone wants to respond to what I wrote, I prefer if
you will do so within the newsgroup.)


Scott wrote:
> 
> Well, I'm designing a 4-bit ALU. I'm using the XC4010XL FPGA, which is fairly
> outdated and is why I needed to use 2.1, because the WebPACK doesn't support
> that chip.
>

Article: 54232
Subject: Re: Xilinx announces 90nm sampling today!
From: hmurray@suespammers.org (Hal Murray)
Date: Sat, 05 Apr 2003 06:33:04 -0000
>  ...  $30,000 per mask, and 30 masks ...

>The trend is clear, and the wind is in favor of the FPGA.
>That does not mean that ASICs will die immediately. But the argument in
>favor of ASICs gets more and more difficult to make.
>ASICs are for extremely high volume, or extreme speed, or extremely low power.
>More and more of the other designs will be implemented in FPGAs, which
>are getting bigger, faster, and cheaper every day. 
>We obviously like this trend, others may dislike it, but let's not
>insult each other.

Are some fab houses keeping their old fab lines so that they can
keep building ASICs without the NRE being such a killer?  Seems
like it might be an interesting niche market.

-- 
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's.  I hate spam.


Article: 54233
Subject: Re: Xilinx announces 90nm sampling today!
From: russelmann@hotmail.com (Rudolf Usselmann)
Date: 5 Apr 2003 00:58:03 -0800
Peter Alfke <peter@xilinx.com> wrote in message news:<3E8E12A3.7CA77738@xilinx.com>...
> Wow, why all this poison ?
> There can be no debate about the well-known fact that the number of new
> ASIC designs is decreasing, while FPGA starts are increasing. And this
> has to do with the very high NRE cost for state-of-the-art ASICs.
> $30,000 per mask, and 30 masks for a state-of-the-art process. Including
> extreme levels of verification, this ends up as several million dollars.
> Plus risk, plus inflexibility...

Peter,

no doubt that FPGAs are great and there are many reasons
to use them. However, let's get our facts straight and compare
apples to apples.

I'm not sure what, in your/Xilinx's opinion, a "state-of-the-art
process" is. I for one still consider 0.18 and 0.13u to be
state of the art. Mask costs for them are about $200K and $500K
respectively. I seriously doubt you can compare the latest
and greatest FPGA from Xilinx to an ASIC in a 0.18u process.

I am puzzled why you would not encourage your customers to
invest the same amount of verification effort in FPGA designs
as in ASIC designs. I for one cannot afford for my customers
to ship my products back because they don't work - regardless
of whether I use FPGAs or ASICs.

> So if you want the highest ASIC performance and the lowest chip price,
> you have to pay a lot of money up-front, even after you have completed
> your design and done all the verification. We at Xilinx know that,
> because that's what we have to do and pay to get our FPGAs into
> production. Remember, from our point of view, we make custom chips !

From my point of view you make an excellent prototyping vehicle
for custom ASICs and a great solution for designs with very low
quantities.

> Our advantage is that we can amortize the NRE over millions of devices.
> 
> The trend is clear, and the wind is in favor of the FPGA.

Not sure where your wind comes from, looks like there is no
wind over here at all. ;*)

> That does not mean that ASICs will die immediately. But the argument in
> favor of ASICs gets more and more difficult to make.
> ASICs are for extremely high volume, or extreme speed, or extremely low power.
> More and more of the other designs will be implemented in FPGAs, which
> are getting bigger, faster, and cheaper every day. 
> We obviously like this trend, others may dislike it, but let's not
> insult each other.
> Facts are facts.

Well, no insults intended, but it looks to me as if some of
us get their facts from marketing brochures ! ;*)

> 
> Peter Alfke, Xilinx Applications
> ===================


Best Regards,
rudi
------------------------------------------------
www.asics.ws   - Solutions for your ASIC needs -
FREE IP Cores  -->   http://www.asics.ws/  <---
-----  ALL SPAM forwarded to: UCE@FTC.GOV  -----

Article: 54234
Subject: Re: Cyclone power up problem - 'Engineerus Emptor'
From: "Martin Schoeberl" <martin.schoeberl@chello.at>
Date: Sat, 05 Apr 2003 09:03:30 GMT

"Eric Smith" <eric-no-spam-for-me@brouhaha.com> wrote in message
news:qhznn5tv7e.fsf@ruckus.brouhaha.com...
> Austin Lesea <Austin.Lesea@xilinx.com> writes:
> > Like a bad cold?  As you know, we designed out the power up issues in
> > VII and subsequent products, and improved it substantially in 300mm
> > SIIE.
> >
> > One caution, and I will raise it again: we have built 'zillions of
> > Spartan II, IIE now (and IIE is all on 300 mm fab now).
> >
> > What that means is that we KNOW what it does, and we KNOW the abs max,
> > and abs min numbers over PVT.
>
> Why, then, is there no maximum specification for Iccpo in the data sheet?
> This has been a major issue for me in trying to use a switching regulator
> for Vccint, because switching regulators basically shut down when their
> current limit is exceeded, rather than going into a current-limiting mode.

That's EXACTLY the problem I had with the Cyclone.

>

As you can read in XAPP450:

<quote>
POS Current
A maximum limit for ICCPO is not specified in the Spartan-II and Spartan-IIE
data sheets. The upper bound on the size of the surge is determined by the
amount of supply current available to the FPGA (i.e., the effective current
limit). This is true because the FPGA takes on a very low power-to-ground
impedance during the power-on period.

If more supply current is available than what is necessary to satisfy ICCPO
min, the FPGA is likely to consume the excess. In practice, the POS current
at room temperature can be on the order of an Ampere or more. For a
description of POS behavior as related to the amount of supply current
available, see "Effects of Current Limit", page 5.

Beware of over-current protection circuits (e.g., trip/crowbar, foldback and
fuse), since it is possible for these to inadvertently shut down power to
the FPGA in the presence of a large POS current. (For more information, see
""Regulator Selection", page 6.)
</quote>

I read it simply as: 'The FPGA takes what it gets'.

It would be nice if Altera could specify the start-up current of their
devices. There is an update of the Cyclone data sheet on Altera's web
site, but no information on this issue. And AN257 is still missing.

Martin Schoeberl





Article: 54235
Subject: gated clock
From: rathanon99@yahoo.com (ron)
Date: 5 Apr 2003 01:34:10 -0800
I'm using a Virtex FPGA and I want to implement a gated clock. How do I go about this?

Article: 54236
Subject: Question about Xilinx Classes
From: "Kyle Davis" <kyledavis@nowhere.com>
Date: Sat, 05 Apr 2003 10:14:59 GMT
I am thinking of enrolling in Introduction to Verilog at Xilinx. I
really need to know Verilog in order to have a better chance of getting
a job and a better grade in my Advanced Digital Design class. Has
anyone ever attended their classes? Are the Xilinx instructors good? I
have to pay all the tuition from my own pocket, so I would like to know
whether their teaching quality is good or not.
Thanks!



Article: 54237
Subject: Re: Matrix multiply in FPGA
From: "jerry1111" <jerry1111@wp.pl>
Date: Sat, 5 Apr 2003 12:33:45 +0200
> You don't need to give yourself that much performance margin.  While the
> Cyclone timing models are still preliminary, core timing tends to be pretty

Right now it's still in the software stage, so I'm taking these margins
just for safety. If they are not needed - OK.

> BTW, which version of Quartus did you use, and which speed grade, for that
> 63 Mhz result?


Q2.2SP1 - I made some compilations of standard_32 example. AFAIR it was about 60MHz
(62 or 63) for speed grade -8, about 70MHz for -7, and don't remember what was for -6.
Is it wrong? Should it be slower? 


-- 
jerry

"The day Microsoft makes something that doesn't suck is probably
the day they start making vacuum cleaners." - Ernst Jan Plugge 



Article: 54238
Subject: Re: anyone has doc on Viewsim commands? Thanks!
From: "Egbert Molenkamp" <molenkam_no_spam@cs.utwente.nl>
Date: Sat, 5 Apr 2003 16:27:55 +0200

"Kang Liat Chuan" <kanglc@starhub.net.sg> wrote in message
news:3e8e3d27$1@news.starhub.net.sg...
> Hi group,
>
> I inherited a design which was simulated using Viewsim commands (I believe
> it is Foundation 1.5/3.1?). I am converting the "testbench" into full VHDL.
> At first, I tried to convert it to a Modelsim do file, but the force
> statements are just too much. So I am guessing at what those Viewsim
> commands like wfm, smode, bc etc. mean!

Viewsim commands are used in the Viewlogic tooling (Powerview/Workview).
Maybe the following link helps:
http://www.informatik.tu-cottbus.de/~mwaldman/info/viewsim.html

Egbert Molenkamp



Article: 54239
Subject: Re: Matrix multiply in FPGA
From: "Paul Leventis \(at home\)" <paul.leventis@utoronto.ca>
Date: Sat, 05 Apr 2003 14:56:00 GMT
> > BTW, which version of Quartus did you use, and which speed grade, for
> > that 63 MHz result?
>
> Q2.2SP1 - I made some compilations of the standard_32 example. AFAIR it
> was about 60MHz (62 or 63) for speed grade -8, about 70MHz for -7, and I
> don't remember what it was for -6.
> Is it wrong? Should it be slower?

I was curious about the speed grade, as you always have the fall-back plan
of jumping to a higher speed-grade if you end up missing performance by a
tad... but that would only be the case if you were in a -7 or -8, of
course.

Regards,

Paul



Article: 54240
Subject: Re: Xilinx announces 90nm sampling today!
From: russelmann@hotmail.com (Rudolf Usselmann)
Date: 5 Apr 2003 07:45:02 -0800
hmurray@suespammers.org (Hal Murray) wrote in message news:<v8su50708gnba9@corp.supernews.com>...
> >  ...  $30,000 per mask, and 30 masks ...
>  
> >The trend is clear, and the wind is in favor of the FPGA.
> >That does not mean that ASICs will die immediately. But the argument in
> >favor of ASICs gets more and more difficult to make.
> >ASICs are for extremely high volume, or extreme speed, or extremely low power.
> >More and more of the other designs will be implemented in FPGAs, which
> >are getting bigger, faster, and cheaper every day. 
> >We obviously like this trend, others may dislike it, but let's not
> >insult each other.
> 
> Are some fab houses keeping their old fab lines so that they can
> keep building ASICs without the NRE being such a killer?  Seems
> like it might be an interesting nitch market.


From what I see, yes, definitely. I have even heard of
new fabs coming on line in China and Korea that do 0.5
and 0.35u only. Pricing is supposedly dirt cheap; it's
the reliability they have problems with - so far.

There are a lot of "gizmos" being produced in China
these days with die-on-PCB technology. These are very
simple circuits, like blinking lights for example. My
guess is they use very old technology for that but can
do it really, really cheap in very high quantities (check
out your Christmas light control box ;*)

Cheers,
rudi
------------------------------------------------
www.asics.ws   - Solutions for your ASIC needs -
FREE IP Cores  -->   http://www.asics.ws/  <---
-----  ALL SPAM forwarded to: UCE@FTC.GOV  -----

Article: 54241
Subject: Re: Xilinx announces 90nm sampling today!
From: nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver)
Date: Sat, 5 Apr 2003 16:24:34 +0000 (UTC)
In article <d44097f5.0304050745.23e9c0c6@posting.google.com>,
Rudolf Usselmann <russelmann@hotmail.com> wrote:
>There are a lot of "gizmos" being produced in China
>these days with die-on-PCB technology. These are very
>simple circuits, like blinking lights for example. My
>guess is they use very old technology for that but can
>do it really, really cheap in very high quantities (check
>out your Christmas light control box ;*)

Chip-on-board has been quite common for years in some high-volume
apps.  When those virtual pets/Tamagotchi thingies first came out, I
picked one up to "dissect" [1].  The chip was about 3mm on a side or
so, chip on board under a blob of epoxy.  The coolest part was the
LCD connection, which was made with two zebra connectors.


[1] well, actually, it was vivisection, we kept it on as long as
possible.  :)
-- 
Nicholas C. Weaver                                 nweaver@cs.berkeley.edu

Article: 54242
Subject: Re: Xilinx Divider Core
From: Duane Clark <junkmail@junkmail.com>
Date: Sat, 05 Apr 2003 09:58:59 -0800
Rajeev wrote:
> I looked at this last year.  I was more or less satisfied with the core,
> although I had fewer bits.  One problem I had at 4 clocks per bit was not
> having access to the internal state of the 4-clock counter -- the
> delay depends on which tick the operands are presented.  Perhaps this
> is a problem in simulation only (?)  You may wish to see the posts
> dated 2002-09-12 at:
> 
> http://groups.google.com/groups?hl=en&lr=&ie=ISO-8859-1&safe=off&q=Xilinx+LogicCore+Pipelined+Divider&btnG=Google+Search
> 
> Also, as I recall, the Xilinx Simulink blockset doesn't include a block
> for their divider core, while the Altera DSP Builder does.  (If this has
> changed, do let me know.)
> 

An integer divider is not all that difficult. Here is a not particularly
optimized one I did a while back. It is not pipelined (a division has to
complete before starting another one).
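The binary long division the hardware performs can also be expressed as a behavioral reference model (an added sketch for cross-checking; this is the textbook restoring algorithm, which may differ from the RTL in corner cases):

```python
def long_divide(numerator, denominator, bits=16):
    """Bit-serial restoring division: shift in one numerator bit per
    step, subtract the denominator whenever the partial remainder is
    large enough, and shift the resulting bit into the quotient."""
    quotient = 0
    remainder = 0
    for i in reversed(range(bits)):
        remainder = (remainder << 1) | ((numerator >> i) & 1)
        quotient <<= 1
        if remainder >= denominator:
            remainder -= denominator
            quotient |= 1
    return quotient

print(long_divide(50000, 7))  # 7142, i.e. 50000 // 7
```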

-- Perform integer division. Divides the constant PRF_NUMERATOR by
-- the input PRFLD value, to get a PRF in pulses per half second. The
-- code performs binary long division, started by the START signal
-- with the result appearing on the output a variable number of
-- clocks later; always within 16 clocks. It gives half the PRF
-- because this result will be effectively multiplied by two when
-- it is passed through the ROMs in UART which are used to perform
-- BCD conversion.
--
library IEEE;
use IEEE.std_logic_1164.all;
use IEEE.std_logic_unsigned.all;
use work.prf_Pkg.all ;

entity DIVIDE is
    port (
       RESET       : in std_logic;
       CLK         : in std_logic;
       START       : in std_logic;
       DIN         : in std_logic_vector(13 downto 0);
       DOUT        : out std_logic_vector(15 downto 0);
       LSB         : out std_logic
    );
end DIVIDE;

architecture synth of DIVIDE is
    signal PART_DIFF     : std_logic_vector(14 downto 0);
    signal DIFFER        : std_logic_vector(14 downto 0);
    signal DENOM         : std_logic_vector(13 downto 0);
    signal NUMER         : std_logic_vector(6 downto 0);
    signal SCALE         : std_logic_vector(3 downto 0);
    signal DIV_CNT       : std_logic_vector(3 downto 0);
    signal RESULT        : std_logic_vector(15 downto 0);
begin

    -- we need to find the most significant '1' in the divisor
    din_shift_p: process (CLK, RESET)
    begin
       if RESET = '1' then
          DENOM <= (others => '1');
          SCALE <= (others => '0');
       elsif rising_edge(CLK) then
          if DIN(13) = '1' then
             DENOM <= DIN;
             SCALE <= X"0";
          elsif DIN(12) = '1' then
             DENOM <= DIN(12 downto 0) & "0";
             SCALE <= X"1";
          elsif DIN(11) = '1' then
             DENOM <= DIN(11 downto 0) & "00";
             SCALE <= X"2";
          elsif DIN(10) = '1' then
             DENOM <= DIN(10 downto 0) & "000";
             SCALE <= X"3";
          else
             DENOM <= DIN(9 downto 0) & "0000";
             SCALE <= X"4";
          end if;
       end if;
    end process din_shift_p;

    div_cnt_p: process (CLK, RESET)
    begin
       if RESET = '1' then
          DIV_CNT <= (others => '0');
          DOUT <= (others => '0');
       elsif rising_edge(CLK) then
          if START = '1' then
             DIV_CNT <= SCALE + 9;
          elsif DIV_CNT /= 0 then
             DIV_CNT <= DIV_CNT - 1;
          end if;
          if DIV_CNT = 1 then
             DOUT <= RESULT;
          end if;
       end if;
    end process div_cnt_p;

    DIFFER <= PART_DIFF - DENOM;

    divide_p: process(CLK, RESET)
    begin
       if RESET = '1' then
          PART_DIFF <= (others => '0');
          NUMER <= (others => '0');
          RESULT <= (others => '0');
          LSB <= '0';
       elsif rising_edge(CLK) then
          if START = '1' then
             PART_DIFF <= '0' & PRF_NUMERATOR(20 downto 7);
             NUMER <= PRF_NUMERATOR(6 downto 0);
             RESULT <= (others => '0');
          elsif DIV_CNT /= 0 then
             if PART_DIFF > DENOM then
                PART_DIFF <= DIFFER(13 downto 0) & NUMER(6);
                RESULT <= RESULT(14 downto 0) & '1';
                LSB <= '1';
             else
                -- this branch means PART_DIFF <= DENOM, so DIFFER would be
                -- zero or negative; just shift without subtracting
                PART_DIFF <= PART_DIFF(13 downto 0) & NUMER(6);
                RESULT <= RESULT(14 downto 0) & '0';
                LSB <= '0';
             end if;
             NUMER <= NUMER(5 downto 0) & '0';
          end if;
       end if;
    end process divide_p;

end architecture synth;


-- 
My real email is akamail.com@dclark (or something like that).
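[Editorial aside, not part of the original post: the restoring binary long division that the VHDL above implements can be sanity-checked with a few lines of software first. The sketch below is the textbook algorithm, not a line-by-line translation of the VHDL; the function name and widths are illustrative.]

```python
def long_divide(numerator, denominator, width=16):
    """Model of restoring binary long division: shift numerator bits in
    MSB-first, subtract the denominator whenever the partial remainder
    is large enough, and shift a quotient bit in at each step."""
    assert denominator > 0
    remainder = 0
    quotient = 0
    for i in reversed(range(width)):
        # Shift in the next numerator bit (MSB first)
        remainder = (remainder << 1) | ((numerator >> i) & 1)
        if remainder >= denominator:
            remainder -= denominator
            quotient = (quotient << 1) | 1
        else:
            quotient = quotient << 1
    return quotient, remainder
```

For example, `long_divide(100, 7)` returns `(14, 2)`. Like the VHDL, it produces one quotient bit per iteration, so a full division takes `width` steps.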


Article: 54243
Subject: Re: 2.5V switching regulator for Spartan 2
From: John Larkin <John.Larkin>
Date: Sat, 05 Apr 2003 10:08:36 -0800
Links: << >>  << T >>  << A >>
On 02 Apr 2003 18:12:04 -0800, Eric Smith
<eric-no-spam-for-me@brouhaha.com> wrote:

>I'm thinking about using a Linear Technology LTC3406B synchronous buck
>regulator for the 2.5V core Vdd for an XC2S150.  Has anyone else used
>this?  It's rated for 600 mA, so it should be able to handle the 500 mA
>required current at power-on (and my application will need less than
>that when operating), but I'm concerned about whether the ramp will be
>too fast, too slow, non-monotonic, or otherwise make the FPGA unhappy.
>
>It's not too expensive, and requires few external components.  Since it
>operates at 1.6 MHz, it can use a very small inductor.
>
>Thanks,
>Eric


Eric,

why not just use a linear LDO (LM1117 or whatever) from +5 or +3.3?
That's a lot cheaper and simpler. 

John




Article: 54244
Subject: Re: Help implementing BlockRAM on Spartan-II
From: Duane Clark <junkmail@junkmail.com>
Date: Sat, 05 Apr 2003 10:12:59 -0800
Links: << >>  << T >>  << A >>
Stephen du Toit wrote:
> Duane Clark <junkmail@junkmail.com> wrote in message news:<b6fohe027ie@enews4.newsguy.com>...
> 
>>
>>These strings are used for simulation only. You also need these 
>>attributes in the declarations area of the architecture:
>>    -- These attributes are used for synthesis
>>    attribute INIT_00 : string;
>>    attribute INIT_01 : string;
>>
>>    attribute INIT_00 of rom_d: label is 
>>"000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F";
>>    attribute INIT_01 of rom_d: label is 
>>"000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F";
>>
>>Yes, it is annoying that you need the string twice in slightly different 
>>forms.
> 
> 
> Thanks very much for that. I am reading up about attribute
> declarations and specifications

Take a look at:
http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=10695


-- 
My real email is akamail.com@dclark (or something like that).


Article: 54245
Subject: Re: FFT 256pt on Spartan
From: russelmann@hotmail.com (Rudolf Usselmann)
Date: 5 Apr 2003 10:17:56 -0800
Links: << >>  << T >>  << A >>
furia1024@wp.pl (Jerzy) wrote in message news:<dc3feced.0304020503.74b33828@posting.google.com>...
> Hi
> I'd like to know if is there any possibility to make usable IPCore
> from Xilinx FFT 256pt, for Spartan?
> Spartan IIe has similar resources to Virtex.
> Or could you give me advice on where I can find a core of this class?
> 
> Greetings
> 
> Jerzy Gbur

Check out the FFT IP core available at OpenCores
(www.opencores.org). It's free and worth a try !

Cheers !
rudi
------------------------------------------------
www.asics.ws   - Solutions for your ASIC needs -
FREE IP Cores  -->   http://www.asics.ws/  <---
-----  ALL SPAM forwarded to: UCE@FTC.GOV  -----

Article: 54246
Subject: Re: Question about Xilinx Classes
From: "Bill Turnip" <BTurnip@acm.org>
Date: Sat, 05 Apr 2003 19:42:49 GMT
Links: << >>  << T >>  << A >>
Kyle -

    The instructors for the Xilinx FPGA classes are excellent.  In addition,
there are usually other Xilinx experts sitting in the back of the class to
answer detailed questions the instructor might not be able to answer.  But
to plunk down the cash to learn Verilog at Xilinx?  Geez, not the most
cost-effective solution IMHO; I assume that for 8 hours of instruction you pay
3, 4, or 5 hundred dollars - I don't know the price.  If you are at the
introductory level, get James Lees' book and do every problem sitting in
front of a PC with a simulator.  I am sure you will get much more out of this
exercise if you are disciplined, it's much cheaper, and the absorption rate
of the material will be much higher than it would be in an 8 hour cerebral
overload data dump.  Better yet, as it sounds like you may be in the Bay
area, take his class.

    Don't get me wrong, Xilinx instructors are great teachers,
knowledgeable, and approachable.

IMHO,
Bill

"Kyle Davis" <kyledavis@nowhere.com> wrote in message
news:D0yja.1167$EG5.71262260@newssvr21.news.prodigy.com...
> I am thinking of enrolling in the Introduction to Verilog class at Xilinx. I
> really have to know Verilog in order to have a better chance of getting a
> job, and a better grade in my Advanced Digital Design class. Has anyone ever
> attended their class? Are the Xilinx instructors good? I have to pay all the
> tuition from my own pocket, so I would like to know whether their teaching
> quality is good or not.
> Thanks!
>
>



Article: 54247
Subject: help with DLL problem in Spartan2E
From: Theron Hicks <hicksthe@egr.msu.edu>
Date: Sat, 05 Apr 2003 15:28:34 -0500
Links: << >>  << T >>  << A >>
Hi,
    I appear to have a problem with a DLL in a Spartan2E.  I am
attempting to use a 102.4MHz clock to get timing results at the
granularity of a 409.6MHz clock.  To do so I am using a low-speed DLL to
double the clock, and a high-speed DLL to generate a pair of 204.8MHz
clocks with a 180 degree delay between them.  The system uses two
counters, with each counter being clocked 180 degrees out of phase with
the other at 204.8MHz.  The resulting counts are then added to get an
effective 409.6MHz clock resolution.

The code below is a fragment of the full system with only the DLL
related signals being shown.  Does anyone see any problems with the code
implementation, etc?



    u2: clkdllhf PORT MAP (
    clkin=>buf_clk2x,
    clkfb=>clk0_out,
    rst=>d_locked_not,
    clk0=>clk0,
    clk180=>clk180,
    locked=>dll_running
    );

fast_lock<=dll_running;

  d_locked_not<=not(d_locked);

  u1: clkdll PORT MAP (
    clkin=>buf_clk,
    clkfb=>buf_clk2x,
    rst=>zero,
    clk0=>open,
  clk180=>open,
  clk2x=> clk2x,
    locked=>locked
    );

slow_lock<=locked;

    u4: ibufg port map(
  i => clk,
  o => buf_clk
  );

  u3: bufg PORT MAP(
    i =>clk2x,
    o =>buf_clk2x
    );

  u5: bufg PORT MAP(
    i =>clk0,
    o =>clk0_out
    );

  u6: bufg PORT MAP(
    i =>clk180,
    o =>clk180_out
    );

    u7: SRL16 port map(
   D => locked,
        CLK => buf_clk2x,
        A0  => one,
        A1  => one,
        A2  => one,
        A3  => one,
        Q   => d_locked
    );

 u8: SRL16 port map(
   D => reset_counter,
        CLK => buf_clk2x,
        A0  => one,
        A1  => one,
        A2  => zero,
        A3  => zero,
        Q   => reset_counter_delayed
    );
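[Editorial aside, not part of the original post: the doubled resolution from two counters clocked 180 degrees apart can be modeled numerically. This is an idealized sketch with illustrative names; it assumes the first rising edge of the 0-degree clock occurs at t = 0.]

```python
import math

def edges_in_interval(t, period, phase):
    """Number of rising edges in [0, t) for a clock whose edges occur
    at phase, phase + period, phase + 2*period, ..."""
    if t <= phase:
        return 0
    return math.floor((t - phase) / period) + 1

def two_phase_count(t, period):
    """Sum of two counters clocked 180 degrees apart.  The combined
    count steps once every half period, doubling the resolution."""
    return (edges_in_interval(t, period, 0.0)
            + edges_in_interval(t, period, period / 2))
```

With a period of 1.0 (in arbitrary units), the summed count increments every 0.5 time units: `two_phase_count(0.25, 1.0)` is 1, `two_phase_count(0.75, 1.0)` is 2, and so on, which is the effective 2x (409.6MHz-equivalent) resolution described above.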


Article: 54248
Subject: Confused at Xilinx V2P OCM usage
From: clinton__bill@hotmail.com (bill)
Date: 5 Apr 2003 13:22:23 -0800
Links: << >>  << T >>  << A >>
Hi,
   I checked out the Xilinx V2P document and want to understand its memory
architecture. I am new to this, so when I read the OCM chapter I was
totally confused.
   In the PPC405 diagram, I see the OCM cntlr is a sub-module of the cache
unit on the I- and D- sides. Besides the OCM cntlr, there is the PLB interface.
I am confused about how to tell whether an address the processor issues
is an OCM address or a PLB address. I see some description in the OCM
cntlr section saying the OCM cntlr only uses 14 bits of the 32 bit wide
address line, so does that mean there is a special bit to determine
whether an address is OCM or PLB?
   Maybe a silly question; I'm totally lost here.

Article: 54249
Subject: Re: More FFT Questions
From: "Glen Herrmannsfeldt" <gah@ugcs.caltech.edu>
Date: Sat, 05 Apr 2003 21:45:15 GMT
Links: << >>  << T >>  << A >>

"Bob" <stenasc@yahoo.com> wrote in message
news:20540d3a.0304030624.123cca37@posting.google.com...
> Hi all,
>
> I decided to have a go at writing a small fft for learning purposes.
> (32 pts). I seem to be having problems with the fixed lengths for the
> data.
>
> The twiddle factors are 16 bits. Initially, for butterfly stage 1, I
> read in 16 bit input data (it has already been position reversed), do
> the multiplication by the twiddle factor (I know you don't need to
> multiply by the twiddle factor in the first stage as it is 1, but just
> to keep it consistent). This brings the results for the real and
> imaginary outputs to 32 bits. I truncate the 32 bits to the most
> significant 24 bits and feed it to the next butterfly stage, where I
> multiply again by the 16 bit twiddle factor, as well as doing the
> addition. Now I have 40 bit results for the real and imaginary outputs
> from this butterfly stage.
> Again I truncate to the 24 MS bits before the next stage.

You should probably do the whole thing in fixed point in software before
doing it in hardware.

You need to figure out which bits the result is in after each stage.  You
must keep track of the binary point yourself, leaving enough bits to guard
against overflow, yet not losing all the significant bits.  The twiddles
should have 1 significant bit to the left of the binary point, for example.
Also, two's complement multiplication is a little tricky.

-- glen
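[Editorial aside, not part of the original post: the fixed-point bookkeeping described above (Q1.15 twiddles, tracking the binary point, truncating the product) can be worked out in software first, as suggested. A minimal illustrative model, with hypothetical names and widths:]

```python
def to_q15(x):
    """Quantize a float in [-1, 1) to Q1.15 two's-complement (as an int):
    one significant bit to the left of the binary point, 15 to the right."""
    return int(round(x * (1 << 15)))

def butterfly_mult(sample, twiddle_float, keep_bits=24):
    """One butterfly multiply with bit-growth bookkeeping: a 16-bit sample
    times a Q1.15 twiddle gives a product of up to 31 bits; truncate to the
    top keep_bits bits by discarding (31 - keep_bits) LSBs.  The binary
    point moves by 15 (from the twiddle) minus the bits discarded, and the
    caller must track that position."""
    tw = to_q15(twiddle_float)
    product = sample * tw            # binary point is now 15 places up
    discarded = 31 - keep_bits
    result = product >> discarded    # arithmetic shift = truncation
    frac_bits = 15 - discarded       # track the binary point yourself
    return result, frac_bits
```

For example, multiplying the sample 1000 by a twiddle of 0.5 with 24 bits kept gives a result of 128000 with 8 fractional bits, i.e. 128000 / 2^8 = 500, as expected.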




