
Messages from 31025

Article: 31025
Subject: Re: Shannon Capacity - An Apology
From: nemo@dtgnet.com (Nemo)
Date: Wed, 09 May 2001 19:03:13 GMT
By my calculations, I am afraid so.  

An egg-faced,
Nemo

"Kevin Neilson" <kevin_neilson@yahoo.com> wrote:

>Uh oh, does this leave Austin as the lone dissenter?
>
>
>"Nemo" <nemo@dtgnet.com> wrote in message
>news:3afa67dc.10129415@news.dtgnet.com...
>> I apologize to all those posting/reading the recent Shannon capacity thread.  I
>> was 100% wrong in my statements about parity bits.  Here is some background...
>>
>> There has been an on-going thread in this newsgroup about Shannon capacity.
>> The original question was about the following equation:
>>
>> C = W*Log2(1+S/N) bps
>>
>> Does the calculated capacity C include parity bits of a coded channel?
>>
>> I now believe the answer is *no, it does not*.   Throughout Shannon's paper, C
>> refers to the *information rate only*.   The above equation shows the
>> relationship between the amount of error-free *information* throughput,
>> bandwidth and S/N.   The information throughput does *not* include the added
>> bits due to whatever coding scheme is chosen.  In the following typical system,
>> the above equation shows the limit to the total info in and out for a channel of
>> a given bandwidth and S/N. The challenge of the engineer is to design the
>> encode/decode and mod/demod functions so as to achieve this limit.
>>
>> info in -> encoding -> modulation -> channel -> demod -> decode -> info out
>>
>> Again, in my previous posts I mis-stated how "parity" bits are considered in the
>> above equation/system.  I apologize for any confusion.
>>
>> Good Day to all.
>>
>> Nemo
>>
>


Article: 31026
Subject: Re: Shannon Capacity - An Apology
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 May 2001 12:05:22 -0700
Wow,

It amazes me how confused everyone is.

So if parity is not considered information, you don't need it.  OK, so I won't send
it.

It isn't part of the channel, so it never makes it to the other end.  Fine.

Never happened.

At the receive end, I have no parity bits.  Fine.  I'll just use the bits I have
received, and, well, are they good?  I don't know....I can't check them....I don't
have any parity bits....

Austin

Nemo wrote:

> I apologize to all those posting/reading the recent Shannon capacity thread.  I
> was 100% wrong in my statements about parity bits.  Here is some background...
>
> There has been an on-going thread in this news group about Shannon capacity.
> The original question was about the following equation:
>
> C = W*Log2(1+S/N) bps
>
> Does the calculated capacity C include parity bits of a coded channel?
>
> I now believe the answer is *no, it does not*.   Throughout Shannon's paper, C
> refers to the *information rate only*.   The above equation shows the
> relationship between the amount of error-free *information* throughput,
> bandwidth and S/N.   The information throughput does *not* include the added
> bits due to whatever coding scheme is chosen.  In the following typical system,
> the above equation shows the limit to the total info in and out for a channel of
> a given bandwidth and S/N. The challenge of the engineer is to design the
> encode/decode and mod/demod functions so as to achieve this limit.
>
> info in -> encoding -> modulation -> channel -> demod -> decode -> info out
>
> Again, in my previous posts I mis-stated how "parity" bits are considered in the
> above equation/system.  I apologize for any confusion.
>
> Good Day to all.
>
> Nemo


Article: 31027
Subject: Re: Virtex-2 - experiences ?
From: "Tim" <tim@rockylogic.com.spamtrap>
Date: Wed, 9 May 2001 20:39:18 +0100
Austin Lesea wrote in message <3AF9676B.6E94E35D@xilinx.com>...

>
>2V1000ES are here at the factory.  I don't know about the FF896, but I did
>evaluate one in the lab, and it is a really clean nice package (tons of
>connections to ground so it is really quiet for ground bounce -- love those
>flip chips!).

Austin

Where did you put the decoupling capacitors?

And have you ever seen an FF1517 package?  It sounds pretty scary :)






Article: 31028
Subject: Synplicity/Quicklogic choosing high drive input
From: Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de>
Date: 9 May 2001 20:14:26 GMT
Hello,

in a Verilog-based design with fixed I/O pins, Synplicity chooses a
dedicated input pin with high drive capability for an input with a
load of 24. However, in a constraints file I explicitly told it to use a
standard I/O pin, and when mapping the device with the QuickLogic
mapper, the two clash. I tried several hints, like
/*synthesis syn_noclockbuf = 1*/ in the verilog module, but
Synplicity warned me that the option was being ignored, and things still
didn't work.

Can anybody give a hint on how to stop Synplicity from choosing a
dedicated input pin for an input with a high load?

Thanks
-- 
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 31029
Subject: Re: Synplicity/Quicklogic choosing high drive input
From: Alan Nishioka <alann@accom.com>
Date: Wed, 09 May 2001 13:56:29 -0700
Uwe Bonnes wrote:

> I tried several hints. like
> /*synthesis syn_noclockbuf = 1*/ in the verilog module, but
> synplicity warned me about that options being ignored and things still
> didn't work.

Did you make sure to put the comment *before* the semicolon at the end of
the line?  If you don't, the option is ignored.

Alan Nishioka
alan@nishioka.com



Article: 31030
Subject: Re: Good VHDL/synthesis book
From: Jakab Tanko <jtanko@ics-ltd.com>
Date: Wed, 09 May 2001 17:38:51 -0400
"The Designer's Guide to VHDL" by Peter J. Ashenden

Su We wrote:

> Hello,
> I am looking for a VHDL/synthesis book.
> One similar to "Numerical methods in C"
>
> I looked at VHDL coding styles by Cohen, but I want to know if anyone had
> other suggestions
>
> SW


Article: 31031
Subject: Re: Shannon Capacity - An Apology
From: Davis Moore <dmoore@nospamieee.org>
Date: Wed, 09 May 2001 15:39:55 -0600
I disagree with your statements. I believe the answer lies clearly in the
equation itself.

C = W*Log2(1+S/N) bps

Note the units of the information rate are bits per second. I have
no idea how you can say that parity bits are factored out of
this equation. Are the parity bits not transmitted through the same bandwidth
limited channel as the bits that you are calling *information bits*?
Do they not take the same amount of time to be transmitted
through the channel? Are the parity bits not required to have
the same average signal power? Are the parity bits immune
to perturbations in the channel (noise power)?

It seems as though everyone is taking an abstract view of a theorem
that is based very strongly in physics.

It seems that everyone is creating a convention that encoding bits are not part
of the information rate - fine, if you want to do that, but the transmission
channel is still going to be filled by a maximum number of bits per second
governed by the Shannon-Hartley Theorem.

Bits per second is still bits per second. Anything you do with the information
after transmission is your interpretation of the information and is not
covered by the theorem.

A meaningful information data rate can be derived as
Cmeaningful = Ctotal - Cencoding.


Nemo wrote:

> I apologize to all those posting/reading the recent Shannon capacity thread.  I
> was 100% wrong in my statements about parity bits.  Here is some background...
>
> There has been an on-going thread in this news group about Shannon capacity.
> The original question was about the following equation:
>
> C = W*Log2(1+S/N) bps
>
> Does the calculated capacity C include parity bits of a coded channel?
>
> I now believe the answer is *no, it does not*.   Throughout Shannon's paper, C
> refers to the *information rate only*.   The above equation shows the
> relationship between the amount of error-free *information* throughput,
> bandwidth and S/N.   The information throughput does *not* include the added
> bits due to whatever coding scheme is chosen.  In the following typical system,
> the above equation shows the limit to the total info in and out for a channel of
> a given bandwidth and S/N. The challenge of the engineer is to design the
> encode/decode and mod/demod functions so as to achieve this limit.
>
> info in -> encoding -> modulation -> channel -> demod -> decode -> info out
>
> Again, in my previous posts I mis-stated how "parity" bits are considered in the
> above equation/system.  I apologize for any confusion.
>
> Good Day to all.
>
> Nemo
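As a sanity check on the numbers being argued over, the Shannon-Hartley formula in its standard form, C = W*log2(1 + S/N), can be evaluated directly. The bandwidth and S/N below are illustrative values chosen by the editor, not figures from the thread:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = W * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative numbers only: a 3 kHz channel at 30 dB S/N,
# roughly a voice-grade phone line.
W = 3000.0
snr = 10 ** (30 / 10)        # 30 dB -> linear power ratio of 1000
C = shannon_capacity(W, snr)
print(f"C = {C:.0f} bit/s")  # just under 30 kbit/s
```

Whatever one decides the bound applies to, the formula itself only takes bandwidth and S/N; the coding scheme never appears in it.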


Article: 31032
Subject: Xilinx : 1553 interface
From: "Stan Ramsden" <stan.ramsden@avnet.com>
Date: Wed, 9 May 2001 14:57:56 -0700
Has anyone done a 1553 serial bus interface that they would like to share??

Thanks,

Stan Ramsden

Article: 31033
Subject: Re: Virtex-2 - experiences ?
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 May 2001 15:13:16 -0700
Tim,

Good question.

We placed the bypass caps on the other side of the pcb, within 0.4" from the
outermost rings of pads for the package.  We used planes for ground, Vccint, and
Vcco's (sort of an octal slice for the eight banks - we needed to test each bank
separately), and one of the slices is for 3.3 V so it also goes to the Vccaux
pins.  0.01uF and 0.001uF caps were alternated for power/ground pin pairs.

We didn't have a lot of IO's (lab pcb), and it wasn't designed for running a
design that was bigger than about 40% of the part, so it wasn't as busy as it
could be.

In the larger pattern packages, you can lose the interior set of balls to pcb on
the backside for the ground pins (ground pins inside a ring of ground pins carry
no current at all -- obvious to electric field solver people, not so obvious to
people who think of DC currents as going through resistances) and insert the
Vccint bypass caps in this area.

FF1517 packages are for professional drivers only, on a closed course.

Or, another way of putting it: use IBIS modeling, floorplan your IOs, perform
the power analysis, design the heatsinking, use controlled-impedance trace
layout, make sure you adhere to the SSO guidelines, and leave some IOs
uncommitted in case they need to be used to provide additional grounds,
testpoints, or guard traces.

Of course, if you want to succeed the first time every time, you should take all
of the above steps for each design, regardless of package size.

Austin

Tim wrote:

> Austin Lesea wrote in message <3AF9676B.6E94E35D@xilinx.com>...
>
> >
> >2V1000ES are here at the factory.  I don't know about the FF896, but I did
> >evaluate one in the lab, and it is a really clean nice package (tons of
> >connections to ground so it is really quiet for ground bounce -- love those
> >flip chips!).
>
> Austin
>
> Where did you put the decoupling capacitors?
>
> And have you ever seen an FF1517 package?  It sounds pretty scary :)
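Austin's alternated 0.01uF/0.001uF bypass scheme spreads the capacitors' self-resonances across frequency, so the pair covers a wider band than either value alone. A rough sketch of why; the 1 nH ESL and 50 mohm ESR below are assumed, illustrative parasitics, not measured package data:

```python
import math

def cap_impedance(c_f, esl_h, esr_ohm, f_hz):
    """|Z| of a real capacitor modeled as series ESR + ESL + C."""
    w = 2 * math.pi * f_hz
    return math.hypot(esr_ohm, w * esl_h - 1.0 / (w * c_f))

ESL, ESR = 1e-9, 0.05  # assumed parasitics for a small ceramic cap

for c in (0.01e-6, 0.001e-6):
    f_res = 1.0 / (2 * math.pi * math.sqrt(ESL * c))  # self-resonant frequency
    print(f"{c * 1e9:4.0f} nF: resonates near {f_res / 1e6:5.1f} MHz, "
          f"|Z| there = {cap_impedance(c, ESL, ESR, f_res):.3f} ohm")
```

With these assumed parasitics the two values bottom out near 50 MHz and 160 MHz respectively, which is the point of alternating them on adjacent power/ground pin pairs.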


Article: 31034
Subject: Re: Shannon Capacity - An Apology
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 May 2001 15:34:34 -0700


Davis,

They are not listening.  I should not have fallen back in again.  You clearly
understand the definition of a channel.  They would rather you 'vote', or say
who is 'right' or 'wrong', than discuss the issues.

Shannon drew Venn diagrams for his proofs ("Communication in the Presence of Noise,"
1940), and was challenged because the proof was not "mathematical" enough.  I liked
the geometrical proofs as they were obvious.  Of course, that troubled the
mathematicians who wanted to have a lock on the knowledge, and did not like this idea
that anything "that complex" could be stated and then proved so elegantly.

Austin

Davis Moore wrote:

> I disagree with your statements. I believe the answer lies clearly in the
> equation itself.
>
> C = W*Log2(1+S/N) bps
>
> Note the units of the information rate is bits per second. I have
> no idea how you can say that parity bits are factored out of
> this equation. Are the parity bits not transmitted through the same bandwidth
> limited channel as the bits that you are calling *information bits*?

Hear hear!

>
> Do they not take the same amount of time to be transmitted
> through the channel? Are the parity bits not required to have
> the same average signal power?

Agreed!

> Are the parity bits immune
> to perturbations in the channel (noise power)?

Nope!  They are not.

>
> It seems as though everyone is taking an abstract view of a theorem
> that is based very strongly in physics.
>
> It seems that everyone is creating a convention that encoding bits are not part
> of the information rate - fine, if you want to do that, but the transmission
> channel is still going to be filled by a maximum number of bits per second
> governed by the Shannon-Hartley Theorem.
>
> Bits per second is still bits per second. Anything you do with the information
> after transmission is your interpretation of the information and is not
> covered by the theorem.
>
> A meaningful information data rate can be derived as
> Cmeaningful = Ctotal - Cencoding.

More appropriately, C(only what I really wanted) = Ctotal - Cencoding

I would argue that Cmeaningful = Ctotal

Otherwise I can't get Cmeaningful without also having Cencoding! QED, Shannon.

>
>
> Nemo wrote:
>
> > I apologize to all those posting/reading the recent Shannon capacity thread.  I
> > was 100% wrong in my statements about parity bits.  Here is some background...
> >
> > There has been an on-going thread in this news group about Shannon capacity.
> > The original question was about the following equation:
> >
> > C = W*Log2(1+S/N) bps
> >
> > Does the calculated capacity C include parity bits of a coded channel?
> >
> > I now believe the answer is *no, it does not*.   Throughout Shannon's paper, C
> > refers to the *information rate only*.   The above equation shows the
> > relationship between the amount of error-free *information* throughput,
> > bandwidth and S/N.   The information throughput does *not* include the added
> > bits due to whatever coding scheme is chosen.  In the following typical system,
> > the above equation shows the limit to the total info in and out for a channel of
> > a given bandwidth and S/N. The challenge of the engineer is to design the
> > encode/decode and mod/demod functions so as to achieve this limit.
> >
> > info in -> encoding -> modulation -> channel -> demod -> decode -> info out
> >
> > Again, in my previous posts I mis-stated how "parity" bits are considered in the
> > above equation/system.  I apologize for any confusion.
> >
> > Good Day to all.
> >
> > Nemo



Article: 31035
Subject: Nallatech Products
From: "Dave Feustel" <dfeustel@mindspring.com>
Date: Wed, 9 May 2001 17:38:57 -0500
Since I have to ask what the price is,
I probably can't afford it, but what the heck:

What is the price range of Nallatech products
like Benblue and the DIME-II cPCI carrier?
( http://www.nallatech.com )



Article: 31036
Subject: Re: Virtex-2 - experiences ?
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Wed, 09 May 2001 23:42:23 +0100


Austin Lesea wrote:
> 
> Rick,
> 
> 2V1000ES are here at the factory.  I don't know about the FF896, but I did
> evaluate one in the lab, and it is a really clean nice package (tons of
> connections to ground so it is really quiet for ground bounce -- love those
> flip chips!).  It is -5, even though ES is not "graded".  The -5 for 2V is
> faster than the -8 for Virtex E.  As an example, in Virtex E a design with
> 155 MHz global clocks is getting agressive, and constraints may get tough
> to meet in a large design.  In Virtex II, we see the same at 311 MHz.  That
> isn't to say that with careful tweaking and floor planning you can not do
> better in either.
> 
> By the way, there are 2V40's in fg256 as ES around, too.
> 
> Get to a disti or FAE.
> 
> Austin
> 
> 

Sounds to me like it will be worth struggling pretty hard to get some of
these parts.
I really need the FF896 since I need a lot of IOs. The IO/CLB balance of
the Virtex-E is closer to our general need but if the speed equation
extends to V2-4 <=> V-E-7 then I'll gladly pay the extra price.

I'm using 2.1i at the moment, so do you know if it's possible to get 3.3i
on a short-term eval so I can do what Kolja did and re-target our existing
XCV400E/600E design?

Article: 31037
Subject: Re: Virtex-2 - experiences ?
From: Vikram Pasham <Vikram.Pasham@xilinx.com>
Date: Wed, 09 May 2001 16:36:42 -0700
Rick,

You can get a free eval version of 3.3i  from
http://support.xilinx.com/products/software/ise_eval.htm


-Vikram
Xilinx Applications

Rick Filipkiewicz wrote:

>
> Sounds to me like it will be worth struggling pretty hard to get some of
> these parts.
> I really need the FF896 since I need a lot of IOs. The IO/CLB balance of
> the Virtex-E is closer to our general need but if the speed equation
> extends to V2-4 <=> V-E-7 then I'll gladly pay the extra price.
>
> I'm using 2.1i at the moment so do you know if its possible to get 3.3i
> on a short term eval so I can do what Kolja did & re-target our existing
> XCV400E/600E design.


Article: 31038
Subject: Re: Virtex-2 - experiences ?
From: "Tim" <tim@rockylogic.com.spamtrap>
Date: Thu, 10 May 2001 00:47:35 +0100

Austin Lesea wrote in message <3AF9C0FC.E2976FF4@xilinx.com>...

>We placed the bypass caps on the other side of the pcb, within 0.4" from the
>outermost rings of pads for the package.  We used planes for ground, Vccint, and
>Vcco's (sort of an octal slice for the eight banks - we needed to test each bank
>separately), and one of the slices is for 3.3 V so it also goes to the Vccaux
>pins.  0.01uF and 0.001uF caps were alternated for power/ground pin pairs.

I guess the 0508 caps (or 0306?) could be useful here.  They seem to have
much better performance than 0805/0603 types.  Are they freely available
in distribution?  I could not see them in Digi-Key, but maybe I didn't
look in the right place.





Article: 31039
Subject: Re: Virtex-2 - experiences ?
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Thu, 10 May 2001 00:59:47 +0100


Austin Lesea wrote:
> 
> Tim,
> 
> 

<snip>

> 
> In the larger pattern packages, you can lose the interior set of balls to pcb on
> the backside for the ground pins (ground pins inside a ring of ground pins carry
> no current at all -- obvious to electric field solver people, not so obvious to
> people who think of DC currents as going through resistances) and insert the
> Vccint bypass caps in this area.
> 

A1 right. The inner ones are just a component of what I call h/w voodoo.
However, if that's the case and you know it, then how come the inner GND
balls are there?  Could you maybe arrange a demonstration for your
packaging folks by scraping off the inner balls on the package and the
PCB pads and then stuffing the device with the noisiest logic you can
manage?

> FF1517 packages are for professional drivers only, on a closed course.
> 

They'd have to be, given the cost of the devices that live inside the
FF1517.

BTW Austin, I've been following the ``debate'' on Shannon's Thm and you are
absolutely right.
I remember reading it a long time ago and realising quite quickly that
while on the surface it seems easy [not the proof but the statement], in
fact it's a very subtle thing. But then it's all about Entropy, and people
have been confused & confusing about that since Carnot.

Article: 31040
Subject: Re: Synplicity/Quicklogic choosing high drive input
From: Ken McElvain <ken@synplicity.com>
Date: Wed, 09 May 2001 18:50:16 -0700
input foo /* synthesis qln_padtype="normal" */;
See the help file for examples.

Uwe Bonnes wrote:
> 
> Hallo,
> 
> in a verilog base design with fixed I/O pins, synplicity chooses a
> dedicated input pin with high drive capability for an input with a
> load of 24. However in a constraints file I explicitly told to use a
> standard IO pin and when mapping the device with the quicklogic
> mapper, things clash. I tried several hints. like
> /*synthesis syn_noclockbuf = 1*/ in the verilog module, but
> synplicity warned me about that options being ignored and things still
> didn't work.
> 
> Can anybody give a hint how to stop synplicity to choose a dedicated
> input pin for an input with high load?
> 
> Thanks
> --
> Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
> 
> Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
> --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

-- 
Ken McElvain, CTO
Synplicity Inc.
(408)215-6060

Article: 31041
Subject: Re: Synplicity/Quicklogic choosing high drive input
From: Ken McElvain <ken@synplicity.com>
Date: Wed, 09 May 2001 18:51:49 -0700
One more try (typo):

input foo /* synthesis ql_padtype="normal" */;
See the help file for examples.

Ken McElvain wrote:
> 
> input foo /* synthesis qln_padtype="normal" */;
> See the help file for examples.
> 
> Uwe Bonnes wrote:
> >
> > Hallo,
> >
> > in a verilog base design with fixed I/O pins, synplicity chooses a
> > dedicated input pin with high drive capability for an input with a
> > load of 24. However in a constraints file I explicitly told to use a
> > standard IO pin and when mapping the device with the quicklogic
> > mapper, things clash. I tried several hints. like
> > /*synthesis syn_noclockbuf = 1*/ in the verilog module, but
> > synplicity warned me about that options being ignored and things still
> > didn't work.
> >
> > Can anybody give a hint how to stop synplicity to choose a dedicated
> > input pin for an input with high load?
> >
> > Thanks
> > --
> > Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
> >
> > Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
> > --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------
> 
> --
> Ken McElvain, CTO
> Synplicity Inc.
> (408)215-6060

-- 
Ken McElvain, CTO
Synplicity Inc.
(408)215-6060

Article: 31043
Subject: Re: Xilinx Constraints Editor ?
From: John_H <johnhandwork@mail.com>
Date: Wed, 9 May 2001 19:28:13 -0700
Sometimes the constraints editor leaves me a little flat, so I've generally stuck
to editing the constraints file by hand.  The OFFSET IN AFTER and OFFSET OUT
BEFORE syntaxes are completely valid (according to the nice little PowerPoint
presentation on 3.1i Constraints I have sitting around).

Check out the page from the online software doc:

http://toolbox.xilinx.com/docsan/3_1i/data/common/dev/chap06/dev06006.htm

So if the editor can't get you where you want to be, just open it up with a text
editor and make your own changes.  You should still be able to edit things fine
afterwards.

Article: 31044
Subject: Re: Virtex-2 - experiences ?
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 May 2001 21:38:50 -0700
Don't know about the availability of those cap types,

I would be interested if there is still a shortage of discretes out there, or if
they solved that problem.

Austin

Tim wrote:

> Austin Lesea wrote in message <3AF9C0FC.E2976FF4@xilinx.com>...
>
> >We placed the bypass caps on the other side of the pcb, within 0.4" from the
> >outermost rings of pads for the package.  We used planes for ground, Vccint, and
> >Vcco's (sort of an octal slice for the eight banks - we needed to test each bank
> >separately), and one of the slices is for 3.3 V so it also goes to the Vccaux
> >pins.  0.01uF and 0.001uF caps were alternated for power/ground pin pairs.
>
> I guess the 0508 caps (or 0306?) could be useful here.  They seem to have
> much better performance than 0805/0603 types.  Are they freely available
> in distribution?  I could not see them in Digi-Key, but maybe I didn't
> look in the right place.


Article: 31045
Subject: Re: Shannon Capacity - An Apology
From: Bob Perlman <bob@cambriandesign.com>
Date: Thu, 10 May 2001 04:43:53 GMT
On Wed, 09 May 2001 12:05:22 -0700, Austin Lesea
<austin.lesea@xilinx.com> wrote:

>Wow,
>
>It amazes me how confused everyone is.
>
>So if parity is not considered information, you don't need it.  OK, so I won't send
>it.
>
>It isn't part of the channel, so it never makes it to the other end.  Fine.
>
>Never happened.
>
>At the receive end, I have no parity bits.  Fine.  I'll just use the bits I have
>received, and, well, are they good?  I don't know....I can't check them....I don't
>have any parity bits....

I've been home with the flu for 5 days.  Having lost all will to live, I
thought I'd jump into this thread.  I think that including the
parity/ECC bits in the max symbol rate is wrong, and leads to a
logical contradiction.

Let's suppose I have a link on which I'm transmitting B symbols per
second.  Unfortunately, the bandwidth is too narrow or the S/N too
small to support error-free transmission.  So I start converting some
of the bits to ECC, until the bit error rate reaches an acceptable
level.  At this point I'm still transmitting B symbols per second, but
only X of them are my message, the other E of them are ECC.  (B = X +
E)

At this point I say, hey, all of those symbols are information of some
sort, so B must be less than or equal to Shannon's limit.  (If I've
used a good coding scheme, it's not too far away from that limit; I
don't know what state of the art is for trellis coded modulation or
turbo codes, but it's pretty good.)  If B really *is* the Shannon
limit, I should be able to convert those ECC symbols back into message
symbols.  But I've already proved that I can't.  So Shannon must be
telling us that the number of *message* symbols is what's being predicted
by his equation.  Otherwise, Shannon's limit tells us nothing about
maximum message rate on a channel, because we may have to add lots of
ECC bits to get the message through.  In other words, Shannon's limit
either applies to message bits only, or it's worthless.
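The B = X + E split above can be put next to the Shannon-Hartley formula in a few lines of Python.  This is only a sketch; the bandwidth, SNR, and code-rate numbers are made up for illustration, and `shannon_capacity` is a hypothetical helper name:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley limit: maximum error-free *message* bit rate
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical link: 1 MHz bandwidth, linear SNR of 15 (about 11.8 dB)
C = shannon_capacity(1e6, 15)  # 4e6 bits/s

# Raw symbol rate B, split into message (X) and ECC (E) as in the post
B = 5e6               # raw bits/s pushed through the channel
code_rate = 0.8       # fraction that is message: X = code_rate * B
X = code_rate * B     # 4e6 message bits/s
E = B - X             # 1e6 ECC bits/s

# The message rate X can approach C, but the raw rate B exceeds it:
assert X <= C < B
```

With a good enough code the message rate X approaches C from below, while B sits above the limit -- which is exactly why counting the ECC symbols against C leads to the contradiction described above.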

The problem with the arguments being used against this is the
conflation of "useful" and "information."  ECC bits are useful; they
are not information.

Call me wrong, call me names, whatever.  You can't make me feel any
worse.

Bob Perlman 

Article: 31046
Subject: Re: Virtex-2 - experiences ?
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 May 2001 21:46:04 -0700
Links: << >>  << T >>  << A >>


Aaaaah!

Shannon found me here on this thread!  Entropy lives!

Just kidding, thanks.

The balls in the center pattern can carry off some significant heat, so having a
pattern to solder them to, that isn't even electrically connected, takes away heat
(really! or so the packaging folks tell me -- I am thermally challenged so don't ask
me any heat questions...).

Well, the 1517 and 1132 packages certainly provide a lot of capability.

I was on a call with a customer, and I asked why they had five asynchronous clocks.
He said, "well, it is five separate designs.."  I asked him why he had five completely
unrelated designs in the chip.  He replied, "cause it fits."

He further commented, "I don't know about 'system on a chip' but you enable us to make
'systems' on a chip."

Austin



Rick Filipkiewicz wrote:

> Austin Lesea wrote:
> >
> > Tim,
> >
> >
>
> <snip>
>
> >
> > In the larger pattern packages, you can lose the interior set of balls to pcb on
> > the backside for the ground pins (ground pins inside a ring of ground pins carry
> > no current at all -- obvious to electric field solver people, not so obvious to
> > people who think of DC currents as going through resistances) and insert the
> > Vccint bypass caps in this area.
> >
>
> All right. The inner ones are just a component of what I call h/w voodoo.
> However if that's the case & you know it then how come the inner GND
> balls are there ? Could you maybe arrange a demonstration to your
> packaging folks by scraping off the inner balls on the package and the
> PCB pads and then stuffing the device with the noisiest logic you can
> manage.
>
> > FF1517 packages are for professional drivers only, on a closed course.
> >
>
> They'd have to be, given the cost of the devices that live inside the
> FF1517.
>
> BTW Austin I've been following the ``debate'' on Shannon's Thm & you are
> absolutely right.
> I remember reading it a long time ago & realising quite quickly that on
> the surface it seems easy [not the proof but the statement] in fact its
> a very subtle thing. But then its all about Entropy & people have been
> confused & confusing about that since Carnot.



Article: 31047
Subject: Re: Virtex-2 - experiences ?
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 May 2001 21:47:34 -0700
Links: << >>  << T >>  << A >>
Rick,

I would call your local FAE.

The SSO guidelines reflect the difference between the fg256 and the other
packages.

Austin

Rick Filipkiewicz wrote:

> Austin Lesea wrote:
> >
> > Rick,
> >
> > 2V1000ES are here at the factory.  I don't know about the FF896, but I did
> > evaluate one in the lab, and it is a really clean nice package (tons of
> > connections to ground so it is really quiet for ground bounce -- love those
> > flip chips!).  It is -5, even though ES is not "graded".  The -5 for 2V is
> > faster than the -8 for Virtex E.  As an example, in Virtex E a design with
> > 155 MHz global clocks is getting agressive, and constraints may get tough
> > to meet in a large design.  In Virtex II, we see the same at 311 MHz.  That
> > isn't to say that with careful tweaking and floor planning you can not do
> > better in either.
> >
> > By the way, there are 2V40's in fg256 as ES around, too.
> >
> > Get to a disti or FAE.
> >
> > Austin
> >
> >
>
> Sounds to me like it will be worth struggling pretty hard to get some of
> these parts.
> I really need the FF896 since I need a lot of IOs. The IO/CLB balance of
> the Virtex-E is closer to our general need but if the speed equation
> extends to V2-4 <=> V-E-7 then I'll gladly pay the extra price.
>
> I'm using 2.1i at the moment so do you know if its possible to get 3.3i
> on a short term eval so I can do what Kolja did & re-target our existing
> XCV400E/600E design.


Article: 31048
Subject: Re: Good VHDL/synthesis book
From: cyber_spook <pjc@cyberspook.freeserve.co.uk>
Date: Thu, 10 May 2001 06:08:40 +0100
Links: << >>  << T >>  << A >>
Very good !!

cyber_spook

Jakab Tanko wrote:

> "The Designer's Guide to VHDL" by Peter J. Ashenden
>
> Su We wrote:
>
> > Hello,
> > I am looking for a VHDL/synthesis book.
> > One similar to "Numerical methods in C"
> >
> > I looked at VHDL coding styles by Cohen, but I want to know if anyone had
> > other suggestions
> >
> > SW


Article: 31049
Subject: Re: Shannon Capacity
From: Muzaffer Kal <muzaffer@dspia.com>
Date: Thu, 10 May 2001 07:51:57 GMT
Links: << >>  << T >>  << A >>
On Mon, 07 May 2001 18:46:20 -0700, Vikram Pasham
<Vikram.Pasham@xilinx.com> wrote:

>Berni & all,
>
>One of Shannon's paper "A Mathematical Theory of Communication" can be found on
>the web at
>http://cm.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf
>
>As per my understanding, "C" in Shanon's equation includes information +
>parity.

Finally I read this paper and I'd like to submit my 2 cents.  Please look
at figure 8.  It talks about a transmitter, channel, receiver and
correction data.  Theorem 10 talks about the capacity of a correction
channel.  The idea is that if the correction channel has capacity Hy(x),
then all the capacity of the normal channel (C) can be utilized,
because the correction channel allows all errors to be corrected.  So on
page 22, the channel capacity is defined as C = max(H(x) - Hy(x)).  In
other words, if the correction channel is used, the full capacity of
the original channel is utilized.  If correction data is transmitted in
the channel itself, the capacity of the channel drops, so the capacity
of a real channel doesn't include the correction data.
Comments please.

Muzaffer

FPGA DSP Consulting
http://www.dspia.com


