


# Messages from 23700

Article: 23700
Subject: Re: VHDL code for LFSR
From: Colin Marquardt <colin.marquardt@usa.alcatel.com>
Date: 05 Jul 2000 15:52:02 -0700
Links: << >>  << T >>  << A >>
* Srinivasan Venkataramanan  writes:

>> process. The variable "feedback" is assigned a value inside the if
>> clock'EVENT statement which seems to me will generate an additional,
>> unwanted FF. The only FFs in the design should be LFSR(0:3). Am I right
>> or is this code fine?
>>

>   I think the code is JUST FINE :-) Since "feedback" is a VARIABLE and
> not a SIGNAL, it shouldn't give you an extra FF.

And to make it clear for newbies: the variable does not give a
register in this case because it is assigned a value before it
is read. The other way round (read before it is assigned), it
would have to become a FF, because it would have to remember the
old value for you to read.

In short, variables are only *candidates* for FFs; signals assigned
in a clocked process always *are* FFs.
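To see this concretely, here is a quick Python model of the process (just the read/write ordering, not what any synthesizer actually does): "feedback" is a plain local that is written before it is read, so nothing has to survive between clock edges.

```python
def lfsr_step(lfsr):
    """One rising clock edge of the LFSR4_9 process.

    'feedback' is a plain local: it is written before it is read,
    so no value survives between calls -- the software analogue of
    a variable that needs no flip-flop.
    """
    feedback = lfsr[3]            # feedback := LFSR(3);
    return [feedback,             # LFSR(0) <= feedback;
            lfsr[0] ^ feedback,   # LFSR(1) <= LFSR(0) xor feedback;
            lfsr[1],              # LFSR(2) <= LFSR(1);
            lfsr[2]]              # LFSR(3) <= LFSR(2);

state = [1, 0, 0, 0]              # any non-zero seed
seen = []
while True:
    seen.append(tuple(state))
    state = lfsr_step(state)
    if tuple(state) == seen[0]:
        break
print(len(seen))                  # 15 -- maximal length for 4 bits
```

Note the right-hand side of the returned list reads only the old state, mimicking the simultaneous update of the four signal assignments.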

Colin

Article: 23701
Subject: Re: VHDL code for LFSR
From: Jan Decaluwe <jand@easics.be>
Date: Wed, 05 Jul 2000 23:00:27 GMT
Rickman wrote:
>
> The variable "feedback" is assigned a value inside the if
> clock'EVENT statement which seems to me will generate an additional,
> unwanted FF. The only FFs in the design should be LFSR(0:3). Am I right
> or is this code fine?
>
>   process(clock)
>     variable feedback: bit;
>   begin
>     if clock'EVENT and clock='1' then
>       feedback := LFSR(3);
>       LFSR(0) <= feedback;
>       LFSR(1) <= LFSR(0) xor feedback;
>       LFSR(2) <= LFSR(1);
>       LFSR(3) <= LFSR(2);
>     end if;
>     q <= LFSR;
>   end process;
> end;

The variable "feedback" doesn't require a FF because it
is always assigned before it is used in the process.
However, the assignment to signal q in the process is
questionable (note that LFSR is not in the sensitivity list)
and would typically be placed outside the process as
a concurrent signal assignment.
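Why it is questionable can be sketched with a loose event-driven model in Python (deliberately ignoring delta cycles): the process wakes on *every* clock event, `q <= LFSR` reads the pre-edge value on the rising edge, and q only catches up when the process re-runs on the falling edge.

```python
def lfsr_next(lfsr):
    fb = lfsr[3]
    return [fb, lfsr[0] ^ fb, lfsr[1], lfsr[2]]

lfsr = [1, 0, 0, 0]               # seed
history = []

for edge in ["rise", "fall", "rise", "fall"]:
    # The process runs once per clock event (clock is the only
    # signal in the sensitivity list).
    if edge == "rise":
        new_lfsr = lfsr_next(lfsr)   # register update, scheduled
    else:
        new_lfsr = lfsr              # no register update on falling edge
    q = lfsr                         # q <= LFSR reads the OLD value
    lfsr = new_lfsr                  # signal updates after the process suspends
    history.append((edge, tuple(q), tuple(lfsr)))

print(history[0][1])   # (1, 0, 0, 0): q still holds the seed after the rising edge
print(history[1][1])   # (0, 1, 0, 0): q catches up half a cycle late
```

So q trails LFSR by half a clock in simulation, while a synthesizer will typically just wire q to LFSR; the concurrent assignment outside the process removes the mismatch.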

Regards, Jan

--
Jan Decaluwe	       Easics
Design Manager         System-on-Chip design services
+32-16-395 600	       Interleuvenlaan 86, B-3001 Leuven, Belgium
mailto:jand@easics.be  http://www.easics.com

Article: 23702
Subject: Re: How to augment the output of a Xilinx lfsr in verilog??
From: Ray Andraka <ray@andraka.com>
Date: Wed, 05 Jul 2000 23:17:53 GMT
All you need to do is logically AND all but the last bit (the one farthest
from the shift register input) and use that to invert the sense of the
normal feedback.  That inserts the all-'1's state between the state that is
all '1's except the last bit and the state that is all '1's except the
first bit.  For example, one implementation of a 4 bit LFSR has XNOR
feedback from bits 2 and 3 to generate a 15 state sequence:
0000
0001
0011
0111
**** (the 1111 state is inserted here by the modification)
1110
1101
1011
0110
1100
1001
0010
0101
1010
0100
1000
0000

Modifying the feedback from !(X3^X2) to !(X3^X2)^(X2*X1*X0) will cause the
shift input to be inverted when the current state is 0111 or 1111.  This
inserts the 1111 state and provides recovery from the otherwise illegal
(lock-up) 1111 state.  It works for arbitrarily sized LFSRs, but does
require a decode of all the bits.  If you can accept more area, you can
make the decode fast by using a second shift register that is cleared when
the input is '0' and shifts a '1' in otherwise.  When the far end reaches
'1', the primary shift register is all '1's and you have avoided a wide
AND.  For larger LFSRs, that can more than double the clock rate compared
to a modified LFSR with the wide AND.
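If you want to convince yourself, this little Python model (assuming a left shift with bit 0 as the shift input, bit 3 shifting out) checks both the original 15-state sequence and the 16-state modified one:

```python
def step(x, modified=False):
    """4-bit left-shift LFSR with XNOR feedback from bits 3 and 2."""
    x3, x2, x1, x0 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    fb = 1 ^ x3 ^ x2              # XNOR feedback: !(X3 ^ X2)
    if modified:
        fb ^= x2 & x1 & x0        # invert the shift input on 0111 and 1111
    return ((x << 1) | fb) & 0xF

def cycle(modified):
    """Follow the sequence from 0000 until it repeats."""
    x, seen = 0, []
    while True:
        seen.append(x)
        x = step(x, modified)
        if x == seen[0]:
            return seen

assert len(cycle(False)) == 15    # 1111 is the lock-up state, never visited
assert len(cycle(True)) == 16     # all 16 states, 1111 included
```

The modified version also recovers from 1111 (it steps to 1110) instead of sticking there.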

Rickman wrote:

> I don't think you need to stop the LFSR to do this, but I am pretty sure
> you will need to add a N bit wide decoder as others have suggested. If
> you use a sequence that never loads all FFs with a zero as a starting
> point, you can modify the circuit to include an extra zero by using the
> all zero FF state.
>
> Perhaps the easiest way to do that is to detect the state of all zeros
> ignoring the last bit (the one shifting out). Then feed this signal as
> another input to the XOR gate driving the input to the shift register.
> The only two states that will activate this signal are the "normal" state
> of all zeros with the last bit a one, and the "abnormal" or new state of
> all zeros. When the "normal" state comes up in the FFs, the detect
> circuit will invert the output of the XOR gate which should have been a
> one. Now the FFs will be loaded with all zeros and the next input will
> be a one which will generate the "normal" state of a one and the rest
> zeros which would have followed next in the original circuit.
>
> I won't attempt ascii art of this since it is very difficult and should
> not be attempted by the faint of heart (me!)
>
> Actually, I think what I just wrote was stated previously by Peter
> Alfke, and a bit more succinctly. Is there a reason that you didn't like
> this approach?
>
> est0@lehigh.edu wrote:
> >
> > Actually, what I need to do is quite simple. For reasons that I won't
> > go into here, we need a sequence with an even number of bits. I need
> > 4096 rather than 4095 bits. So, what we set up to do in our
> > preliminary simulation work is to take the sequence that we get from
> > our Gold code generator, which exclusive-ORs two length-12 maximal
> > LFSRs to get 1 of many 4095 bit long Gold codes. (The exact sequence
> > we get is determined by the sequence that we load into one of the two
> > maximal length sequence generators.) Then, no matter what the specific
> > 4095 bit pattern is, we want to add a 4096th bit, which we want to be
> > a 0. (We could have chosen a 1, but we chose a 0 and based our
> > reception algorithm on that, so I want to produce that sequence.) In
> > other words, if we had a 4 stage generator, and the 15 output bits
> > were 000100110011110, we now want the output to be 0001001100111100,
> > and we want this to repeat until we tell the system to use a different
> > code.
> >
> > It may be that I can't do what I want to do without adding some
> > additional gates and stopping the PN generator for one bit, but I am
> > hoping that I can do it without having to do that.
> >
> > TIA,
> > Ed
> >
> > On Tue, 04 Jul 2000 04:25:25 GMT, Peter Alfke <palfke@earthlink.net>
> > wrote:
> >
> > >If, for whatever strange reason, you want to lengthen the sequence by its
> > >one missing count, you have no alternative but to have a wide AND gate that
> > >detects the state where all but the rightmost bit are zeros, and then,
> > >during this 2-bit event, inverts (XORs) the feedback, so that it includes
> > >the all-zero state.
> > >(I prefer to exclude the all-ones state, since Xilinx FPGAs naturally reset
> > >to zero, but this may be irrelevant nowadays).
> > >So, the cost is a wide parallel gate, which you, of course, can emulate
> > >with a sequential state machine, if you prefer.
> > >But again: why all this?
> > >
> > >Peter Alfke
> > >==========================================================
> > >Hal Murray wrote:
> > >
> > >> > almost does what I want. However, it, like all lsfr's, puts out 2^n-1
> > >> > states before it repeats. I need to augment or stall that output so
> > >> > that I add a 0 to the end of every sequence, so as to create sequences
> > >> > with a length of 2^n. I see all sorts of mention of how easy that is
> > >> > to do, but I can't figure it out, and nowhere is it explained. Does
> > >> > anyone know how to do what I want to do?
> > >>
> > >> What are you really trying to do?
> > >>
> > >> Note that the LFSR type circuits generate 1 bit at a time, not
> > >> a sequence of n bit wide words.
> > >>
> > >> It's pretty hard to distinguish the output of an LFSR from
> > >> the corresponding system that does include the all-0s state.
> > >> (It's just a single 0 bit in the output sequence.)
> > >>
> > >> If you are worried about the missing 0 unbalancing your
> > >> statistics, the simple fix is probably to use a bigger LFSR.
> > >>
> > >> If you want the all-zero word output, your first problem is
> > >> to get a clean sequence of words.  I think the output of an
> > >> LFSR is good if you step it N cycles to get an N bit word.
> > >> If you can't wait that long, you can use independent LFSRs
> > >> for each bit.  (You need to make sure they don't run in
> > >> lock step, perhaps by making them different lengths.)
> > >>
> > >> Using the bottom N bits of an N+k bit LFSR clocked N bits
> > >> between samples will give you (2^k)-1 all 0 words compared
> > >> to (2^k) samples of all other values.
> > >>
> > >> If it helps, you can turn things upside down with an inverter
> > >> in the right place and make the all 1s state the missing one.
> > >> --
> > >> These are my opinions, not necessarily my employers.  I hate spam.
>
> --
>
> Rick Collins
>
> rick.collins@XYarius.com
>
> removed.
>
> Arius - A Signal Processing Solutions Company
> Specializing in DSP and FPGA design
>
> Arius
> 4 King Ave
> Frederick, MD 21701-3110
> 301-682-7772 Voice
> 301-682-7666 FAX
>
> Internet URL http://www.arius.com
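Hal Murray's 2^k figures a few levels up can be checked by brute force. A sketch with assumed parameters N = 4, k = 3, using a 7-bit maximal LFSR (taps 7 and 6, from the standard tap tables):

```python
from collections import Counter

def step7(s):
    """7-bit Fibonacci LFSR, taps at bits 7 and 6 (maximal length, period 127)."""
    nb = ((s >> 6) ^ (s >> 5)) & 1
    return ((s << 1) | nb) & 0x7F

s = 1
counts = Counter()
for _ in range(127):          # one full set of samples: gcd(4, 127) == 1,
    counts[s & 0xF] += 1      # so every one of the 127 states is sampled once
    for _ in range(4):        # clock N = 4 bits between samples
        s = step7(s)

assert counts[0] == 7         # (2**k) - 1 = 7 all-zero words
assert all(counts[w] == 8 for w in range(1, 16))   # 2**k = 8 of everything else
```

The bias is exactly the one missing all-zero state of the big LFSR, spread over the 2^k top-bit combinations.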

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com  or http://www.fpga-guru.com


Article: 23703
Subject: Re: Powering XCV300
Date: Wed, 05 Jul 2000 23:44:21 GMT
Micrel, TI and others now have linear regulators with really low dropout
(400 mV). They're designed to regulate the 3.3V supply down to 2.5, 1.8 or
1.3V for the new DSPs and FPGAs.

--
Pete Dudley

Arroyo Grande Systems

"Ben" <ejhong@future.co.kr> wrote in message
news:JKf75.1279\$HC5.19157@news2.bora.net...
> I built up a PCI card with a Virtex, but the card doesn't seem to be very
> robust in operation.
> When I put the card under a massive test operation, the card often goes
> into failure after several hours of working, and sometimes after several
> minutes.
>
> From some experiments with a controllable power supply, I came to have a
> mere conjecture that the 2.5V supply from a linear regulator (LT1076)
> is not really tracking fast enough to meet the changes in the Virtex'
> current consumption. When the 2.5V power was supplied from the
> controllable power supply, the card kept working over night.
>
> I hope I can get a tip on building a stable 2.5V supply for the Virtex
> from the 3.3V power supply. I used to use the 5V supply for the card, but
> the 5V supply has less current capacity than the 3.3V supply in my system,
> so I think I need to change it. Do you know a 3.3V-to-2.5V low-dropout
> regulator that works well with the Virtex 300?
>
>


Article: 23704
Subject: Re: Virtex DLL deskew of board clock with a clock/2
From: Utku Ozcan <ozcan@netas.com.tr>
Date: Thu, 06 Jul 2000 03:20:14 +0300
korthner@my-deja.com wrote:

> Hi, Utku.
>
> I may have an idea for you.  If it's a really bad one, then I hope
> somebody else will shoot me down before you waste a lot of time trying
> it.
>
> First, I have a question, though. If your input clock is a frequency
> 'f', and your output clock to all your chips are at frequency 'f', why
> would you have a restriction that the feedback clock be at frequency
> 'f/2'?

No, the output clock fed to FPGAs has the frequency of "f/2", not "f".

I'm just looking at your solution.

Utku

--
I feel better than James Brown.


Article: 23705
Subject: Re: Viewlogic schematic from Synplify edif output?
From: Ray Andraka <ray@andraka.com>
Date: Thu, 06 Jul 2000 00:30:09 GMT
Eh,

Andy Peters wrote:

> Rickman wrote in message <395ED46E.D072C66A@yahoo.com>...
>
> >Now if I can just get them to let me enter a single equation for the LUT
> >instead of having to calculate the hex contents myself.
>
> FPGA Editor?
>
> --
> -----------------------------------------
> Andy Peters
> Sr Electrical Engineer
> National Optical Astronomy Observatories
> 950 N Cherry Ave
> Tucson, AZ 85719
> apeters (at) noao \dot\ edu
>
> "A sufficiently advanced technology is indistinguishable from magic"
>      --Arthur C. Clarke

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com  or http://www.fpga-guru.com


Article: 23706
Subject: Re: VHDL code for LFSR
From: Ray Andraka <ray@andraka.com>
Date: Thu, 06 Jul 2000 00:32:49 GMT
Nope, that is OK.  "feedback" is a variable, not a signal, so it takes its
assignment immediately.  Since it is assigned before the register
assignments read it, it is not a flip-flop.  If the assignment came after
the use, it would infer a flip-flop.

Rickman wrote:

> I was reviewing the LFSR Testbench by Jean Nicolle and I am not sure,
> but I thought there might be a bug in some of the VHDL code it
> generates.
>
> The program seems to be very flexible. It allows you to design any of
> several forms of LFSRs and will show you a schematic of the design
> generated as well as code in AHDL, VHDL and Verilog. VHDL is the only
> one I am familiar with and I thought there might be a bug in the clocked
> process. The variable "feedback" is assigned a value inside the if
> clock'EVENT statement which seems to me will generate an additional,
> unwanted FF. The only FFs in the design should be LFSR(0:3). Am I right
> or is this code fine?
>
> You can find the program at http://www.jps.net/kyunghi/LFSR/. Check it
> out!
>
> entity LFSR4_9 is
>   port(clock: in bit;
>        q    : out bit_vector(3 downto 0));
> end;
>
> architecture RTL of LFSR4_9 is
>   signal LFSR: bit_vector(3 downto 0);
> begin
>   process(clock)
>     variable feedback: bit;
>   begin
>     if clock'EVENT and clock='1' then
>       feedback := LFSR(3);
>       LFSR(0) <= feedback;
>       LFSR(1) <= LFSR(0) xor feedback;
>       LFSR(2) <= LFSR(1);
>       LFSR(3) <= LFSR(2);
>     end if;
>     q <= LFSR;
>   end process;
> end;
>
> --
>
> Rick Collins
>
> rick.collins@XYarius.com
>
> removed.
>
> Arius - A Signal Processing Solutions Company
> Specializing in DSP and FPGA design
>
> Arius
> 4 King Ave
> Frederick, MD 21701-3110
> 301-682-7772 Voice
> 301-682-7666 FAX
>
> Internet URL http://www.arius.com

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com  or http://www.fpga-guru.com


Article: 23707
Subject: Re: FFT/IFFT for FPGA
From: Ray Andraka <ray@andraka.com>
Date: Thu, 06 Jul 2000 00:46:37 GMT
What size FFT?  The only reason it couldn't be used in Spartan II is if it
uses too many CLBs or Block RAMs (which I think may be the case with the
xilinx free cores).

simonray@hotmail.com wrote:

> Anybody know  exsits library/IP  implement FFT/IFFT with  FPGA?
>
> The Xilinx's FFT/IFFT  IP core only support Virtex,  any solution support
> Spartan II?
>
> The altera's FFT core not free, any suggestion?
>
> Thanks a lot!
>
> Simon
>
> Sent via Deja.com http://www.deja.com/

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com  or http://www.fpga-guru.com


Article: 23708
Subject: Re: BIST in FPGAs?
From: Ray Andraka <ray@andraka.com>
Date: Thu, 06 Jul 2000 01:06:30 GMT
Just a few points.  It is much easier to get good test coverage by using
reconfiguration to check the IO, external memories and what not.   My paper
from MAPLD'98 goes into some detail on this test methodology  (Available on my
website).    In most applications you can get away with just checking the IO
on power up.

For applications that are mission critical, it may be easier to periodically
send a set of test data (works well in pipelined signal processors) for which
you know the correct answer.  If it doesn't match, there's a problem.  When a
problem is found, try a reconfiguration first...that might clear up the
problem, then put in special reconfigurations to isolate the problem if it
persists.  For other systems with less well defined data flow, the testing can
be a little more difficult, but is usually not impossible.  This is
essentially the methodology that has been used in the majority of military
systems I've dealt with over the years, and reconfiguration makes that job a
whole lot easier.

Peter Alfke wrote:

> Looks to me like a strong argument for SRAM-based FPGAs, where such issues
> can be resolved by re-configuration, and the user-design need not be
> burdened with BIST, because everything can be pre-tested in a separate
> configuration.
>
> Peter Alfke, Xilinx Applications
> ===============================
> Bill Lenihan wrote:
>
> > We have an FPGA design that will be targeting an Actel 1200 series FPGA
> > (antifuse, one-time-programmable). It will be coded in Verilog,
> > simulated w/ Model Tech's ModelSim PE simulator (PC Win 95/NT),
> > synthesized in Synopsys FPGA Compiler II (Unix), and P&R done w/ Actel's
> > backend tools (Unix).
> >
> > The systems people are making serious noise about requiring this design
> > to have Built-In Self-Test (yes, we know about the gate & speed penalty
> > we pay for this, and that it may be bigger for FPGAs than it is for
> > ASICs because of the granularity difference), meaning:
> >
> > (1) the mission-logic registers must be turned into scan-able registers
> > (2:1 mux in front of D-input) and assembled into N chains, where N is
> > typically 2 <= N <= 64.
> >
> > plus the following (w/ non-scan-able registers) would need to be
> > stitched into the design:
> >
> > (2.1) LFSR-based pattern generator
> > (2.2) LFSR-based signature analyzer / response compressor
> > (2.3) control logic (wired back w/ hooks to the "CPU bus" or whatever
> > other communications port reports BIST pass/fail status) to do M scan
> > sequences.
> >
> > Has anyone done these things for an FPGA? If so, what tools?
> >
> > I know that the EDA industry has tools that routinely do step (1) for
> > ASICs, but does anyone do this for FPGAs? Can any EDA tool take an EDIF
> > netlist produced by an FPGA synthesis tool, insert scan registers & wire
> > chains [adding ports for the scan in(s), scan out(s), scan enable
> > control(s)], and have the new modified netlist accepted by the FPGA P&R
> > tools?
> >
> > Can any EDA tool automate steps (2.1-2.3), at all, let alone for FPGAs?
> >
> > We are interested in finding out how much, if any, of these tasks are
> > automatically done by EDA tools for FPGAs. Naturally we can build all
> > this testability explicitly into the HDL source code if we have to, but
> > we want to avoid that.
> >
> > Even if we can only do step (1) but not steps (2.1-2.3), we may still be
> > able to do some scan-based test, perhaps with an external
> > microcontroller performing steps (2.1-2.3).
> >
> > --
> > ==============================
> > William Lenihan
> > ==============================
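For what it's worth, items (2.1) and (2.2) are easy to prototype in software before committing gates. A minimal Python sketch, with a made-up stand-in for the circuit under test (no real tool or netlist here): a maximal 8-bit LFSR supplies patterns, and a MISR-style register compresses the responses into a signature that is compared against a precomputed golden value.

```python
def lfsr8(s):
    """8-bit pattern generator, taps 8, 6, 5, 4 (maximal length)."""
    nb = ((s >> 7) ^ (s >> 5) ^ (s >> 4) ^ (s >> 3)) & 1
    return ((s << 1) | nb) & 0xFF

def misr8(sig, response):
    """Signature analyzer: same shift/XOR structure, folding in one
    response word per step (a multiple-input signature register)."""
    return lfsr8(sig) ^ (response & 0xFF)

def dut(pattern):
    # Stand-in for the mission logic; any deterministic function works.
    return (pattern * 37 + 11) & 0xFF

def signature(fault=None):
    s, sig = 1, 0
    for i in range(255):              # one full pattern cycle
        r = dut(s)
        if i == fault:                # optionally inject a single-bit error
            r ^= 1
        sig = misr8(sig, r)
        s = lfsr8(s)
    return sig

golden = signature()
assert signature() == golden          # fault-free run matches
assert signature(fault=100) != golden # any single flipped bit is caught
```

Because the MISR is linear and its state update is invertible, a single-bit error always changes the final signature; only multi-bit errors can alias.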

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com  or http://www.fpga-guru.com


Article: 23709
Subject: A diary of a battle: Wild-One, 2.1i and FPGA Express3.3
From: Miloslaw Smyk <thorgal@amiga.com.pl>
Date: Thu, 06 Jul 2000 01:39:10 GMT
Hello,

Short version: Annapolis Wild-One-XLA with two XC4085XLAs. Aldec Active-HDL
3.6. Xilinx Foundation 2.1i. Simulation works. Synthesis doesn't. Can anyone
help me with suggestions?

Long version: I'm using the hw/sw combination listed above. Wild-One, apart
from two XC's is equipped with 2MB of RAM fitted on PE1. The board arrived
with extensive docs and VHDL code for both simulation and synthesis,
targeted however at ModelSim and Synplify, neither of which I have access
to. My target system is Debian/GNULinux on x86, but simulation/synthesis is
done under W2K. Active-HDL is patched with ServicePack #1, Foundation and
FPGA Express are also patched with newest SP from Xilinx site (which brought
FPGA Express up to version 3.3).

One more thing, before I begin. I'm a software engineer and stuff I do with
VHDL and hardware is in _big_ part guesswork and intuition. Please bear with
my stupidity, when necessary.

Looking into archives of this group I learned that Wild-One simulation with
Active-HDL is rather difficult, but I actually succeeded, albeit not
effortlessly. First step was to (re)create WF1 shared library, that was
supplied in a format not supported by Active. I converted *.XNF macros to
*.EDFs, imported these to Active, then added files that I guessed should
also find their way into library if it was going to have functionality
listed in the docs. I then proceeded to simulation and with help of just
created library, I was able to compile dpmex example in no time.

Compilation created two top-levels - Behavior entity/architecture and
SystemConfig Configuration. Selecting Behavior as top-level allowed me to
simulate the entire host/card system, however some 1300ns into the
simulation errors appeared (e.g. state machine wandered to "error" states) -
this was because card should be configured with memory for proper
simulation. This configuration is actually performed in SystemConfig, but
when I tried selecting it as top-level, Active promptly crashed. Repeatedly.
I narrowed the problem to code fragment that used "configure" to fit
mezzanine card with static memory, that was GENERATED. I modified it so that
no GENERATE statements were used, but rather the code responsible was
duplicated required number of times (two, fortunately). With this change
applied I was able to select SystemConfig as top-level, but as soon as I
tried to initialize the simulation, Active-HDL crashed again. After some more
head-scratching I hacked nice and flexible Annapolis code to leave
configuration for better times and simply hard-coded all necessary things
from SystemConfig into lower levels of hierarchy. Thus, upon selecting
BEHAVIOR as top-level, this time I had properly configured card ready for
simulation. Which was performed properly and finished with success flag set.

So far so good. I am not going to modify the configuration of my Wild-One in
the foreseeable future, so hard-coding things in vendor VHDL is fine with me.

Next logical step was to try synthesizing the examples, to see if generated
chip images work equally well as the supplied ones. Foundation 2.1i
unfortunately crashed left and right for me (which I take to be a W2K issue,
as it worked before I "upgraded" my w98). I decided to work directly in FPGA
Express. Importing files suggested by the board docs was unsuccessful, with
errors being reported in strange places and had me baffled until I realized
that these were VHDL-93 extensions that FPGAEx barfed on. I checked with
Synopsys website and from their vague language was able to infer that 3.3
actually did not support VHDL-93 fully (I may be wrong on this, though),
while Synplicity (which was suggested as synthesis tool by Annapolis) did.
With much sweat (I said I was no whiz) I reworked their code to compile.

Compilation yields three important files, which are CPE0, CPE0_Interface and
CPE0_LogicCore (from outer- to innermost, by which I mean that CPE0 defines
chip pins, CPE_LogicCore defines inner logic and CPE0_Interface is well,
interface between the two).

Synthesizing CPE0_LogicCore works. Warnings are issued due to the fact that
the number of pins differs from those defined for the target chip
(XC4085XLA-07 BGA432 if memory serves me right), but that aside, the chip
seems to be OK. I then made CPE0 the top-level for synthesis. At this point
(some 20 hours ago), many complaints
were raised by FPGAEx, many of them really surprising (ok, for me). I fixed
some (not even sure WHY they were considered troublesome), but as it
progressed, more and more problems have appeared, including complaints about
missing .typ files, which (according to the docs) I was supposed to generate
with create_types executable, which however failed to be present in the
installation directory and which I was unable to find single mention of,
despite extensive web search.

As I feel that I am not significantly closer to a solution than I was in the
morning and it seems I've run out of both ideas *and* cola, I decided to ask
here. So if you either:

1. have any experience with WildOne and Foundation/FPGA Express synthesis
2. have ideas as to what may be wrong with my approach

then please let me know, preferably by direct email
(thorgal@amiga.com.pl), as the news server I use is somewhat shaky.

Thanks in advance and best regards,
Milek
--
mailto:thorgal@amiga.com.pl   |  "Man in the Moon and other weird things" -
http://wfmh.org.pl/~thorgal/  |  see it at http://wfmh.org.pl/~thorgal/Moon/


Article: 23710
Subject: PamDC question.
From: mwojko@hartley.newcastle.edu.au (Mathew Wojko)
Date: 6 Jul 2000 01:58:41 GMT
Hi,

For those who have successfully used PamDC for Compaq's (formerly
Digital) Pamette, I have a question.

Has anyone successfully implemented carry ripple adders (using the
fast carry logic) on the XC4000 series of devices? I know that you have
to be very specific with the code that you write - by specifying and
mapping the carry functions and signals to specific parts of the CLB.
At the moment, I just cannot get it work and have spent a fair
amount of time on it.

So, does anyone have any PamDC source examples that they are willing
to post to expose exactly how carry ripple adders are specified using
PamDC. I would very much appreciate it!

Thank you,
Mathew


Article: 23711
Subject: Re: Graphic LCD controller design
From: steve (Steve Rencontre)
Date: Thu, 6 Jul 2000 03:00 +0100 (BST)
In article <0fj2mskk8dimfpovuukk2dmkjknj8nqhuh@4ax.com>,
peter.elliot@ukonline.co.uk (Peter Elliot) wrote:

> Hi,
>
> Sorry if this has been covered before....I did a search but didn't
> come up with much other than an old article in Circuit Cellar.
>
> I'd like to interface a small graphic lcd panel (upto 240x128 - mainly
> 128x64) to a Xilinx FPGA. The design simply needs to display a bitmap
> stored in SRAM and allow simultaneous access to the SRAM from the CPU.
>
> Any pointers would be appreciated.

It's just a bunch of counters and shift registers. When I did it some
years back, I used video DRAM, which does the dual-porting and
serialisation for you, but FPGAs are much bigger, faster and cheaper now.
Most of the complexity in my design was down to the mark-space modulation
technique I used for greyscale.

Get data sheets for devices from several different manufacturers. I found
that none of them tended to explain things very clearly if you didn't
already know what they meant, but although they're not plug-compatible,
they're all broadly similar. By cross-referencing a few differently
incomplete explanations of the same thing, it all fell into place. You
only need relatively minor variations to drive most of the displays
around.
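To make "a bunch of counters and shift registers" concrete, here is a toy Python model (assuming a hypothetical 240x128 monochrome panel with 8 pixels packed per SRAM byte; details vary per display): a row/byte-column counter pair generates the frame's SRAM addresses, and a shift register serializes each fetched byte.

```python
WIDTH, HEIGHT = 240, 128          # panel size (assumed for illustration)
BYTES_PER_ROW = WIDTH // 8        # 8 pixels per SRAM byte

def scan_addresses():
    """One frame of SRAM read addresses, in display scan order.
    In hardware this is just a row counter and a column counter."""
    for row in range(HEIGHT):
        for col in range(BYTES_PER_ROW):
            yield row * BYTES_PER_ROW + col

def serialize(byte):
    """Shift register: MSB-first pixel stream for one fetched byte."""
    return [(byte >> (7 - i)) & 1 for i in range(8)]

addrs = list(scan_addresses())
assert len(addrs) == BYTES_PER_ROW * HEIGHT   # one fetch per displayed byte
assert addrs == sorted(addrs)                 # a linear frame buffer scans monotonically
assert serialize(0b10000001) == [1, 0, 0, 0, 0, 0, 0, 1]
```

CPU sharing is then a matter of interleaving CPU accesses into the gaps between display fetches, which the model above leaves out.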

--
Steve Rencontre		http://www.rsn-tech.demon.co.uk
//#include <disclaimer.h>


Article: 23712
Subject: ORCA4 (was Re: Altera Ships Largest PLD)
From: John McCluskey <john_mccluskey@hotmail.com>
Date: Wed, 05 Jul 2000 22:23:31 -0400
Don,

Please don't give up on us just yet!   :-)

I've been playing with the alpha version of Foundry 2000 (otherwise known
as Foundry 9.5) and have been targeting the new Series 4 devices.  I
haven't yet tried the new block RAMs, but have gotten the following results
in a 4E2 device (nominal gate count 200K).  This is the smallest device in
the new family.

A 512x8 FIFO built with *distributed* RAM has a 156 MHz write clock and a
135 MHz read clock.  The monolithic block RAM FIFO ought to be a lot faster
than this...

A CRC32 generator with a 256 bit parallel input runs at 156 MHz (OC768,
dontcha know).

A 32 bit NCO with an equivalent sine table of 1024x16 runs at 225 MHz.
(This design has nets with a reported fanout of 180 with delays of under
3 ns.  It's incredible.)

32 bit counters/adders/accumulators run around 205 MHz.

The routing is far, far, far, way amazingly faster than Series 3.  The LUT6
is back.  The PFU has 2 clock inputs, 1 for each nibble.  Ditto for the
clock enables and lsr inputs.

The microprocessor interface is much bigger and faster, with 8, 16, or 32
bit data busses.  The registers available through the processor interface
are far more comprehensive, thanks to the on-board AMBA bus.  For example,
not only can you program the device through the uP interface, you can also
read and write all the block RAMs, as well as program the registers in all
the PLLs.  There are 8 PLLs, at least 2 of which will run at 416 MHz.  The
IO supports all the fancy stuff (LVDS, SSTL3, PECL, etc.).  There are shift
registers built into the IO cells.

On the other hand, some stuff is gone.  The Intel 960 uP interface mode is
gone (PowerPC only, now).  The clock controllers are gone, mostly because
they aren't needed.  You can have multiple edge clocks, fed by any input
along the edge.  And speaking of clocks, the clock trees are amazing...
The router uses some sort of heuristic to detect the nets which are clocks,
and then builds balanced clock trees.  When you look at the routing in
EPIC, it looks like a god-damned ASIC clock tree!  Max clock skew is
typically about 0.3 ns.  The PLLs can also be used to give zero delay for
clock distribution.  The delay isn't very high to start with, and is
usually under 3 ns, worst case.

The beta version of Foundry 2000 (Foundry 9.5) is due out in mid-July.  Ask for

regards,

John McCluskey
Lucent Microelectronics

Don Husby wrote:

> Rickman <spamgoeshere4@yahoo.com> wrote:
> > If this is what I think it is, you have the same capability on the OR3T
> > family. They have the MPI, a built in interface for two uPs. "The MPI is
> > programmable to operate with PowerPC MPC800 series microprocessors and
> > Intel*i960* J core processors".
>
> Yeah, except that the 3T is limited to an 8-bit data bus.  With a high
> performance 32-bit CPU interface, it makes it feasible to interface the
> OR4T to your 32-bit DSP with minimum (hopefully 0) logic.
>
> > I never used the interface since it would have required me to emulate
> > the same interface and I am using a DSP.
> >
> > The new chips sound like they will continue to give Xilinx a run for the
> > money in the large telecom accounts. But there will not be much for the
> > small guys like us. (I guess I should speak for myself!)
>
> Historically, I think the price/performance of Orcas has been better
> than Xilinx (ie OR2 vs X4K/spartan), especially if you have high pin/logic
> ratio, or if you push them to the performance edge and diddle the mapping
> and placement.
>
> It looks like Xilinx is doing better with this generation, probably because
> Altera has released their own "Spartan" series.  (Although I may learn
> otherwise when I port this design to Xilinx.)
>
> > The bottom line is that Xilinx has a much larger volume (I assume) than
> > Lucent and will continue to support a broad range of parts in different
> > sizes, packages (although not as many combos as I would like) and new
> > technologies. And on top of it, they seem to be committed to keeping a
> > low priced line available.
>
> Plus they seem to have a lot more (and better?) software support behind the
> chips.  I especially like the fact that Xilinx appears to allow exact mapping
> (e.g. put this signal on this CLB pin dammit!) to be specified in VHDL code.
>    Lucent seems to have a long way to go with the OR3T PAR software.  For
> example, it won't swap pins on a simple 8-bit D-register or tristate bus
> in order to meet timing.  I ended up doing this myself by teasing the VHDL
> code, but it takes 1-2 hours of effort to get something like that right, and it
> will probably break when the next software version is released.
>
> --
> Don Husby <husby@fnal.gov>             http://www-ese.fnal.gov/people/husby
> Fermi National Accelerator Lab                          Phone: 630-840-3668
> Batavia, IL 60510                                         Fax: 630-840-5406


Article: 23713
Subject: Re: Virtex-E PCI (MB with 3.3Vsignaling)
From: Rickman <spamgoeshere4@yahoo.com>
Date: Wed, 05 Jul 2000 23:07:44 -0400
Links: << >>  << T >>  << A >>
Rick Filipkiewicz wrote:
>
> Tobias-Dirk Stumber wrote:
>
> > Hi !
> >
> > Since Virtex-E does not allow to be used in
> > PCI systems with 5V signaling level, I search
> > for (cheap) motherboards with 32bit/33MHz
> > PCI slots that have 3.3V signaling level.
> > (We don't need more bandwidth and want to
> > use Virtex-E because it's cheaper than our
> > currently used Virtex.)
> >
> > Perhaps there are none and only 66MHz PCI
> > systems use (require!) 3.3V signaling. Any
> > (cheap) motherboards that supply this ?
> >
> > Thanks,
> > Tobias
>
> It would give you wider access to cheap motherboards if you buffer your
> PCI through QuickSwitch type parts. These, originally supplied by
> Quality Semiconductor - now part of IDT, are really just a bunch of pass
> transistors that clamp the output voltage to about Vcc-0.7. The trick is
> to step down from the nominal 5V to 3.9V; the parts take almost no power,
> so a simple Zener is sufficient. We have solved the 5V/3.3V PCI problem in
> this way for a long time, as do a lot of [older] motherboards.
>
> Other suppliers of these type of parts are Pericom & Philips.
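For what it's worth, the arithmetic behind the trick works out like this (typical values assumed for illustration; check the actual datasheets):

```python
# Back-of-the-envelope check of the QuickSwitch clamp trick.
V_SUPPLY = 5.0       # nominal PCI supply (V)
V_ZENER = 1.1        # assumed drop across the Zener feeding the QuickSwitch (V)
V_CLAMP_DROP = 0.7   # pass-transistor clamp threshold below Vcc (V)

v_qs = V_SUPPLY - V_ZENER       # QuickSwitch Vcc: 3.9 V, as in the post
v_bus = v_qs - V_CLAMP_DROP     # level seen by the 3.3 V-only device

print(f"QuickSwitch Vcc = {v_qs:.1f} V, clamped bus level = {v_bus:.1f} V")
```

The clamped level lands around 3.2 V, comfortably inside 3.3 V signaling limits.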

Doesn't this violate the PCI spec in that although you only have one set
of pins literally on the bus, this part acts as a wire which gives you
*three* sets of pins electrically on the bus, QuickSwitch in,
QuickSwitch out and PCI chip in. I expect the added capacitance and stub
length would screw up a PCI bus and not let you reach the maximum board
count and/or speed.

--

Rick Collins

rick.collins@XYarius.com

(Remove the XY to reply by email.)

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com

Article: 23714
Subject: Re: Viewlogic schematic from Synplify edif output?
From: Rickman <spamgoeshere4@yahoo.com>
Date: Wed, 05 Jul 2000 23:27:39 -0400
Links: << >>  << T >>  << A >>
That was where this started. The FMAP/gate thing is rather clumsy. A
long time ago when Xilinx design was done in Viewlogic under DOS, I came
up with a schematic symbol which included the gates for a mux along with
an FMAP to which you could pass a parameter. The parameter string would
be broken up into separate bits which were assigned to the wires into
the 16 mux inputs. Of course the four control inputs were used as the
select inputs to the mux. This gave you a way to define 4 input LUTs and
map them.
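Rickman's mux-plus-FMAP symbol is, in effect, a model of the LUT itself: a 4-input LUT is just a 16:1 mux whose data inputs are the 16 configuration bits. A quick Python sketch of that view (the bit ordering here is an assumption, chosen so that the all-ones input selects the MSB):

```python
def lut4(init, i3, i2, i1, i0):
    """Model a 4-input LUT as a 16:1 mux.

    init: 16-bit integer of configuration bits; bit 15 (the MSB)
    is selected when all four inputs are '1' (assumed ordering).
    """
    sel = (i3 << 3) | (i2 << 2) | (i1 << 1) | i0
    return (init >> sel) & 1

# 0x8000 configures the LUT as a 4-input AND gate:
# only the all-ones input combination returns 1.
assert lut4(0x8000, 1, 1, 1, 1) == 1
assert lut4(0x8000, 0, 1, 1, 1) == 0
```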

The only problem was that although I liked being able to use a single
symbol with a parameter, generating the string of bits was not
convenient. It would be nicer if I could have input a logic equation
with the signal names (or maybe just A, B, C, D). I tried to talk to
Xilinx about this, but got nowhere. It is similar to their trying to
push everyone away from schematics to HDLs.
They know where they want you to work so that their development and
support job is easier, I guess.

But the LUT thing stopped working when Viewlogic converted to Windows. I
also did not use the Xilinx software for quite a while after that.

But I still long for the days of being able to place LUTs on a schematic
and not having to think about how the mapper would partition and place
my logic. I am happy to let the software route a design, but I like
controlling the partitioning (totally) and mapping (to an extent). This is
my big problem with HDLs (along with logic generation).

Ray Andraka wrote:
>
> Eh,
>
> How about FMAPs around gates?
>
> Andy Peters wrote:
>
> > Rickman wrote in message <395ED46E.D072C66A@yahoo.com>...
> >
> > >Now if I can just get them to let me enter a single equation for the LUT
> > >instead of having to calculate the hex contents myself.
> >
> > FPGA Editor?

Of course the FPGA Editor is way too clumsy. I want all my inputs on the
schematic or in the preference file.

But I am sure that at some point soon, I will overcome my aversion to
VHDL and start coding my FPGAs like most of the other designers.


Article: 23715
Subject: Re: BIST in FPGAs?
From: Rickman <spamgoeshere4@yahoo.com>
Date: Wed, 05 Jul 2000 23:38:49 -0400
Links: << >>  << T >>  << A >>
Not trying to nitpick, but this is the standard test coverage problem and
can be very tough to solve if you don't design for test. I had read a
lot of papers a long time ago (this is not a new problem at all) and
they listed many, many problems with trying to get close to 100%
coverage. Many of the standard design techniques we use in FPGA design
help a lot; synchronous design is a big one! But it can be a real bear
coming up with the test vectors to get the high coverage.

A test problem example is the async reset. You can verify that the FF is
reset/set after you run the test, but how do you verify that it was not
already in that state before? Did the reset really do anything?

If you have to test more than one design in a given chip, I would be
willing to bet it is easier to get an NDI from Xilinx and do the full
coverage test of the entire chip with unique bit files. Generating test
vectors has to be done for each design. Generating a full test of the
chip only needs to be done once for each chip type (with a few mods for
variable IO assignments).

After all, the standard test vector method does not take advantage of
the reconfigurability of the chip while the entire chip test does.

Ray Andraka wrote:
>
> Just a few points.  It is much easier to get good test coverage by using
> reconfiguration to check the IO, external memories and what not.   My paper
> from MAPLD'98 goes into some detail on this test methodology  (Available on my
> website).    In most applications you can get away with just checking the IO
> on power up.
>
>  For applications that are mission critical, it may be easier to periodically
> send a set of test data (works well in pipelined signal processors) for which
> you know the correct answer.  If it doesn't match, there's a problem.  When a
> problem is found, try a reconfiguration first...that might clear up the
> problem, then put in special reconfigurations to isolate the problem if it
> persists.  For other systems with less well defined data flow, the testing can
> be a little more difficult, but is usually not impossible.  This is
> essentially the methodology that has been used in the majority of military
> systems I've dealt with over the years, and reconfiguration makes that job a
> whole lot easier.
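Ray's periodic known-answer test amounts to compressing the response to a canned stimulus into a signature and comparing it against a precomputed golden value. A rough Python sketch (the signature fold and the example "pipeline" are illustrative only, not any particular hardware):

```python
def signature(samples, width=16):
    """Fold a sample stream into a compact signature (MISR-like)."""
    sig = 0
    mask = (1 << width) - 1
    for s in samples:
        sig = ((sig * 31) + s) & mask   # simple multiplicative fold
    return sig

def known_answer_test(pipeline, stimulus, golden):
    """Run canned data through the DUT and compare signatures."""
    return signature(pipeline(x) for x in stimulus) == golden

# Illustrative "pipeline" (a doubler) with its precomputed signature.
stim = list(range(64))
golden = signature(2 * x for x in stim)
assert known_answer_test(lambda x: 2 * x, stim, golden)          # passes
assert not known_answer_test(lambda x: 2 * x + 1, stim, golden)  # fault found
```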


Article: 23716
Subject: Re: ORCA4 (was Re: Altera Ships Largest PLD)
From: Rickman <spamgoeshere4@yahoo.com>
Date: Thu, 06 Jul 2000 00:01:09 -0400
Links: << >>  << T >>  << A >>
From what I read in the press release, these chips will be out in Q3
which is about the same as the Spartan II chips and in time for my next
board design. I would like to ask for more detail on the chips, the
schedule (specifics on which chip will be out when) and most
importantly, the target prices. I would ask my disti, but it takes two
weeks or more to get a normal quote on chips that I am already buying. I
can imagine that I will only get blank stares when I ask about chips
that aren't even out yet.

Any chance we can get more details on the new OR4 family? I would really
like to use one of them on the new board.

One BIG question. Will the IOs still be 5 volt tolerant and will the
chips still come in any of the small packages? The OR3 line doesn't come
in anything smaller than a 256 pin BGA. Until someone comes up with
software to take advantage of partial reconfiguration, I need to use
multiple 100-pin V/TQFP packages. I just don't have the room for
anything bigger (a CS144 will do). I have to have multiple packages as
they get loaded with different designs depending on what daughter boards
are plugged in. Heck, I could use a TQFP 64 if someone made one with
enough logic in it! Your OR4E2 would do nicely!

John McCluskey wrote:
>
> Don,
>
> Please don't give up on us just yet!   :-)
>
> I've been playing with the alpha version of Foundry 2000  (otherwise known as
> Foundry 9.5)
> and have been targeting the new Series 4 devices.    I haven't yet tried the new
> block
> rams, but have gotten the following results in a 4E2 device (nominal gate count
> 200K).
> This is the smallest device in the new family.
...snipped a lot of interesting details...
> On the other hand, some stuff is gone.    The Intel 960 uP interface mode is gone.
> (PowerPC only, now).
> The clock controllers are gone, mostly because they aren't needed.   You can have
> multiple edge clocks,

I won't miss the clock controllers. I never did get the hang of them,
very complicated.

> fed by any input along the edge.   And speaking of clocks, the clock trees are
> amazing...  The router uses
> some sort of heuristic to detect the nets which are clocks, and then builds
> balanced clock trees.   When you
> look at the routing in EPIC, it looks like a god-damned ASIC clock tree!   Max
> clock skew is typically about 0.3 ns.
> The PLL's can also be used to give zero delay for clock distribution.   This isn't
> very high to start with, and is usually under 3 ns, worst case.
>
> The beta version of Foundry 2000 (Foundry 9.5) is due out in mid-July.  Ask for

> > Yeah, except that the 3T is limited to an 8-bit data bus.  With a high
> > performance 32-bit CPU interface, it makes it feasible to interface the
> > OR4T to your 32-bit DSP with minimum (hopefully 0) logic.

Actually, I do that now. I only use a single quickswitch part (0 ns
delay) as a decoder to use the OR3T chip with the TMS320C31, loading it
in Async Peripheral mode. This then becomes my DSP interface during
normal operation. With the new TMS320VC33 chip, I can drop the address
decoder.

> > Historically, I think the price/performance of Orcas has been better
> > than Xilinx (ie OR2 vs X4K/spartan), especially if you have high pin/logic
> > ratio, or if you push them to the performance edge and diddle the mapping
> > and placement.

I have only gotten competitive pricing when I beat them up with a
salesman beating on them along with me (thanks Rick G.). But the high IO count was a
godsend. I actually would have had to go to a *much* larger (and more
expensive) package with the Xilinx parts as their IO count is much
lower. I wonder if this advantage will remain with the OR4 chips?

> > Plus they seem to have a lot more (and better?) software support behind the
> > chips.  I especially like the fact that Xilinx appears to allow exact mapping
> > (e.g. put this signal on this CLB pin dammit!) to be specified in VHDL code.
> >    Lucent seems to have a long way to go with the OR3T PAR software.  For
> > example, it won't swap pins on a simple 8-bit D-register or tristate bus
> > in order to meet timing.  I ended up doing this myself by teasing the VHDL
> > code, but it takes 1-2 hours of effort to get something like that right, and it
> > will probably break when the next software version is released.

Yes, I think the Xilinx software is more mature. But then, Lucent is not
trying to wean anyone off of schematics either! I like my Viewlogic!!!

> > Don Husby <husby@fnal.gov>             http://www-ese.fnal.gov/people/husby
> > Fermi National Accelerator Lab                          Phone: 630-840-3668
> > Batavia, IL 60510                                         Fax: 630-840-5406


Article: 23717
Subject: Re: ORCA4 (was Re: Altera Ships Largest PLD)
From: fliptron@netcom.com (Philip Freidin)
Date: 6 Jul 2000 07:38:42 GMT
Links: << >>  << T >>  << A >>
Rickman  <spamgoeshere4@yahoo.com> wrote:
>
>Yes, I think the Xilinx software is more mature. But then, Lucent is not
>trying to wean anyone off of schematics either! I like my Viewlogic!!!
>

They support Viewlogic (Innoveda) !?!?!?!  Wow, I'm changing now.

Philip


Article: 23718
Subject: Re: VHDL code for LFSR
From: "Vikram Pasham" <vikram.pasham@xilinx.com>
Date: Thu, 6 Jul 2000 00:49:36 -0700
Links: << >>  << T >>  << A >>
Some more info on variable and signal assignments under the control of a clock edge, with different scenarios:

1. A variable is used before its assignment.

if (CLK = '1' and CLK'EVENT) then
A <= B;
B := C;
end if;

Variable "B" infers a flop, as it is read before it is assigned.

2. A variable is used after its assignment.

if (CLK = '1' and CLK'EVENT) then
B := C;
A <= B;
end if;

In this case, the variable does not infer a flip-flop.

3. A signal is used after it is assigned.

if (CLK = '1' and CLK'EVENT) then
A <= B;
C := A;
end if;

A flip-flop is inferred for signal "A". However, in keeping with the simulation semantics of a signal, the value of the signal read in the second assignment statement is not the value assigned in the first statement (since the assignment takes effect after a delta delay).
The difference between signals and variables is more evident during simulation than during synthesis.

Hope this helps........

Vikram
Xilinx Apps
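The first two cases can be mimicked in plain Python by modeling a variable assignment as immediate and a signal assignment as reading the pre-edge values. This is only an illustrative model, not generated code; the extra inferred flop in case 1 shows up as an extra clock of latency from C to A:

```python
def case1(stream):
    """A <= B; B := C  -- B is read before it is assigned, so B is a flop.
    Two registers between C and A: A lags C by two clocks."""
    a = b = 0
    out = []
    for c in stream:
        a, b = b, c   # A gets B's *old* (registered) value, then B := C
        out.append(a)
    return out

def case2(stream):
    """B := C; A <= B  -- B is assigned before it is read: no flop for B.
    Only A is registered: A lags C by one clock."""
    a = 0
    out = []
    for c in stream:
        b = c         # immediate variable assignment
        a = b         # A <= B latches the *new* B at this edge
        out.append(a)
    return out

stim = [1, 2, 3, 4]
assert case1(stim) == [0, 1, 2, 3]   # two-cycle latency: extra flop for B
assert case2(stim) == [1, 2, 3, 4]   # one-cycle latency: A only
```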


Article: 23720
Subject: Re: Virtex Global Set Reset
From: David Gilchrist <david.gilchrist@NOSPAM.com>
Date: Thu, 06 Jul 2000 10:24:17 +0100
Links: << >>  << T >>  << A >>
SteVe wrote:
>
> In my Virtex design (written in VHDL) I have a reset signal (N_RST).
> This signal is active low and resets all the FFs of the design,
> excluding a few of them (which haven't a set/reset signal).
> My question is: can I use the Global Set Reset resource? The problems I
> see are two:
> - the GSR net is active high, but my N_RST signal is active low;
> - the GSR net reaches all the FFs: what happens to the FFs of my design
> that have not a S/R signal?
>
> Thanks,
> SteVe

In the design I am currently working on, I looked into using the GSR
resource and inquired about the timing on this net wrt global skew. The
answer was that it is not recommended that the GSR be used at all, and
that the reset should use the normal routing resources.  This of course
begs the question: why is it there at all if it's not recommended to be
used?

Cheers

David

--
To reply replace NOSPAM with BAESYSTEMS

__________________________________

David Gilchrist
Developement Engineer
BAE SYSTEMS

Article: 23721
Subject: XILINX configuration
From: Kai Schulze <schulzek@ee.nec.de>
Date: Thu, 06 Jul 2000 11:58:44 +0200
Links: << >>  << T >>  << A >>
Hi,

does anybody have an idea how to configure a XILINX XC40250XV with ATMEL
PLCC20 PROMs?
If anybody can provide me with the wiring of the XILINX and the ATMELs, I
would be very thankful.

Kai


Article: 23722
Subject: Before and after configuration, are the undefined I/O ports input or output?
From: wenger <wengerNOweSPAM@zsmc.com.cn.invalid>
Date: Thu, 06 Jul 2000 03:15:33 -0700
Links: << >>  << T >>  << A >>
    Before and after configuration, are the undefined I/O
ports of the Xilinx CPLD XC95288 and Xilinx FPGA XCV800 inputs
or outputs? For the Xilinx FPGA XCV800, we use the default
mode of M0,M1,M2: Slave-serial mode.


Article: 23723
Subject: Using LUTs in Virtex with ViewDraw and ViewSim
From: fliptron@netcom.com (Philip Freidin)
Date: 6 Jul 2000 10:43:11 GMT
Links: << >>  << T >>  << A >>
Sometimes the issues are complex.
Sometimes the complexity of a CAE problem is exacerbated by multiple vendors.
Sometimes the documentation sucks, or just doesn't exist.
Sometimes the dependencies are undocumented or obscure.
Sometimes the expected results of a workaround are disappointing.

... and then there are LUTs in the netlist.

This long winded post is of interest to you only if you use most of :

ViewSim
ViewDraw
M2.1i / M3.1i
Virtex (and Spartan II probably)
Synthesis

Buried in this message is info on simulating Virtex LUTs from
deconstructed synthesis, as well as how to create designs with equations
for both simulation and generating a netlist that PAR processes the way
you expect. Unlike my normal posts, this one is almost attitude free.

The original request for help on the original thread was about the
challenge of simulating with ViewSim, a Virtex design that included some
schematic and synthesis generated logic.

>Viewlogic schematic from Synplify edif output?
>
>Hello All, I use a mixed design entry style where I do most of the work in
>Viewlogic schematics but through in a few key VHDL behaviors. After I am
>satisfied with my simulation I synthesize those VHDL blocks using Synplify.
>What I would like to do is convert the synthesized netlists into schematics
>for viewing. In the past we used Exemplar for synthesis and we could run the
>net lists through edifneti and viewgen to get a machine generated schematic.
>Now we use Synplicity which puts LUT4, LUT3 and LUT2 primitives into the
>netlist. These components are from the virtex library and do not have
>simulation models. I've found that Xilinx provides two utilities, ngdbuild
>and ngd2edif, that can be used to produce a simulatable edif netlist. The
>resulting netlist from this path contains x_lut4, x_lut3 and x_lut2 cells
>from the SIMPRIMS library. Unfortunately these cells do not show up in the
>SIMPRIMS library so I cannot generate the schematic. Can anyone tell me how
>to generate a Viewlogic schematic from Synplify edif output? Thanks for any help.
>Pete Dudley

There were some off target answers that unfortunately mixed up the LUT
primitives with FMAP primitives. The thread survived anyway.

>Simon:
>Nope.  The LUTs have attributes attached which correspond to the LUT SRAM
>contents and thus define the LUT logic.

>That would be useful as a schematic primitive. I have always preferred
>to think in terms of LUTs. At one time back when Viewlogic was a DOS
>program, they supported a way of passing parameters into a lower module.
>I had defined a module which was in essence the 4 input LUT complete
>with the programmable configuration bit. The parameter defined the bit
>pattern and I had a 4 input LUT that would place and simulate. But for
>some reason when they converted to Windows, these modules ceased to
>function.

The problem was that the netlister for Viewdraw was changed, and the
developer for the replacement was uninterested in supporting parameter
substitution, given how few users were using it. Also, the developer
"just didn't get it". (it is also quite hard to get it right. It took
several years to get WIR2XNF to do this correctly)

>If they don't have a schematic symbol for the simulation primitives, can
>you add them to the library? Or are the libraries uneditable?

The libraries are editable, but you can't make simulation primitives out
of thin air. Only Viewlogic can do that. A bunch of people petitioned
Xilinx and Viewlogic to fix this problem with the basic simulation of
Virtex schematics about 2 years ago, and the result is that new libraries
were created, and updates were made to ViewSim and the EDIF netlister. See
below for details of where to get it. As a side effect of this begging
>procedure, we also got them to add support for simulating ROMs without
having to use the LOADM command. The new library includes support for
simulation of ROMs and initialized RAMs. And the initialization (which are
attributes on the symbols on your schematics) also are (almost) passed
>through to PAR, and generate the right stuff too. Could life be any better?

>This is the type of problem that make people want open source tools.
>They get tired of being told how to design and what tools to use.

Oh so true, but then we wouldn't get any work done at all, and we'd be
the CAE vendors everyone would be complaining about.

So let's get back to the original problem.

The problem is that Viewsim is a gate level simulator, and so can't
simulate the synthesis section of a design. (Innoveda does have a product
called Digital Fusion, that allows simulation of any mix and nesting of
schematic and HDL blocks, but here we assume all you have is ViewSim.)

I then tried a brief, on topic answer but strayed into schematic-only stuff:

>Philip Freidin
>The new libraries are supposed to be able to simulate ROMs with init=xxxx
>attributes attached. This was fixed about 6 months ago. No announcement
>was made, though.
>
>If you go into the most current Viewlogic Viewdraw Virtex library,
>and look at the LUT4 symbol, you will see a default @init="0000"
>
>Push down and you will find a simulation model to match.
>
>There is probably a way to get from where you are to where you want to be,
>given the above info. I haven't done it, and I suspect it will take a bit
>of screwing around to get it right. For instance, I had a play with this
>stuff a few months ago, and to get the init to work, and set the value to
>8000, I had to overide the default to @init=8000 and attach another
>attribute init=8000 to both get it to simulate and to generate the chip I
>wanted.

More on this soon.

>I believe this is a bug in the library definition, but I am burnt out
>with trying to get Xilinx to care about the quality of this stuff.

Obviously not burnt out enough. 5 hours of screwing around and now I feel
like posting again. The library definition is sort of OK, what is missing
is sufficient documentation to make it work. I believe this can't have
been tested at Xilinx, because the problem is obvious. If you use the LUT
symbol as supplied, you can't get through the ngdbuild without an error.
This is relevant for a schematic-only design, that uses LUTs.

>At least they eventually added support for Virtex for Viewdraw/Viewsim
>users.  (this init stuff also should work with the SRL16
>and block RAMS too)

plus a followup:

>http://support.xilinx.com/techdocs/5968.htm

Rick Collins showed appreciation:

>Thanks a lot. This can save a lot of trouble when you want to closely
>control the mapping and placement of a function. The old way of using
>gates and a FMAP was just so clunky. I will never understand why Xilinx
>always wants to tell its users how they should do design.

Well, actually, I rather liked it, since I still got to specify my logic
exactly the way I wanted it (with gates), and the FMAPs lets me directly
control the clustering.

>Now if I can just get them to let me enter a single equation for the LUT
>instead of having to calculate the hex contents myself.

Oooooh, just you wait .....

Simon steps up to the plate:

>You could add the equation as an attribute, then write a small
>program to scan the EDIF and convert the equation to an INIT.
>This would still leave the simulation to be dealt with...

Which in fact is what Don Husby (our token Lucent user) did back
in November 1994, and made available. It was written in awk.

Rick Collins:

>Or am I missing something. Does the simulation work on something other
>than the EDIF file?

In the schematic only flow there are several issues:
1) Can I just run VSM on my design and then run ViewSim on the result,
without running any Xilinx tools.
2) Can I take the same schematic database and generate a netlist for the
Xilinx tool chain, that will build the same design

Don's program post processed a netlist that had EQN and ROM symbols in it
and created a new netlist that had the EQN/ROMs replaced with gates. This
was then acceptable to the P&R tools of the time (pre Mx.1i stuff). So it
sort of supported issue 2, but not issue 1.

If we then look at the original problem from Pete, we can add:
3) If all I have is ViewSim, and I am also doing VHDL/Verilog, how do
I simulate it. I can get a netlist out of the xilinx tools that has
merged the HDL and schematic together, but it has these darn LUT symbols
in it. How do I get ViewSim to do something with this.

Well it looks like I almost gave Pete an answer, by pointing him at the
new update for ViewSim, and the updated libraries from Xilinx, and the
super secret clue about there being both the @INIT and the INIT
attribute. Clearly he had to do some more work, but it would seem at
least his problem is now solved. The EDIF netlist that comes out of the
Xilinx tools has LUTs in it, and these LUTs have INIT attributes on them.
The last piece of the puzzle from him was discovering the EDIFATTS.CFG
file that can translate the INIT=xxxx into @INIT=xxxx that VSM/ViewSim
are going to need.

Here is his praise, and additional info.

>Phil,
>
>As usual, you hit the nail right on the head. Maybe I'm old-fashioned but I
>like to see the results of each synthesized block. Now I can get my post
>synthesis schematic into simulatable form. To do it I had to use the updated
>Virtex library from Xilinx and update the Viewlogic Fusion simulator and
>utilities, but it was tens of megabytes. I also added the following line
>to the EDIFATTS.CFG file:
>
>INIT\=@INIT
>

So why you might ask have I wasted your time with this ~340 line posting.
Well I thought it would be neat to explain exactly what was needed to get
Pete's simulation to run.  And then there are the side effects of me
understanding how this all works, and the further work I did today.

You see, I thought I should go back and understand why there was the
kludge of needing both @INIT and INIT on a schematic only design. And the
good news is I now know, and I also figured out how to get the EQN to
work too.

So here is the info:

In the new Virtex library, the LUTs, SRL16s, BRAMS, and ROMs all have
@INIT type attributes on them (some have it visible, some don't). Dozens
of symbols that have been carefully worked on by someone at Xilinx to
make this work. Pity they didn't document this hard work.

The @ character is needed because that is the ONLY way to pass a value
through the hierarchy, to the lower level where the simulation primitives
live. Down at the lower level are the simulation primitives. These have
assignment statements on them like INIT_00=@INIT, which is how the
instance level attributes that you attach to the symbols get propagated
to the simulation primitive, that ViewSim actually simulates.

The VSM program knows how to pass and assign these values.

Although the default attribute on a symbol like LUT4 is @INIT="0000",
you can over ride it with your own. Here is how it is interpreted:

@INIT="1234"     1234
@INIT=1234       1234
@INIT=FF00       FF00
@INIT=FF         FF00
@INIT=F          F000

Only hex specifications are accepted. The MSB of the value is selected
when all 4 inputs to the LUT are '1'.
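The padding and bit-ordering rules above can be captured in a few lines of illustrative Python (the helper names are hypothetical; the select-bit encoding is an assumption consistent with all-ones picking the MSB):

```python
def norm_init(s):
    """Normalize an @INIT hex string the way VSM interprets it:
    shorter-than-4-digit values are padded on the right with zeros."""
    return (s.upper() + "0000")[:4]

def lut_bit(init_hex, i3, i2, i1, i0):
    """Return the LUT output for one input combination.
    Bit 15 (the MSB) is selected when all four inputs are '1'."""
    init = int(norm_init(init_hex), 16)
    sel = (i3 << 3) | (i2 << 2) | (i1 << 1) | i0
    return (init >> sel) & 1

# The padding rules from the table above:
assert norm_init("1234") == "1234"
assert norm_init("FF") == "FF00"
assert norm_init("F") == "F000"
# @INIT=8000 behaves as a 4-input AND:
assert lut_bit("8000", 1, 1, 1, 1) == 1
assert lut_bit("8000", 1, 1, 1, 0) == 0
```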

But here's a little problem. When you netlist this out to M2.1i/M3.1i,
with the EDIF netlist writer, what goes out to the netlist is the @INIT
attribute, just the way you wrote it. Unfortunately, this is not a valid
attribute for the Mx.1i tools. (By the way, the lowest level that goes
out to the netlist is the LUT4, not the simulation schematic below it,
since it is made up of primitives that are unknown to Xilinx P&R tools.
That's why you must set the 'Level=Xilinx M1' in the graphic user
interface, or if you are using the command line, the '-L Xilinx' option.)

What Xilinx P&R needs are attributes that read 'INIT=1234'

So I thought that since Pete solved his problem with the EDIFATTS.CFG file
and got it to rename the attributes ( his path is starting with EDIF (and
the EDIF netlist reader), and going back to Viewsim via WIR files and VSM)
I would try the same thing with the EDIF netlist writer. It turns out that
there is no equivalent capability that I can find.

SO ...

If you want to use LUTs or other initialized memory primitive, you need
to set both an attribute such as @INIT to the required value for
simulation to work, and an attribute named INIT for the netlister to
create an EDIF netlist that will be processed correctly by Mx.1i .

For example , a LUT4 might have the following two attributes attached to it:

@INIT=AA55
and
INIT=AA55

You can also set these to differing values, and you will get what you
deserve. While this isn't ideal, it is not too much of a hardship.

As promised, here is the bonus info: EQN

While not documented in either M2.1i or M3.1i systems, you can also place
an attribute with a name of EQN on a LUT, instead of the INIT (but not
instead of the @INIT). This will be passed on to Mx.1i tools, and will
build the logic you want (assuming you were reasonable).

Unfortunately what it won't do is simulate, because the @INIT can only be
a hex value.

For example , a LUT4 might have the following two attributes attached to it:

@INIT=8000
and
EQN=I0*I1*I2*I3

Which would work for both. Unfortunately, figuring out the @INIT value
gets harder for more complex EQN values.  The EQN can include I0 thru I3,
and the following operators:

~     NOT
*     AND
@     XOR
+     OR
(,)   grouping / precedence
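Simon's earlier suggestion, a small program that converts an equation into an INIT value, takes only a few lines. Here is a hypothetical Python version (Don's original was awk) that brute-forces all 16 input combinations; it assumes the precedence NOT, AND, XOR, OR, which happens to match Python's bitwise operators:

```python
def eqn_to_init(eqn):
    """Convert a LUT EQN string (I0..I3 with ~ * @ + operators)
    into the 4-digit INIT hex value."""
    # Map the EQN operators onto Python's bitwise operators.
    # Precedence (~, then *, then @, then +) matches ~ & ^ | in Python.
    expr = eqn.replace('@', '^').replace('*', '&').replace('+', '|')
    init = 0
    for sel in range(16):
        env = {'I0': sel & 1, 'I1': (sel >> 1) & 1,
               'I2': (sel >> 2) & 1, 'I3': (sel >> 3) & 1}
        if eval(expr, {'__builtins__': {}}, env) & 1:  # &1 masks ~'s negatives
            init |= 1 << sel
    return f"{init:04X}"

# Matches the worked example below: a 4-input AND is @INIT=8000.
assert eqn_to_init("I0*I1*I2*I3") == "8000"
assert eqn_to_init("~(I0*I1*I2*I3)") == "7FFF"
```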

So in summary: LUT4 can be simulated, and allows loading a LUT with a hex
value.

Netlists that resulted from hybrid SCH/HDL flows can be simulated.

EQN can be used to specify functionality, but simulation is a pain.
Either figure out the magic hex number, or go through the hybrid path, to
get the P&R software to give you the translated EQN to Hex conversion.

Philip Freidin


Article: 23724
Subject: 3.3v supply 2.5v chips
From: Pierce Chen <pierceNOpiSPAM@zsmc.com.cn.invalid>
Date: Thu, 06 Jul 2000 04:34:05 -0700
Links: << >>  << T >>  << A >>
Hello,

I am building a test board which contains both 3.3V and
2.5V devices.

1. Can I use a 3.3V supply for all devices?

2. If I use a DC-DC converter for the 2.5V devices, can the
2.5V outputs drive 3.3V inputs directly? Do I need pullups
on the 2.5V outputs?

best regards,
Pierce Chen
