Messages from 17450

Article: 17450
Subject: Re: Xilinx Virtex Block Select RAM, is it reg or flow thru output
From: Peter Alfke <peter@xilinx.com>
Date: Wed, 28 Jul 1999 16:59:12 -0700


James Yeh wrote:

>
> If you think you can simulate the BRAM,
> Well to quote a Judas Priest song, "You gotta another thing comin."
>

Maybe this will make things clearer:

There have been some questions about the Virtex BlockRAM read operation.

The superficial explanation given in the data book  is correct, although not
complete.

The read operation is synchronous = clocked, different from the read in the
LUT-SelectRAM, where read is simply combinatorial.
Also, BlockRAM is a true dual-port RAM with independent address, data, clock, and
control for each port, while the "dual-port" LUT-RAM is far more restrictive,
although sufficient for things like FIFOs.

Here are additional details:

A BlockRAM read operation is started by clocking in the Read Address.
This starts a dynamic read process. While that process goes on, the
previously read output is still being held stable. After somewhat less than the
specified access time  from the Read Address Clock, the output is allowed to
change to the new value.
So the transition is straight from the old value to the new one, no intermediate
glitches caused by the address decoder.
(This is a self-timed operation. We do similar things in other circuits).

Any UN-CLOCKED read ADDRESS changes after this have NO effect.
That's why we call the read operation "synchronous".

Even a write operation into the same location ( from the other port) does  not
affect the read output, since the reading is a dynamic operation that can only be
started by clocking in a read address.
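Peter's two rules (the output only changes when a read is clocked in, and writes from the other port leave it alone) can be condensed into a toy behavioral model. The Python below is purely illustrative, not Xilinx code and not a timing model:

```python
class BlockRam:
    """Toy model of the Virtex BlockRAM read behavior described above.
    Each port's output changes only when that port itself is clocked."""
    def __init__(self, depth=4096):
        self.mem = [0] * depth
        self.dout = {"A": 0, "B": 0}   # per-port registered outputs

    def clock(self, port, addr, din=None, we=False):
        """One clock edge on the given port ('A' or 'B')."""
        if we:
            self.mem[addr] = din
            self.dout[port] = din       # write-back on the writing port
        else:
            self.dout[port] = self.mem[addr]  # synchronous (clocked) read
        return self.dout[port]

ram = BlockRam()
ram.clock("A", 5, din=0xAB, we=True)  # port A writes 0xAB to address 5
ram.clock("B", 5)                     # port B clocks in a read address
ram.clock("A", 5, din=0xCD, we=True)  # A overwrites the same location...
assert ram.dout["B"] == 0xAB          # ...but B's output holds until B clocks again
```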

Hope this helps and puts this thread to bed.

Peter Alfke, Xilinx Applications

Article: 17451
Subject: Re: Partial Reconfiguration?
From: Rickman <spamgoeshere4@yahoo.com>
Date: Wed, 28 Jul 1999 21:04:14 -0400
Jim Frenzel wrote:
> 
> I've fallen a bit behind on the latest developments.
> 
> Which FPGAs support partial reconfiguration?
> 
> I see that the Xilinx Virtex parts do to some degree
> and I assume the XC6200 series is no more (or did some
> company pick that up?) ...

The Lucent OR2T and OR3T series claim to support it, but I believe you
are on your own on the software side. They have a one paragraph mention
of it in the data book and that's it!


-- 

Rick Collins

rick.collins@XYarius.com

remove the XY to email me.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 17452
Subject: Re: NRZ Deserializing in Virtex
From: allan.herriman.hates.spam@fujitsu.com.au (Allan Herriman)
Date: Thu, 29 Jul 1999 03:05:23 GMT
On 27 Jul 1999 12:08:01 PDT, muzok@nospam.pacbell.net (muzo) wrote:

>Actually it does but in a statistical way. You can look at the
>transitions to generate a timing error and use a VCO/Loop Filter to
>adjust your local clock. This is a very common thing to do in
>communication system design.

Yeah.  I posted a suitable phase detector in my other post in this
thread.  I'm still searching for a phase/frequency detector for NRZ
though.  Does anyone know of one?

Thanks,
Allan.

>Peter Alfke <peter@xilinx.com> wrote:
>
>>
>>
>>Eirik Esp wrote:
>>
>>> I am looking at trying to extract clock & data from a ~100 MHz NRZ serial
>>> data stream.  I was just wondering if anyone has tried this and had any
>>> successes of failures to report?  Any suggestions?
>>
>>I once published a circuit that recovers clock and data from a
>>Manchester-encoded bitstream.
>>
>>To my knowledge, Non-Return-to-Zero does not contain clock information, so you
>>would have to know the clock frequency very precisely.
>>
>>Peter Alfke, Xilinx Applications
>>
>
>muzo
>
>Verilog, ASIC/FPGA and NT Driver Development Consulting (remove nospam from email)

Article: 17453
Subject: Re: Microcomputer buses for use inside FPGA/ASIC devices?
From: "Anthony Ellis - LogicWorks" <aelogic@iafrica.com>
Date: Thu, 29 Jul 1999 07:20:54 +0200
Your points are valid. But for FPGA's I am not entirely convinced. Some
issues:
    (1) I can't envisage the FPGA system that can utilise 8 Gbytes/sec as a
bus. This is like having a quad 603E in an FPGA.
    (2) A serial bus is relatively more independent of the
consumer/generator bus width and byte ordering.
    (3) My experience with FPGA's leads me to believe that connecting half a
dozen or so peripherals along a parallel 64-bit wide bus will kill the
expected performance after layout/routing etc. On paper and in specs
everything looks easy.
    (4) Any idea of the power consumption of a 64-bit bus running at 1 GHz
in an FPGA?
    (5) A point-to-point serial loop architecture will always be easier to
guarantee the bus link speed, as it is basically a register-to-register
delay with one routing path in between. My guess is that if one finds an
FPGA that can clock a wide parallel bus across an internal VME-type bus at
1 GHz, then the same FPGA will clock a point-to-point loop at 4 GHz.
    (6) Parallel busses may be more efficient in gate count, but most
systems these days generally tend to interface the parallel bus to a
dual-port memory. The FPGA architecture's ability to have many distributed
simple FIFOs makes implementing a serial loop as practical and
gate-efficient as such a parallel bus - look how many gates it takes to
implement PCI 32. Two or three of these on an FPGA and you have enough
gates left to implement a couple of Z80's.

Wade D. Peterson wrote in message <7nn0et$gu0$1@news1.tc.umn.edu>...
>Anthony Ellis - LogicWorks <aelogic@iafrica.com> wrote in message
>news:7nm29r$nrg$1@nnrp01.ops.uunet.co.za...
>> My own pennys worth here.
>>
>> I have the feeling that the current move in uComputer busses is away from
>> parallel but back to serial. Witness NGIO, FutureIO, FibreChannel etc.
>> Going parallel 8/16/32/64 with byte ordering etc..etc is a step backwards
>> for future efforts. The way forward is only to look at the past for what
>> not to do next.
>
>As a general rule, I don't disagree with that...at least for (non
>intra-chip) system-level work.
>
>Remember, though, that we're talking here about uC buses inside of an FPGA
>or ASIC (for system-on-chip).  We're routing the uC bus across the surface
>of the die.  In this situation the parallel buses will always be much
>faster and require less logic than a serial bus.
>
>FPGA and ASIC design rules are easily allowing 50 - 100 MHz buses across
>the die, and some of the latest product introductions are starting to
>nudge up to the 1 Ghz toggle rates.  I'm looking a little bit into the
>future here, but if we could get a 1 Ghz system clock with a 64-bit wide
>data bus, we're talking about a 8 Gbyte/second data rate.  Also note that
>I'm saying Gbyte, and not Gbit.
>
>Buses like VMEbus, PCI and cPCI, NGIO and so forth all have inherent speed
>limits that are caused by large inductance and capacitance caused by IC
>pins, PC boards and connectors.  With system-on-chip, the inductance and
>capacitance of the interconnections are a small fraction of what they are
>with these other technologies, and are inherently faster.
>
>
>> An 8 bit wide "serial-link" interface would suffice for most applications
>> providing the protocol is simple.  At 50 Mhz this gives 400 Mbits/sec at
>> 100 Mhz you can knock VME.
>> This would also allow an external chip-chip and board to board link using
>> the same "network" using say LVDS I/O.
>>
>> Anthony.
>
>I agree with your numbers that an 8-bit wide serial interconnect (such as
>Myrinet) could get you to 400 Mbit/sec = 50 Mbyte/sec (actually, you can
>get about 160 Mbyte/sec on a single Myrinet link).  However, this is still
>a far cry from the cutting edge of VMEbus, which is now pushing 500
>Mbyte/sec.  For more information about the high speed versions of VMEbus,
>see the VME320 technology at http://www.arizonadigital.com/
>
>Actually, I don't see the internal FPGA/ASIC buses as really competing
>with the backplane/cable buses.  They solve a different set of problems.
>However, the data rates of these other buses are fundamentally limited as
>to how fast data can be delivered to them.  In this case, we're still back
>to how fast we can move data on system-on-chip.
>
>Wade D. Peterson
>Silicore Corporation
>3525 E. 27th St. No. 301, Minneapolis, MN USA 55406
>TEL: (612) 722-3815, FAX: (612) 722-5841
>URL: http://www.silicore.net/  E-MAIL: peter299@maroon.tc.umn.edu
>

Article: 17454
Subject: Re: Microcomputer buses for use inside FPGA/ASIC devices?
From: nospam_ees1ht@ee.surrey.ac.uk (Hans)
Date: 29 Jul 1999 07:27:23 GMT
In article <7nl93u$92n$1@news1.tc.umn.edu>, peter299@maroon.tc.umn.edu says...
>
>Many thanks to Jim Frenzel for supplying this link:
>
>The ARM AMBA spec can be downloaded from 
>http://www.arm.com/Documentation/UserMans/AMBA/index.html.
>
>The early spec was also described in an IEEE Micro article,
>Jul/Aug 97.
>
>I've added this to the database at http://www.silicore.net/uCbusum.htm
>
>-- 
>Wade D. Peterson
>Silicore Corporation
>3525 E. 27th St. No. 301, Minneapolis, MN USA 55406
>TEL: (612) 722-3815, FAX: (612) 722-5841
>URL: http://www.silicore.net/  E-MAIL: peter299@maroon.tc.umn.edu
>
>
Wade,

There is also an asynchronous SoC bus called MARBLE. It was developed at
Manchester University for the asynchronous Amulet processor family.

Sorry no link,

Hans.

Article: 17455
Subject: FilterExpress Filter Synthesis Software
From: "Gareth Jones" <gareth@systolix.co.uk>
Date: Thu, 29 Jul 1999 09:33:48 +0100
Systolix recently released a free version of their filter synthesis tool,
FilterExpress, for download from their website. Unfortunately, due to the
large response and some early difficulties with the automatic registration,
we may have lost some of the requests. We have now fixed this problem.

If you registered FilterExpress, but have not yet received a key, either
re-register at http://www.systolix.co.uk/swdownload.htm or mail me directly
and I'll make sure you get a key immediately.

Our apologies for the inconvenience

Gareth Jones

------------------------------
 Systolix Ltd.
 Tel : +44 151 242 0600
 Fax : +44 151 242 0609
 www.systolix.co.uk
------------------------------



Article: 17456
Subject: Re: Partial Reconfiguration?
From: Ray Andraka <randraka@ids.net>
Date: Thu, 29 Jul 1999 08:29:52 -0400
Atmel 40K and 6K, Xilinx Virtex and 6200 (discontinued), Lucent Orca2
and 3, Motorola (discontinued).  Of these, only Atmel really documents
the partial configuration more than a paragraph or two, and even that
documentation isn't really enough.

Jim Frenzel wrote:

> I've fallen a bit behind on the latest developments.
>
> Which FPGAs support partial reconfiguration?
>
> I see that the Xilinx Virtex parts do to some degree
> and I assume the XC6200 series is no more (or did some
> company pick that up?) ...
>
> --
> Jim Frenzel, Assoc. Professor      phone:            (208) 885-7532
> Electrical Engineering, BEL 213    fax:              (208) 885-7579
> University of Idaho, MS 1023       email:       jfrenzel@uidaho.edu
> Moscow, ID 83844-1023 USA          www:    www.uidaho.edu/~jfrenzel



--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 17457
Subject: Re: Xilinx Virtex Block Select RAM, is it reg or flow thru output
From: brian@shapes.demon.co.uk (Brian Drummond)
Date: Thu, 29 Jul 1999 12:51:35 GMT
On Wed, 28 Jul 1999 16:59:12 -0700, Peter Alfke <peter@xilinx.com>
wrote:

>
>
>James Yeh wrote:
>
>>
>> If you think you can simulate the BRAM,
>> Well to quote a Judas Priest song, "You gotta another thing comin."
>>
>
>Maybe this will make things clearer:
>
>There have been some questions about the Virtex BlockRAM read operation.
>
>The superficial explanation given in the data book  is correct, although not
>complete.
>
>The read operation is synchronous = clocked, different from the read in the
>LUT-SelectRAM, where read is simply combinatorial.

[...]

>Here are additional details:
>
>A BlockRAM read operation is started by clocking in the Read Address.
>This starts a dynamic read process. While that process goes on, the
>previously read output is still being held stable. After somewhat less than the
>specified access time  from the Read Address Clock, the output is allowed to
>change to the new value.
>(This is a self-timed operation. We do similar things in other circuits).

This differs a little from your earlier clarification, which was that
the outputs were registered; in the context of most SSRAMs that
suggests two clocks before you see the new output. But I think this is
what the original poster was trying to clear up.

>Any UN-CLOCKED read ADDRESS changes after this have NO effect.
>That's why we call the read operation "synchronous".

OK - so far it looks like a synchronous RAM, of the flow-through type.

>Even a write operation into the same location ( from the other port) does  not
>affect the read output, since the reading is a dynamic operation that can only be
>started by clocking in a read address.

But now it doesn't!

I am starting to wonder if this isn't actually quite unlike both of the
standard (usually single-port) varieties of synchronous SRAM (standard
in the sense that SRAM devices of these types are available from several
manufacturers):

1: Flowthrough.
The read address is registered: some time after a new address is clocked
in, the read data will change. However one would expect a subsequent
write to that address to cause the data outputs to change.

2. Pipelined.
The read address is registered: so are the data outputs. Sometime after
a new address is clocked ... nothing happens (that can be seen outside
the device). Until the _second_ clock edge, when the data changes after
a very short time.

I started to wonder if the BlockRAM was

3: the address is unregistered, but the data outputs are registered.
This would mimic the operation you describe, but wouldn't account for
the long clock-output times.

Which, along with your phrase "self-timed operation" leads me to think
that it is actually

4: Pipelined, but with an internal delay line to generate the second
clock (the data output clock) from the first (address clock). Thus
although it is pipelined, it is functionally equivalent to the
flowthrough device, (modulo subsequent writes) in that the data appears
after only a single clock.

This might explain why neither of the standard terms quite nails the
description. 

5. Or is it something else again?
 
>Hope this helps and puts this thread to bed.

Indeed.

By the way, where else do you use self-timed operations?

- Brian

Article: 17458
Subject: Re: APEX initial values
From: s_clubb@NOSPAMnetcomuk.co.uk (Stuart Clubb)
Date: Thu, 29 Jul 1999 13:18:06 GMT
On Tue, 27 Jul 1999 15:23:34 -0500, Tom McLaughlin
<tomm@arl.wustl.edu> wrote:

>Hello,
>We are currently using Xilinx Virtex FPGAs, but are evaluating the new
>Altera APEX FPGAs.  One question we have is whether or not you can set
>initial values in the embedded RAM in the APEX devices.  You can do this
>in the Virtex BlockSelect RAM and it has become very convenient for us.

You can, using either a hex file (mandatory for simulation models) or
a memory initialisation file (.mif).
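For reference, a minimal memory initialisation file along the lines Stuart mentions might look like the sketch below (syntax as used by Altera's tools; depth, width and contents are illustrative only):

```text
DEPTH = 8;
WIDTH = 16;
ADDRESS_RADIX = HEX;
DATA_RADIX = HEX;
CONTENT
BEGIN
    0 : 1234;
    1 : ABCD;
    [2..7] : 0000;
END;
```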

Cheers
Stuart
For Email remove "NOSPAM" from the address
Article: 17459
Subject: Re: Microcomputer buses for use inside FPGA/ASIC devices?
From: "Wade D. Peterson" <peter299@maroon.tc.umn.edu>
Date: Thu, 29 Jul 1999 08:29:59 -0500
Anthony Ellis - LogicWorks <aelogic@iafrica.com> wrote in message
news:7nooaq$1ir3$1@nnrp01.ops.uunet.co.za...
> Your points are valid. But for FPGA's I am not entirely convinced. Some
> issues:
>     (1) I can't envisage the FPGA system that can utilise 8Gbytes/sec as a
> bus. This is like having a quad 603E in an FPGA.

I agree that today, it would be impossible to do an 8 Gbyte/sec FPGA bus at
64-bits.  However, technology is a bit like skeet shooting...you've got to lead
the target if you're going to hit it.  We do know that FPGA technology is
following Moore's law.  Every 18 months the price will drop in half and the
performance will double.  I know that Xilinx has announced that they hit 1 GHz
toggle rates in experimental parts, and I believe that they and others will hit
that number on production parts sometime in the future.

FPGA parts are surprisingly fast already, however.  We did a microprocessor
design on an FPGA part, and were pleasantly surprised to find that it ran twice
as fast (on garden variety parts and speeds) as the full custom version of the
same microprocessor (from another company).

Also, we're not alone in the quest for a parallel microcomputer bus for
FPGA/ASIC.  I just updated my summary of other FPGA/ASIC microcomputer buses
(see: http://www.silicore.net/uCbusum.htm ), and we've got pretty good company
in our quest, namely IBM and ARM.


>     (2) A serial bus is  relatively more independent of the
> consumer/generator bus width and byte ordering.

The byte ordering problem really doesn't bother me that much.  That's pretty
much a housekeeping duty.  As far as serial vs. parallel architectures go, it's
always been my experience that parallel architectures will always have an
inherent speed advantage over serial.  Things go faster when they're done in
parallel.


>     (3) My experience with FPGA's leads me to believe that connecting 1/2
> dozen or so peripherals along a parallel 64 bit wide bus
>           will kill the expected performance after layout/routing etc. On
> paper and in specs everything looks easy.

We recently did an FPGA interconnection with eight slave devices, and things
went just fine.  That one had a 40 MHz internal bus, and we routed on the
slowest speed grade of parts (to save money).  One trick that I've learned is
that it's more efficient to use multiplexor routing than three-state routing.
This is counter-intuitive if you're used to three-state buses.  However, we
found that the place and route was a snap.

That's one reason we are designing Wishbone to support both multiplexor and
three-state buses.  You can certainly do three-state buses on an FPGA, but in
our work we've found that multiplexor buses are much faster, are surprisingly
compact, and are more portable than three-state buses.

It's quite interesting to watch how the place and route tools handle the
multiplexor buses.  You end up with the least significant data bit over on one
end of the die, and the next bit somewhere totally different.  There doesn't
seem to be any rhyme or reason how it places the buses, but they're pretty good
at finding an optimal solution.


>     (4) Any idea of the power consumption of a 64 bit bus running at 1 Ghz
> in an FPGA?

Nope.  It's just like everything else in the computer industry, though.  You
tend to pay for speed with money and power consumption.  I think a good goal in
a general purpose bus is to provide a variety of price/performance points.  The
upper end is for the speed freaks.


>     (5) Having a point-to-point serial loop architecture will always be
> easier to guarantee the bus link speed as it is basically
>           a register to register delay with one routing path inbetween. My
> guess is that if one finds an FPGA that can clock a wide parallel bus
>           across an internal VME type bus at 1 Ghz  then the same FPGA will
> clock a point-to-point loop at 4 GHz.

I agree.  However, a 32-bit or 64-bit data operand will also be delivered more
slowly on the serial interconnect.


>     (6) Parallel busses may be more efficient in gate count but most systems
> these days generally tend to interface the parallel bus to a dual
>          port memory. The FPGA architecture's ability to have many
> distributed simple FIFO's make implementing a serial loop as practical
>          and gate efficient as such a parallel bus - look how many gates it
> takes to implement PCI 32. Two or three of these on a FPGA
>          and you have enough gates left to implement a couple of Z80's.
>

Well, you can do dual ported memory in FPGAs pretty easily.  Some of the more
recent parts from Xilinx (Virtex), Altera (APEX) and Lucent have great dual
ported memories.  They're pretty fast, and are getting bigger too.  We've gotten
pretty good dual ported memories out of the older parts, too.

Don't get me wrong...the serial architectures make a lot of sense for large
systems.  They are less expensive for large distributed systems (such as
wide-area and telephone networks), they take less weight (for aircraft and
automobiles) and are easy to connect together.  I don't think those advantages
apply at the die level, though.

--
Wade D. Peterson
Silicore Corporation
3525 E. 27th St. No. 301, Minneapolis, MN USA 55406
TEL: (612) 722-3815, FAX: (612) 722-5841
URL: http://www.silicore.net/  E-MAIL: peter299@maroon.tc.umn.edu


Article: 17460
Subject: Re: Partial Reconfiguration?
From: bdipert@NOSPAM.pacbell.net (Brian Dipert)
Date: Thu, 29 Jul 1999 13:41:36 GMT
Jim,
I've got an article on reconfigurable devices (and associated
software), including partially reconfigurable architectures, coming up
in the August 5 issue of EDN. Feedback always appreciated.

>I've fallen a bit behind on the latest developments.
>
>Which FPGAs support partial reconfiguration?
>
>I see that the Xilinx Virtex parts do to some degree
>and I assume the XC6200 series is no more (or did some
>company pick that up?) ...

Brian Dipert
Technical Editor: Memory, Multimedia and Programmable Logic
EDN Magazine: The Design Magazine Of The Electronics Industry
http://www.ednmag.com
1864 52nd Street
Sacramento, CA   95819
(916) 454-5242 (voice), (916) 454-5101 (fax)
***REMOVE 'NOSPAM.' FROM EMAIL ADDRESS TO REPLY***
mailto:bdipert@NOSPAM.pacbell.net
Visit me at http://members.aol.com/bdipert
Article: 17461
Subject: Re: Xilinx Virtex Block Select RAM, is it reg or flow thru output
From: mcgett@xilinx.com (Ed Mcgettigan)
Date: 29 Jul 1999 08:42:37 -0700
In article <37a04dbd.17633595@news.demon.co.uk>,
Brian Drummond <brian@shapes.demon.co.uk> wrote:
>
>But now it doesn't!
>
>I am starting to wonder if this isn't actually quite unlike both the
>standard (usually single-port) varieties of synchronous SRAM (standard
>in the sense that SRAM devices of these types are available from several
>manufacturers)
>

You're right, the Virtex Block SelectRAM cells (RAMB4*) have several
unique features and can be configured as either single-port
or dual-port memories of the same width or different widths (16 on PortA,
4 on PortB for example).

The confusion on this thread is stemming from a mixture of single
port RAM terms and dual-port functionality.  I'll make an attempt 
to clear up the confusion, maybe I'll even be successful. :)

As I see it, clocked dual-port memories can have the following 
attributes:

Read Through (1 clock edge) 
---------------------------
The read address is registered on the read port clock edge and data appears 
on the output after the RAM access time.  

   Some memories may place the latch/register at the outputs depending
   on the desire to have a faster clock-to-out vs setup.  This is generally
   considered to be an inferior solution since it changes the read operation
   to an asynchronous function with the possibility of missing an 
   address/control transition during the generation of the read pulse 
   clock.

Read Pipelined (2 clock edges)
------------------------------
The read address is registered on the read port clock edge and the data is
registered and appears on the output after the second read clock edge.

Write Back (1 clock edge)
-------------------------
The write address is registered on the write port clock edge and the data 
input is mirrored on the write port output. Data is written to the memory
in the same cycle.

Write Through
-------------------------
The write address is registered on the write port clock edge and the data
is mirrored on the read port data output if the write and read addresses
match. Data is written to the memory in the same cycle. This is NOT 
supported in Virtex, but is available on some commercial RAMs.
 
 
The Virtex Block SelectRAM have the "Read Through" and "Write Back"
functions.  The "Read Pipelined" function can be done by simply adding
CLB registers to the outputs to improve clock-to-out timing.


In summary the Block SelectRAMs have the following characteristics:

   1) All inputs are registered with the port clock and have a 
      setup-to-clock timing specification.
   
   2) All outputs have a Read Through or Write Back function depending
      on the state of the port WR pin and are available after the 
      clock-to-out timing specification relative to the port clock.
     
         As a minor note, the outputs are latched using a self-timed
         circuit to provide a glitchless output transition.
         
   3) The Block SelectRAMs are true SRAM memories and do not have a
      combinatorial path from the address to output.  The LUT SelectRAM
      cells in the CLBs are still available with this function.
      
   4) Write Through is not available, nor desirable when using 
      different clocks for the write and read ports.
      
   5) The ports are completely independent from each other (ie;
      clocking, control, address, read/write function, and data width)
      without arbitration.
      
   6) A write operation requires only 1 clock edge.
   
   7) A read operation requires only 1 clock edge.
   
   8) Data on the port outputs will not change until the port does 
      another read or write operation. (ie; port is enabled and is
      clocked).
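The "Read Through" and "Read Pipelined" cases above differ only by one output register, as Ed's summary notes. A small illustrative Python sketch of that difference (not Xilinx code, purely behavioral):

```python
class ReadThroughRam:
    """Read Through: address clocked in, data out after one edge."""
    def __init__(self, mem):
        self.mem = list(mem)
        self.dout = 0

    def clock(self, addr):
        self.dout = self.mem[addr]   # data valid after this edge
        return self.dout

class ReadPipelinedRam(ReadThroughRam):
    """Read Pipelined: the same RAM with a CLB-style register on the
    output, so data appears one edge later (better clock-to-out)."""
    def __init__(self, mem):
        super().__init__(mem)
        self.q = 0

    def clock(self, addr):
        self.q = self.dout           # register last cycle's RAM output
        self.dout = self.mem[addr]
        return self.q                # two edges from address to data
```

With mem = [10, 20, 30], the read-through RAM returns 20 on the first clock of address 1, while the pipelined version returns it one clock later.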
      
Simulation models are available for VHDL, Verilog and the Xilinx
Foundation simulator.


Ed
Article: 17462
Subject: Re: Xilinx Virtex Block Select RAM, is it reg or flow thru output
From: Peter Alfke <peter@xilinx.com>
Date: Thu, 29 Jul 1999 11:42:36 -0700


Brian Drummond wrote:

> I am starting to wonder if this isn't actually quite unlike both the
> standard (usually single-port) varieties of synchronous SRAM (standard
> in the sense that SRAM devices of these types are available from several
> manufacturers)
>
> <snip>
>
> Which, along with your phrase "self-timed operation" leads me to think
> that it is actually
>
> 4: Pipelined, but with an internal delay line to generate the second
> clock (the data output clock) from the first (address clock). Thus
> although it is pipelined, it is functionally equivalent to the
> flowthrough device, (modulo subsequent writes) in that the data appears
> after only a single clock.

I agree with that.

>
>
> This might explain why neither of the standard terms quite nails the
> description.

Well, we try to be innovative, provided it results in a better, more useful device.
Standard nomenclature is not always applicable...

>
> By the way, where else do you use self-timed operations?

For the synchronous write pulse generation in the LUT-RAM, and also for the new
shift-register option (SR-LUT) in Virtex. (Did you know that you can use any
4-input LUT in Virtex as a shift register of length 1...16, as defined by the four
address inputs? Makes a very efficient shift register with zero overhead. I am
writing an app note describing efficient LFSRs and CRC circuits using the SR-LUT.)
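The SR-LUT idea is easy to model. Below is an illustrative Python sketch (not from Peter's app note): a 16-deep shift register whose output tap is selected by the four "address" bits, and the kind of maximal-length LFSR that one such shift register plus an XOR can implement:

```python
def srl16(samples, length):
    """Shift register of selectable length 1..16: the 4 'address' bits
    pick which tap of the 16-deep register drives the output."""
    sr = [0] * 16
    out = []
    for bit in samples:
        out.append(sr[length - 1])   # tap selected by the address inputs
        sr = [bit] + sr[:-1]         # shift one position per clock
    return out

def lfsr4_period():
    """4-bit Fibonacci LFSR, taps x^4 + x^3 + 1 (primitive): period 15."""
    state, steps = 0b0001, 0
    while True:
        fb = ((state >> 3) ^ (state >> 2)) & 1
        state = ((state << 1) | fb) & 0xF
        steps += 1
        if state == 0b0001:
            return steps
```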

Peter Alfke


Article: 17463
Subject: Xilinx timing constraints question
From: Jeff Streznetcky <jeff.streznetcky@lmNOSPAMco.com>
Date: Thu, 29 Jul 1999 16:11:47 -0400
I have the following situation.  I am receiving data into a
Virtex device at a 50 MHz clock rate.  This data is valid for a
total time of 7 ns around the rising edge of the clock (3.5 ns on
either side, see waveforms below).

            ____                  ____
DATA   ____/    \________________/    \______
           \____/                \____/
               __________            __________
CLK    _______/          \__________/


I am using the following constraint to attempt to guarantee that
I receive the data correctly:
    NET "rxClkIn" TNM_NET = "rxClkIn";
    OFFSET = IN 3.5 ns BEFORE "rxClkIn";

The problem I am experiencing is that this clock net has to be
placed on local routing (I cannot use a global clock net).  The
problem this presents is that the clock delay can be greater than the
input delay (pad -> f/f).  In this case the constraint listed above is
met, and I now have a negative setup/hold time requirement w.r.t. the
clock.  This situation will not work if the hold time requirement is
greater than 3.5 ns.

For example, assume it takes 7ns for the clock to get from the
pad to the input F/F of the device and that it takes 2ns for the
data to propagate from the pad to the input of the F/F (plus any
setup time required).  This will require 5ns of hold time around
the rising edge of the clock.  I only have 3.5ns of hold time to
work with.
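The arithmetic in that example can be laid out explicitly (numbers taken from the paragraph above; Python used only as a calculator):

```python
# Jeff's example: clock takes 7 ns pad -> F/F, data takes 2 ns pad -> F/F.
clock_delay_ns = 7.0
data_delay_ns = 2.0
data_valid_after_edge_ns = 3.5   # data guaranteed valid only 3.5 ns past the edge

# The flip-flop actually samples this long after the external clock edge:
sample_skew_ns = clock_delay_ns - data_delay_ns          # 5.0 ns

# So the data must be held 5 ns past the edge, but the interface only
# guarantees 3.5 ns -- the OFFSET constraint is met, yet the design fails.
assert sample_skew_ns > data_valid_after_edge_ns
```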

I have looked at Xilinx answer #4188
(http://www.xilinx.com/techdocs/4188.htm) but this does not help
in my case.  Solution 1 will require me to put a <3.5 + data
input prop delay + setup time on input f/f> ns constraint on the
input clock.  What number will this be?  I have run TRCE on my
design and it appears that Tiopickd (pad -> f/f setup time, w/
delay) is 5.229 ns for the particular device I am using.  Does
this imply that I should put a 5.229 + 3.5 ns constraint on the
clock net?  If this is the case, what happens when I go to a
different device / speed grade part?  Then I will have to
re-adjust all of my constraints?  I do not like this solution.  I
have not even considered solution 2, that solution seems hokey to
me.

Ultimately I would like to construct some timing constraints
which will guarantee the design will work when the constraints
are met.  How can I construct my constraints to guarantee my
design will work properly?

Thanks.


-Jeff



Article: 17464
Subject: Re: Digital modulator? Synthesisable Sin(x) funct.
From: melus@esi.us.es (Luis Yanes)
Date: Thu, 29 Jul 1999 23:14:18 GMT
On Wed, 28 Jul 1999 18:00:14 -0400 Ray Andraka <randraka@ids.net>
wrote:

>At your low data rate, you could use an iterative serial CORDIC to do the
>modulation.  In my CORDIC survey paper, (available on my website) I show an
>example of a bit serial iterative CORDIC that fits in 22 CLBs in a xilinx 4K or
>spartan device.  That one will handle bit rates over 100 Mb/sec in current
>devices, so the possible data rates are much higher than your 8Ks/s needs.  It
>doesn't map as nicely into the 5200 series because it uses the CLB RAM for the
>shift registers.  Still, I think an iterative approach may fit into the 5202.

I suppose I was wrong, but those 100 Mb/s are at the carrier
frequency, modulation included. My 8 ks/s is the baseband modulating
signal on a carrier that I would like to be around 5-6 MHz.
Low enough anyway, I hope.

>If you use the CORDIC approach, you don't need a multiplier!  The rotation mode
>CORDIC takes an input vector and rotates it by a phase angle (also an input).  If
>you feed the I component of the CORDIC input with your signal and the  component
>with zero, then rotation by a moving phase angle will modulate the input signal.

Ok, right. I didn't read it all last time. It's a very clever algorithm,
with amazing possibilities from very small modifications! With it I get
the sine, cosine and multiplication in a single block! And I gather that
for sin/cos the algorithm is unconditionally convergent, unlike the
hyperbolic functions.

Now I'm learning about CORDIC. (I got your paper and some worked examples
from http://devil.ece.utexas.edu/tutorial/index.html ; although they seem
very oversimplified, examples to do by hand always help in understanding
how it works).
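The rotation trick Ray describes above can be sketched in a few lines (illustrative floating-point Python; a real FPGA version would use fixed-point shift-and-add with a small arctangent table, which is the whole point of CORDIC):

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotation-mode CORDIC: rotate the vector (x, y) by `angle` radians.
    Illustrative floating-point sketch, not a hardware implementation."""
    # Elementary angles atan(2**-i) and the constant CORDIC gain.
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i, a in enumerate(atans):
        d = 1.0 if z >= 0 else -1.0       # rotate so that z converges to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x / gain, y / gain             # compensate the constant gain

# Modulation as described: feed the signal sample into I, zero into Q,
# and rotate by the running carrier phase -- the outputs are
# s*cos(phi) and s*sin(phi), with no explicit multiplier needed.
i_out, q_out = cordic_rotate(0.5, 0.0, math.pi / 6)
```

Stepping the `angle` input by a phase increment every sample gives the modulated carrier directly.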

>One more thing, check to make sure you can compile the 5200 series with F1.3.
>I'm not sure the 5200 series is supported under M1.  You may have to use the old
>xact6 software to use it.  You should be able to get the spartan parts in small
>quantities from a reseller such as hamilton avnet.  Try http://www.avnet.com .
>The foundation tools are upto 1.5i.  You should be able to download updates from
>the web (although I'm not sure you can with the student edition).  Xilinx has
>just released 2.1 which has added capabilities and improved PAR.

No, F1.3 doesn't support the XC5200 series; that is why I used the
F1.4 at school. My Foundation package isn't the student
edition, but the Foundation Base (without VHDL :-( ), which cost me
about US$ 150.

I searched the Xilinx site for updates, but I couldn't find any, only
patches and speed files, and none that would upgrade to F1.4 or later.
I asked the local distributor here, where I bought the package, and was
told that I must buy another full package: the same one I own, just a
newer version of the software!

Thanks for the reference, I'll look there to buy a bigger Spartan
FPGA.

73's de Luis

mail: melus@esi.us.es
Ampr: eb7gwl.ampr.org
http://www.esi.us.es/~melus/   <- Homebrewed Hardware Projects with PCBs
Article: 17465
Subject: Re: Partial Reconfiguration?
From: Tom Kean <tom@algotronix.com>
Date: Fri, 30 Jul 1999 01:09:15 +0100
Links: << >>  << T >>  << A >>



Ray Andraka wrote:
> 
> Atmel 40K and 6K, Xilinx Virtex and 6200 (discontinued), Lucent Orca2
> and 3, Motorola (discontinued).  Of these, only Atmel really documents
> the partial configuration more than a paragraph or two, and even that
> documentation isn't really enough.
> 

The XC6200 documented the programming interface in great detail.  Almost
all the recent academic work on partial reprogrammability which used commercial
chips used XC6200 and a lot of good papers were published in FCCM and FPL.

But as you say Xilinx chose to kill it - and today there are no commercial
chips that seriously attempt to do partial reconfigurability.  Atmel comes
the closest but their stuff is not competitive with the newer Xilinx or 
Altera chips in any other respect.

Tom.

Article: 17466
Subject: Re: Partial Reconfiguration?
From: Ray Andraka <randraka@ids.net>
Date: Thu, 29 Jul 1999 20:10:46 -0400
Links: << >>  << T >>  << A >>


Tom Kean wrote:

> Ray Andraka wrote:
> >
> > Atmel 40K and 6K, Xilinx Virtex and 6200 (discontinued), Lucent Orca2
> > and 3, Motorola (discontinued).  Of these, only Atmel really documents
> > the partial configuration more than a paragraph or two, and even that
> > documentation isn't really enough.
> >
>
> The XC6200 documented the programming interface in great detail.  Almost
> all the recent academic work on partial reprogrammability which used commercial
> chips used XC6200 and a lot of good papers were published in FCCM and FPL.
>
> But as you say Xilinx chose to kill it - and today there are no commercial
> chips that seriously attempt to do partial reconfigurability.  Atmel comes
> the closest but their stuff is not competitive with the newer Xilinx or
> Altera chips in any other respect.
>

Agreed.  The lack of a carry chain in the Atmel chips is a very serious handicap
for anything arithmetic.  XC6200 documentation should be a model for the others to
follow.  BTW, that chip had its fair share of architectural problems for real world
designs.  If I had to pick one right now, I'd probably go with the Virtex unless I
knew I could get by without the features missing in Atmel (not likely in most of my
recent work).

> Tom.
>



--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 17467
Subject: Re: Epld to Fpga design.
From: "Andy Peters" <apeters@noao.edu.nospam>
Date: Thu, 29 Jul 1999 19:25:03 -0700
Links: << >>  << T >>  << A >>
ProSyst wrote in message <7nhl7h$3v3$1@elric.planete.net>...
>Hello,
>            I am looking for software and experience to translate an XC7272
>PC84 design to an XC9572 PC84 design.
>            (I am a French technician with no experience with EPLD and
>FPGA chips.)
>Thanks to reply at :
>                                rdruesne@prosyst.fr


If you have the original design sources (schematics, VHDL or ABEL code), why
not run them through the tools and target them towards the 9572?

Or do you not have the tools?
--
----------------------------------------------------------------------------
--
Andy Peters
Sr Electrical Engineer
National Optical Astronomy Observatories
apeters (at) noao.edu

"I'm not judging you, I'm judging me."
-- Mission of Burma, "Academy Fight Song"



Article: 17468
Subject: Re: Problem with Max+PlusII / Flex10k
From: "Carlhermann Schlehaus" <carlhermann.schlehaus@t-online.de>
Date: Fri, 30 Jul 1999 05:13:55 +0200
Links: << >>  << T >>  << A >>
Hi,


Nicolas Matringe <nicolas@dot.com.fr> schrieb in im Newsbeitrag:
379F2031.2989E6CC@dot.com.fr...

[...]
>
> I finally found a workaround by specifying the version number at reset:
> ...
>    PROCESS (clk, rst_i)
>     BEGIN
>       IF (rst_i = '1') THEN
>         d_out <= "00000000" & ver_num & "00";
> ...
>
> That doesn't look "clean" to me but it works
>

Well, please take a look at 'Global Project Logic Options'. Have you
specified the preset / clear / clk to be global?
Do you get a compiler message "Presettable registers will power up ....",
and in particular, do they power up low?
I know these problems from FLEX6K devices, as they have just a single
preset latch at the LE output and thus implement a reset to '0' with a '0'
preset and an additional latch.
Specifying register contents only at power-up (as you did in your
code) sometimes interferes with the power-up preset of the register
contents (as far as I can tell), thus preventing correct register contents.
When you perform the register load on every reset, you have the
workaround that, after the initial power-up sequence, you actively change
the register contents.
I wouldn't regard this as the best way to preset registers at power-up,
but it seems to be the only way to get things working.
In one of my designs (just a counter structure with enable and preset) I
saw such strange behavior even in simulation.
I wrote a counter design presetting the counter at power-up. The design was
compiled for the 6K, and in simulation the counter was preset to "1111111.."
until the first active reset to the initial value was performed. Compiling
and simulating the same VHDL code for the 8K works fine!
Very strangely, the preset was synchronous and the preset condition was
active at power-up, yet the preset didn't happen. Only with an asynchronous
preset did it work!
This problem was reported to ALTERA, but never solved AFAIK.

Perhaps ALTERA should no longer invest in its own VHDL compiler, but use
this money to program good interfaces for third-party tools (like Synplify
or Leonardo Express) and include this software in VHDL-enabled
environments...

CU, CS






Article: 17469
Subject: Re: NRZ Deserializing in Virtex
From: murray@pa.dec.com (Hal Murray)
Date: 30 Jul 1999 08:45:43 GMT
Links: << >>  << T >>  << A >>

In article <7nk6gp$603$1@pyrite.mv.net>,
 "Eirik Esp" <eirik@proxima.mv.com> writes:
> I am looking at trying to extract clock & data from a ~100 MHz NRZ serial
> data stream.  I was just wondering if anyone has tried this and had any
> successes or failures to report?  Any suggestions?

I don't think you have specified quite enough info to get a crisp answer.

The standard problem with recovering an NRZ signal is what to do
if you have a long series of bits with no transitions.  Suppose you
send 100 "1" bits.  How does the receiver tell if you sent 99,
100, or 101 bits?

The usual answer is for the transmitter to promise not to do that,
or at least not do it in any case where it matters.


The usual way to implement the receive side is to use a clock that
runs at 8x or 16x the transmitter bit clock rate.  If you spend an hour
or so with some graph paper you can probably work it out.

If/when you see a high-low or low-high transition, you know that is
the boundary between two bit cells.  But you only know it within 1 receive
clock period.  So the logic goes something like this:

  Wait for a transition.  That's the start of a bit cell.
  Wait a 1/2 bit time.  Grab a data sample.
  Wait another bit time, watching for a transition.
    If you see a transition, you now have an up to date
    reference for the edge of a bit cell.  Resync your counter.
  Grab the next bit sample.
  Loop back.


If you are processing something like standard RS232 async characters
(1 start bit, 8 data bits, 1 stop bit) then you have to coast at most
8 or 9 bits.  That's pretty easy to do with crystals on both ends.


The other question is how clean is your input data signal?  Does the
signal look like a square wave on a scope, or more like an eye pattern?
If you have a fuzzy eye pattern, you probably need an analog PLL to do
the clock recover.  If you have a clean signal, simple digital logic
will work OK.

Another thing to consider is how reliable/solid your design
has to be.  Are you building something that just has to work
for an hour or two so you can collect some data and finish your
thesis?  Or are you building a product that will go into millions
of homes and your phone will ring every time it drops a character?


If the input signal is pretty clean, you might be able to run at 100 MHz
with a modern FPGA.  Suppose you can do it with 4 samples per bit time.

You can get that by using a 4x clock or both edges of a 2x clock.

Or you could use both edges of a 1x clock and a 1/4 bit time delay.
The idea is to feed in the data signal and the delayed signal, and
clock both on both edges of the 1x clock.  Then run all 4 samples into
a state machine that decodes things.

Don't try to use internal routing to get the 1/4 bit time delay.
FPGAs consider that sort of approach to be evil.  I'd use an external
delay line.  At 100 MHz, a bit cell is 10 ns so a 1/4 bit cell is 2.5 ns.
That's only a few inches of trace on a PCB.  Simple and solid.  [If
you are only building 1 board, consider using a short hunk of coax.
That way you can adjust the length/time if needed.]

It may take a lot of paper/pencil to work out how to do things.  It will
be pretty obvious after you see it.

Work on the 8x or 16x case first.  The state machine has 4 or 5 bits
of internal state.  It gets 1 input data sample.  It puts out 2 bits:
the 0/1 for the output data sample, and another bit to say when that
sample is valid.  (Think of it as a clock enable on the shift register.)

The usual way to think of the internal state is 1 bit to remember
the previous input data value and n bits to count the number of
ticks since the last bit cell edge.  Whenever you see a change
you can reset the last-edge count.  If no change, the state
advances a step.  If it gets to the end of a bit cell without seeing
a change, you assume there is no change and reset the counter.
[Be sure to consider the case where you see a transition early in a bit
cell.  It's probably just a late transition from the previous bit.]
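The counter-based state machine just described can be modeled in a few lines (illustrative Python; the function name and the 8x oversampling factor are assumptions, and a hardware version would be a small synchronous state machine feeding a shift-register clock enable):

```python
def recover_bits(samples, oversample=8):
    """Digital clock recovery over an oversampled NRZ stream: a sketch
    of the counter-based state machine described above.  `samples` is
    one 0/1 sample per receive-clock tick, `oversample` ticks per bit."""
    bits = []
    prev = samples[0]
    count = 0                      # ticks since the last bit-cell edge
    for s in samples[1:]:
        if s != prev:
            count = 0              # transition: resynchronize the counter
        else:
            count += 1
        if count == oversample // 2:
            bits.append(s)         # sample in the middle of the bit cell
        if count == oversample - 1:
            count = -1             # end of cell, no edge seen: free-run
        prev = s
    return bits
```

Run on an ideal 8x-sampled stream carrying the bits 1, 0, 1, 1, this returns `[1, 0, 1, 1]`; a real input would have jitter on the edge positions, which the resynchronization on every transition absorbs.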

----

If you can find any documentation for the Zilog 8030/8530 SCC chip
that may help you understand things.  That chip was used in the first
several generations of Macs.  A lot of people know how to use/abuse
it.

-- 
These are my opinions, not necessarily my employers.
Article: 17470
Subject: Semi-deterministic behaviour in FPGA's
From: "Mark Grindell" <petejackson7@hotmail.com>
Date: Fri, 30 Jul 1999 10:07:47 +0100
Links: << >>  << T >>  << A >>
Dear all,

I realize that a lot of effort has gone into the characterization of flip
flops and timed circuits, in many cases for the sake of understanding the
nature of metastable states. Of course, this is to allow design practices to
reduce the probability of these states and to increase the probability of
predictable behavior.

However, in some cases, it might be desirable to have non-deterministic
behavior. One application for this might be the creation of random keys in
an encryption circuit.

I can imagine two particular scenarios where this might be possible, and I
wonder if anyone might care to comment on them.

It is possible to create cross-coupled structures in which two or more logic
cells act as an oscillator. If the feedback path is fairly complex,
distributed over several paths in the chip, the operating frequency might be
quite a sensitive function of the operating temperature and the chip
manufacturing batch number, and so on. Two such oscillators set up in
different regions of the chip might be arranged to together create a more
complex periodic signal which could be further processed.

The other scenario is where the behavior of an individual flip flop, close
to its metastable state is exploited. In this way, some sort of pseudo
random output might be achievable. I am not so sure about this method, as
without fairly precise timing analysis, it might be quite hard to operate
the flip flop in the precise region of operation neccesary.

Any comments?

I would expect at least some comments as to this kind of design strategy
being quite improper, and random number generation being advisable on the
basis of algorithmic design. I'm not sure either way.

Thanks.




Article: 17471
Subject: Re: Semi-deterministic behaviour in FPGA's
From: Rickman <spamgoeshere4@yahoo.com>
Date: Fri, 30 Jul 1999 08:58:51 -0400
Links: << >>  << T >>  << A >>
Mark Grindell wrote:
> 
> Dear all,
> 
> I realize that a lot of effort has gone into the characterization of flip
> flops and timed circuits, in many cases for the sake of understanding the
> nature of metastable states. Of course, this is to allow design practices to
> reduce the probability of these states and to increase the probability of
> predictable behavior.
> 
> However, in some cases, it might be desirable to have non-deterministic
> behavior. One application for this might be the creation of random keys in
> an encryption circuit.
> 
> I can imagine two particular scenarios where this might be possible, and I
> wonder if anyone might care to comment on them.
> 
> It is possible to create cross-coupled structures in which two or more logic
> cells act as an oscillator. If the feedback path is fairly complex,
> distributed over several paths in the chip, the operating frequency might be
> quite a sensitive function of the operating temperature and the chip
> manufacturing batch number, and so on. Two such oscillators set up in
> different regions of the chip might be arranged to together create a more
> complex periodic signal which could be further processed.
> 
> The other scenario is where the behavior of an individual flip flop, close
> to its metastable state is exploited. In this way, some sort of pseudo
> random output might be achievable. I am not so sure about this method, as
> without fairly precise timing analysis, it might be quite hard to operate
> the flip flop in the precise region of operation neccesary.
> 
> Any comments?
> 
> I would expect at least some comments as to this kind of design strategy
> being quite improper, and random number generation being advisable on the
> basis of algorithmic design. I'm not sure either way.
> 
> Thanks.

I don't know that it would be improper to use hardware to generate
random numbers, but it is very difficult. When you need random numbers,
you need them to have certain properties such as being without bias. In
practice it is very difficult to build a true random number generator
which will not introduce a bias of some sort. I have known of several
attempts that did not work correctly because of this. One was based on a
diode generating thermal noise which was amplified and then converted to
digital to produce a number. They found that the result was slightly
more likely to produce positive numbers vs. negative numbers. Every
other attempt I have heard of had similar problems. 

I believe I read that Intel was going to add such a feature to one of
their CPUs due out soon (or was it the PIII, which is out now?). Has anyone
heard of this?


-- 

Rick Collins

rick.collins@XYarius.com

remove the XY to email me.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 17472
Subject: Re: Semi-deterministic behaviour in FPGA's
From: Ray Andraka <randraka@ids.net>
Date: Fri, 30 Jul 1999 09:32:31 -0400
Links: << >>  << T >>  << A >>
Rather than using diodes,  async oscillators, metastability  etc for the random
number generation,  use one of these methods to seed an LFSR on start-up.  The
LFSR will give you a uniform distribution (well slightly biased toward '0'; the
all '1's state is not used) and good spectral uniformity if you take only one bit
per clock from it.  The uniformly distributed value can be transformed to another
distribution by a variety of techniques.  Make the LFSR long enough (more than 60
bits) so that it doesn't repeat in your lifetime; that will make the likelihood of
generating the same sequence very slim.  This also has the advantage of being
repeatable if you capture and store the starting seed.  One of the easier ways to
generate the seed is to use a time of day clock, which will avoid the problems
inherent in the other methods described in this thread.
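A software model of the seeded-LFSR approach (illustrative Python; the 64-bit tap set is a commonly published maximal-length polynomial, but verify the taps for your own width and bit-numbering convention):

```python
import time

def lfsr_bits(seed, taps=(64, 63, 61, 60), width=64):
    """Fibonacci LFSR stream, one output bit per clock: a sketch of the
    approach described above.  `seed` must be nonzero."""
    state = seed & ((1 << width) - 1)
    assert state != 0, "the all-zero state locks up an XOR LFSR"
    while True:
        # XOR the tapped bits to form the feedback bit.
        fb = 0
        for t in taps:
            fb ^= (state >> (width - t)) & 1
        out = state & 1
        state = (state >> 1) | (fb << (width - 1))
        yield out

# Seed once from a convenient varying source (time of day here, as
# suggested above), then draw one bit per clock.
gen = lfsr_bits(int(time.time()) | 1)
sample = [next(gen) for _ in range(8)]
```

With maximal-length taps the stream only repeats after 2^width - 1 clocks, and rerunning with a stored seed reproduces the exact same sequence.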

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 17473
Subject: Re: Semi-deterministic behaviour in FPGA's
From: Ray Andraka <randraka@ids.net>
Date: Fri, 30 Jul 1999 09:38:24 -0400
Links: << >>  << T >>  << A >>
One more thing, if you use xilinx 4k,spartan or virtex, a long LFSR can be made with
very few CLBs by using the CLB RAM as a 16 bit shift register.  You can run these
little suckers at over 100MHz.  For example, I did a gaussian noise generator that
used 64 differently seeded LFSRs running at 80 MHz in a 4025E-2 a few years ago.  Yep,
the design had 64 129-bit LFSRs, plus an adder tree to add a bit from each one
(central limit theorem), plus a pair of 8x8 parallel multipliers for scaling.
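The central-limit construction can be sketched like this (illustrative Python, with the language's PRNG standing in for the 64 independently seeded LFSRs):

```python
import random
import statistics

# Central-limit sketch of the noise generator described above: each
# "clock", sum one bit from each of 64 independent uniform bit streams.
random.seed(12345)  # fixed seed so the sketch is repeatable

def gaussian_sample(n_streams=64):
    # Sum of n i.i.d. {0,1} bits: mean n/2, variance n/4, and by the
    # central limit theorem the distribution is close to Gaussian.
    return sum(random.getrandbits(1) for _ in range(n_streams))

samples = [gaussian_sample() for _ in range(10000)]
mean = statistics.mean(samples)    # expect about 32
stdev = statistics.stdev(samples)  # expect about 4 (= sqrt(64)/2)
```

In hardware the per-clock sum is just the adder tree mentioned above, and the multipliers then scale and offset the result.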

Ray Andraka wrote:

> Rather than using diodes,  async oscillators, metastability  etc for the random
> number generation,  use one of these methods to seed an LFSR on start-up.  The
> LFSR will give you a uniform distribution (well slightly biased toward '0'; the
> all '1's state is not used) and good spectral uniformity if you take only one bit
> per clock from it.  The uniformly distributed value can be transformed to another
> distribution by a variety of techniques.  Make the LFSR long enough (more than 60
> bits) so that it doesn't repeat in your lifetime, that will make the likelihood of
> generating the same sequence very slim.  This also has the advantage of being
> repeatable if you capture and store the starting seed.  One of the easier ways to
> generate the seed is to use a time of day clock, which will avoid the problems
> inherent in the other methods described in this thread.
>
> --
> -Ray Andraka, P.E.
> President, the Andraka Consulting Group, Inc.
> 401/884-7930     Fax 401/884-7950
> email randraka@ids.net
> http://users.ids.net/~randraka



--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 17474
Subject: Re: Microcomputer buses for use inside FPGA/ASIC devices?
From: "Wade D. Peterson" <peter299@maroon.tc.umn.edu>
Date: Fri, 30 Jul 1999 08:46:47 -0500
Links: << >>  << T >>  << A >>
Hans <nospam_ees1ht@ee.surrey.ac.uk> wrote in message
news:7novor$3vd$1@info-server.surrey.ac.uk...
> There is also an asynchronous SoC Bus  called Marble. It is developed by
> Manchester University for the asynchronous Amulet processor family.
>
> Sorry no link,
>
> Hans.

Hi Hans:

Thanks for the info.  I don't think I've found an asynchronous bus yet.

I've added 'Marble' to my list of SoC buses at
http://www.silicore.net/uCbusum.htm under 'known, but not found'.  Maybe
somebody else has some information about this one.

I also looked around on the University of Manchester's website, but couldn't
find anything.  Their search engine seems to be broken, so I'll try later with
them.

By the way...I also checked Steve Guccione's list of FPGA-based microprocessors
for the 'Amulet' processor at http://www.io.com/~guccione/HW_list.html but he
didn't list it.  If anybody is interested in FPGA-based microprocessors,
this is an excellent resource.

--
Wade D. Peterson
Silicore Corporation
3525 E. 27th St. No. 301, Minneapolis, MN USA 55406
TEL: (612) 722-3815, FAX: (612) 722-5841
URL: http://www.silicore.net/  E-MAIL: peter299@maroon.tc.umn.edu




