Messages from 104550

Article: 104550
Subject: Re: RS232 transmitter core--Xilinx xapp223(Chapman's macro)
From: "Vivek Menon" <vivek.menon79@gmail.com>
Date: 29 Jun 2006 11:41:29 -0700
Matter resolved.
Ken pointed me to the latest UART core being shipped with the Picoblaze
module for Virtex-II Pro designs. I had some trouble downloading the
picoblaze module, but thanks to Ed, the UART core is working
successfully on my board.
Thanks once again,
Vivek


Ray Andraka wrote:
> Aurelian Lazarut wrote:
>
> > run map with -ir (ignore RLOCs)
> > or if you use ISE, change map propreties by unchecking "use RLOC...."
> > Aurash
> >
>
> I don't believe that will work.  I think the RLOCs are still parsed with
> that switch set, in which case it will error out with the same complaint.


Article: 104551
Subject: Re: Synplify prepending Z's to top level signal names in Verilog
From: "Arnaud" <arivaton@gmail.com>
Date: 29 Jun 2006 11:47:54 -0700
I've had the same error too; it comes when you have specified false-path
constraints on some of your I/O signals.

If using Synplify Pro with the ucf constraint file, a workaround is to
use the output ndf file as the input constraint file to the PnR tool.
Indeed, the signals <signal> renamed <signalz> in the netlist are also
renamed <signalz> in the ndf file. The modified names are then
transparent to the PnR tool (at least in Xilinx ngdbuild, map and par).

Personally, I have written a small sed script which changes the names
back in the netlist.

Hope this helps.

jacob.bower@gmail.com wrote:
> Oh well, I guess I'm at the mercy of Synplicity and/or my perl skills.
>
> Thanks for trying though,
> - Jacob
>
> John_H wrote:
> > I tried a few workaround possibilities with no joy.  It looks like you're
> > stuck with the Z (or z).  \Literals, no.  Attribute syn_keep, no.  Attribute
> > syn_edif_scalar_format in three flavors, no.  IBUF instantiated, no.
> >
> > Sounds like it's time for an enhancement request!  I'd love to know if there
> > *was* a workaround outside of filtering the edif.
> >
> > <jacob.bower@gmail.com> wrote in message
> > news:1151521299.696601.186170@i40g2000cwc.googlegroups.com...
> > > Hi,
> > >
> > > Does anyone know of a way to stop Synplify from pre-pending a "Z" to
> > > names of top-level entity I/O signals which begin with an underscore
> > > ("_") when generating EDIF?
> > >
> > > Thanks.
> > > - Jacob
> > >


Article: 104552
Subject: Re: NCO Clock driven Designs in FPGA
From: Ben Jackson <ben@ben.com>
Date: Thu, 29 Jun 2006 13:50:34 -0500
On 2006-06-29, rajeev <shuklrajeev@gmail.com> wrote:
> Can NCO
> output be used to drive the portion of the design ?

I got lazy and used a gated clock in a low-speed UART (9600 baud from a
100MHz sysclk).  Later I re-used it and wanted to ramp up the speed and
had all kinds of problems.  Finally I observed the UART behavior on the
scope and found that the baudclk was correct but the behavior of the
logic clocked by it was inconsistent, appearing to miss many edges.  Of
course I wasted a lot of time thinking my NCO was wrong before I realized
what was going on.  I turned it into a clock enable instead of a clock
and suddenly it worked fine (at 921600 bps, even).
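
(For reference, the clock-enable version of such a baud generator is
tiny.  A minimal sketch, with illustrative names and a made-up phase
increment for ~9600 baud from 100 MHz; the accumulator carry-out becomes
a one-cycle enable instead of a derived clock:

    module baud_enable #(
        // 9600 * 2^32 / 100e6 ~= 412317; round to taste
        parameter [31:0] PHASE_INC = 32'd412317
    ) (
        input  wire sysclk,
        output reg  baud_en     // one-sysclk-wide pulse at the baud rate
    );
        reg [31:0] phase = 32'd0;
        // 33-bit sum: the carry out of the NCO is the enable tick
        always @(posedge sysclk)
            {baud_en, phase} <= phase + PHASE_INC;
    endmodule

The UART logic then stays entirely in the sysclk domain and simply
qualifies its state transitions with "if (baud_en)".)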

Next I plan to touch a hot stove to see if it's really a bad idea like
everyone says...

-- 
Ben Jackson
<ben@ben.com>
http://www.ben.com/

Article: 104553
Subject: Re: Altium Designer LiveDesign Evaluation Kits (once again)
From: Kolja Waschk <kawk@20060629.ixo.de>
Date: Thu, 29 Jun 2006 21:04:11 +0200
Hi,

> * do the boards work with chip vendor software (ISE & Quartus) flows?

I own the Cyclone one and I'm very happy with it; can't tell about the
Xilinx version.

The Cyclone board has a 2x13 pin header to connect to a PC's parallel
port. It then behaves as if you used a ByteBlaster-compatible cable in
JTAG mode.  It works with Altera's Quartus and NIOS IDE out of the box.
NIOS IDE complains that you should use a USB Blaster Rev. B for better
compatibility, but I haven't had any problems with the parallel cable.
With some rewiring you can attach a USB Blaster as well (see below).

> board bricked after 30 days??

No. 

>  re-program boards every time i turn them on?

Yes, there is no configuration device. The pads of the FPGA that would be
used to connect it to an EPCS device are tied to ground (and out of reach
under the BGA).

> * same cable for both boards? can i remove the cable while the board is
> on, so i can program the other board?

You can remove/switch the cable without interrupting operation on the
board.

> to summarize, can I buy these boards instead of ordering a "Starter
> kit" board from Xilinx and a "NIOS II" board from Altera? what would i
> be missing?

The kits from Altera usually have some more connectors, interfaces with
drivers (USB, Ethernet), RAM (SDRAM, not only SRAM) and Flash memory.
1 MB of SRAM is okay if you want to run small programs or an RTOS like
uC/OS-II or RTEMS, but it isn't sufficient for a larger OS like uClinux.

There is a picture of my Cyclone board with an air-wired Ethernet PHY, a
USB device interface (ISP1106-based, logic from OpenCores.org) and a
USB-Blaster-compatible interface here:

  http://www.ixo.de/info/usb_jtag/eb2_usbjtag_eth.jpg 

It runs RTEMS on NIOS2.  But it has to be reconfigured from the host
after each power-up, and is therefore really only suitable for
development, not as a standalone device.

Regards,
Kolja

-- 
mr. kolja waschk - haubach-39 - 22765 hh - germany
fon +49 40 889130-34 - fax -35 - http://www.ixo.de



Article: 104554
Subject: Re: Problem to extend Xilinx GSRD Design
From: Ed McGettigan <ed.mcgettigan@xilinx.com>
Date: Thu, 29 Jun 2006 12:39:11 -0700
MM wrote:
> Eric,
> 
> I am in the same boat. The workaround I found was to replace one of the
> plb_m1s1 cores with the standard plb_v34. So far this seems to have worked
> but I haven't finished the testing yet...
> 
> MPMC2 approach Ed mentioned would probably be a more natural approach but I
> didn't want to mess with replacing the memory controller as I wasn't sure it
> was fully compatible with the GSRD design...

The latest (June 1st) release of the MPMC2 code base includes the
equivalent of the original GSRD design using the new MPMC2 controller.

We haven't had time to update the GSRD page to note this release yet.
You want to start with the project/ml403_ddr_idpoc_100mhz_gsrd.zip file.  The
"_idpoc_" part denotes that the design uses the following interfaces:

   i = ISPLB       (connects to PPC405 I-side PLB)
   d = DSPLB       (connects to PPC405 D-side PLB)
   p = PLB master  (connects to general PLB arbiter)
   o = OPB master  (connects to general OPB arbiter)
   c = CDMAC       (connects to the TEMAC)

You should really upgrade to the new code base; there is a lot more that
you can do with this version.

Ed McGettigan
--
Xilinx Inc.

Article: 104555
Subject: How to evaluate the space efficiency of a historic design.
From: "Paul Marciano" <pm940@yahoo.com>
Date: 29 Jun 2006 12:42:18 -0700
Before I start let me say I'm not sure this is either an intelligent
question or an answerable one... so please be gentle.

I'm looking at implementing an 8-bit processor clone on an FPGA (purely
academic exercise - I know there are free IP cores available) and am
wondering how to judge the space efficiency of my design (as opposed to
speed efficiency).

According to numbers found on the web the MOS 6502 has 9000
transistors.

I haven't written a single line of RTL yet, but say I implemented a
100% functional equivalent in a 200K gate Spartan3, and it uses up 25%
of the resources...  How would you judge that?

Would you just take your own experience and say, "That's 3x too big...
try again".

Would knowing it can be done in 9000 custom placed transistors help at
all in judging the relative efficiency of the FPGA implementation?


Regards,
Paul.


Article: 104556
Subject: Re: How to evaluate the space efficiency of a historic design.
From: "Paul Marciano" <pm940@yahoo.com>
Date: 29 Jun 2006 12:50:08 -0700
Just to follow up on my own post, Steve Knapp said, in 1995:

> This design is probably the venerable 6502 processor used in the Apple II.
> We have also implemented this design from VHDL.  It fits in 90% of a
> 6,000-gate Xilinx XC8106 FPGA.

That's a great data point, but the original question still stands:

> Would knowing it can be done in 9000 custom placed transistors help at
> all in judging the relative efficiency of the FPGA implementation?


Regards,
Paul.


Article: 104557
Subject: Re: How to evaluate the space efficiency of a historic design.
From: mk <kal*@dspia.*comdelete>
Date: Thu, 29 Jun 2006 20:01:51 GMT
On 29 Jun 2006 12:42:18 -0700, "Paul Marciano" <pm940@yahoo.com>
wrote:

>Before I start let me say I'm not sure this is either an intelligent
>question nor an answerable one... so please be gentle.
>
>I'm looking at implementing an 8-bit processor clone on an FPGA (purely
>academic exercise - I know there are free IP cores available) and am
>wondering how to judge the space efficiency of my design (as opposed to
>speed efficiency).
>
>According to numbers found on the web the MOS 6502 has 9000
>transistors.
>
>I haven't written a single line of RTL yet, but say I implemented a
>100% functional equivalent in a 200K gate Spartan3, and it uses up 25%
>of the resources...  How would you judge that?
>
>Would you just take your own experience and say, "That's 3x too big...
>try again".
>
>Would knowing it can be done in 9000 custom placed transistors help at
>all in judging the relative efficiency of the FPGA implementation?
>
>
>Regards,
>Paul.

Let me do a rough calculation here:
9000 transistors (assuming the 6502 had no memory) can be used to build
2250 2-input NAND gates.  The S3-200 has 4320 logic cells (1 flop + 1
lookup table each); assuming a 4-input lookup table is around 3 NAND2s
and a flop is around 5 NAND2s, I'd say it has 34560 equivalent NAND2s,
so 25% would be 8640 gates, and that would be around 4x too big.  Again,
this is very rough, and I am sure lots of people would disagree, but I
think it's a reasonable starting point.
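
(Spelling out the arithmetic, on the usual assumption of four
transistors per 2-input CMOS NAND: 9000 / 4 = 2250 NAND2 for the 6502;
4320 x (3 + 5) = 34560 NAND2 for the S3-200; 25% of that is 8640; and
8640 / 2250 = 3.84, hence "around 4x".)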

Article: 104558
Subject: Re: Xilinx BUFGMUX Setup Time requirement clarification needed
From: "Eric Crabill" <eric.crabill@xilinx.com>
Date: Thu, 29 Jun 2006 13:24:43 -0700
Hello,

I compared the text in the Spartan-3E data sheet to that in the Spartan-3
data sheet.  Here's what the Spartan-3 data sheet had to say:

> The two clock inputs can be asynchronous with regard to each other,
> and the S input can change at any time, except for a short setup time
> prior to the rising edge of the presently selected clock (I0 or I1).
> Violating this setup time requirement can result in an undefined runt
> pulse output.

Looks like that information somehow got dropped when the Spartan-3E data
sheet was created, but you'll also notice the attempt to provide more
information about where to find the parameter.  You are right, it is Tgsi.

Eric

"Uwe Bonnes" wrote:
> e.g. the XC3SE Datasheet ds312 tells on page 59 in the Clock
> Buffers/Multiplexers  section:
>
> "As specified in DC and Switching Characteristics (Module 3), the select
> input has a setup time requirement."
>
> This is probably Tgsi on page 139.
>
> What can happen if the setup time is not met ("end of the world as we
> knew it"? :-))?  Does the select signal need to be aligned to both
> input clock edges?  If it needs to be aligned to both clocks, how does
> one achieve that?  And if there are such harsh requirements on the
> select signal, what's the whole point of the BUFGMUX?
>
> Or does the select signal only need to be aligned with the active
> edge?  Simply latching the enable signal with the BUFGMUX output clock
> and feeding the latch output to the select of the BUFGMUX would do the
> job (besides the case where the active clock is slow, where the time
> to the next clock edge would pass before the clocks would switch).



Article: 104559
Subject: Pc and xcv200e doesn't talk,not exactly the right cable maybe..
From: "blisca" <blisca@tiscali.it>
Date: Thu, 29 Jun 2006 22:04:53 +0100
I'm still here.
Thanks to this newsgroup I finally built a Parallel Cable III that works
fine with CPLDs like the XC95144XL (3.3 V): I can recognize it, read it
back, erase, blank-check and write it.
Then I dared to connect it to a (scrapped, as ever) FPGA, a Virtex
XCV200E, but the boundary-scan chain does not see it at all.

What I did was this:

I built a level shifter for TDO, because the cable is not expected to
work with such low levels.  This two-BJT level shifter works fine even
with a 4 MHz square wave, and I think that is faster than any signal
could ever move through the parallel port (is that true?).

I connected just one 1.8 V supply to VCCINT of the FPGA (pin A9).  It is
not easy to test (soldering wires on a BGA is even worse...), but it
looks like that should be enough for the core; correct, or do I need
more?

I connected the 3.3 V VCCO at two points (B12 and A13) and ground at
three points (A1, J1, N12).

I connected the JTAG signals TCK, TDI, TMS and TDO (this last one
through the level shifter).

I connected PROGRAM fixed at 3.3 V; then I tried connecting it to TMS;
same result...

I left M0 and M2 open (they are high) and tied M1 to ground, to select
boundary-scan mode.

Using the debug chain utility I verified that the signals are working.

Thank you to everyone in the group who will help me, or will just read
this.

Diego




Article: 104560
Subject: Re: Problem to extend Xilinx GSRD Design
From: "MM" <mbmsv@yahoo.com>
Date: Thu, 29 Jun 2006 17:17:54 -0400
Thanks a lot Ed!  It would be nice if you at least put a notice on the
GSRD page; I've been waiting for this new release for quite a while...
Anyway, I would still be interested to know whether the fix I applied to
the original design would be expected to work.

Thanks,
/Mikhail



"Ed McGettigan" <ed.mcgettigan@xilinx.com> wrote in message
news:e81ac2$8q33@cliff.xsj.xilinx.com...
> MM wrote:
> > Eric,
> >
> > I am in the same boat. The workaround I found was to replace one of the
> > plb_m1s1 cores with the standard plb_v34. So far this seems to have
> > worked but I haven't finished the testing yet...
> >
> > MPMC2 approach Ed mentioned would probably be a more natural approach
> > but I didn't want to mess with replacing the memory controller as I
> > wasn't sure it was fully compatible with the GSRD design...
>
> The latest (June 1st) release of the MPMC2 code base includes the
> equivalent of the original GSRD design using the new MPMC2 controller.
>
> We haven't had time to update the GSRD page to note this release yet.
> You want to start with the project/ml403_ddr_idpoc_100mhz_gsrd.zip file.  The
> "_idpoc_" part denotes that the design uses the following interfaces:
>
>    i = ISPLB       (connects to PPC405 I-side PLB)
>    d = DSPLB       (connects to PPC405 D-side PLB)
>    p = PLB master  (connects to general PLB arbiter)
>    o = OPB master  (connects to general OPB arbiter)
>    c = CDMAC       (connects to the TEMAC)
>
> You should really upgrade to the new code base; there is a lot more that
> you can do with this version.
>
> Ed McGettigan
> --
> Xilinx Inc.



Article: 104561
Subject: Re: Stopping the clock for power management
From: "Gabor" <gabor@alacron.com>
Date: 29 Jun 2006 14:38:23 -0700

Ndf wrote:
> Hello,
>
> For a low power application I would like to stop the clock feed into an
> FPGA when entering "sleep mode".  Is this a common practice, or can it
> be dangerous?  And if it is dangerous, why?  Maybe a silly question,
> but I want to be sure about that!  I use Lattice XP parts.
>
>
>
> Thanks,
>
> Dan.

Some things to consider:

How do you exit "sleep mode"?

Does this require the clock you're stopping?

If so does the clock signal still exist in a portion of the design?

Were you considering using the FPGA to stop its own clock or
use an external component?  It may not be easy to stop the clock
internally if you need to meet a certain phase relationship with
external parts.  Normally gating off a global clock will require
adding some logic between the clock input pin and the global
buffer (this would not be the case for parts with dynamic clock
select resources, such as EC/ECP - I'm not sure if XP has these).
It may be possible to fix phase problems with a PLL, but again
if you stop the input to the PLL you'll need to reset it when you
start the clock again.  DLLs have similar issues.  Also, if you use
either of these, you'll have problems if you need to come up
operational within a few clock cycles of exiting "sleep mode".
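
(To illustrate the kind of logic that ends up between the clock pin and
the global buffer, here is the classic glitch-free gate as a sketch; in
an ASIC this is a library cell, and in an FPGA you would normally prefer
a dedicated clock-control primitive over building it from fabric:

    module clock_gate (
        input  wire clk_in,
        input  wire enable,     // may change at any time
        output wire clk_gated
    );
        reg en_lat;
        // transparent-low latch: 'enable' is sampled only while the
        // clock is low, so the AND below cannot emit a runt pulse
        always @(clk_in or enable)
            if (!clk_in)
                en_lat <= enable;
        assign clk_gated = clk_in & en_lat;
    endmodule

Restarting a PLL or DLL once the input clock returns is, as noted above,
a separate problem that this does not solve.)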

Good Luck,
Gabor


Article: 104562
Subject: Re: How to evaluate the space efficiency of a historic design.
From: Jim Granville <no.spam@designtools.co.nz>
Date: Fri, 30 Jun 2006 09:43:23 +1200
Paul Marciano wrote:

> Before I start let me say I'm not sure this is either an intelligent
> question nor an answerable one... so please be gentle.
> 
> I'm looking at implementing an 8-bit processor clone on an FPGA (purely
> academic exercise - I know there are free IP cores available) and am
> wondering how to judge the space efficiency of my design (as opposed to
> speed efficiency).
> 
> According to numbers found on the web the MOS 6502 has 9000
> transistors.
> 
> I haven't written a single line of RTL yet, but say I implemented a
> 100% functional equivalent in a 200K gate Spartan3, and it uses up 25%
> of the resources...  How would you judge that?
> 
> Would you just take your own experience and say, "That's 3x too big...
> try again".
> 
> Would knowing it can be done in 9000 custom placed transistors help at
> all in judging the relative efficiency of the FPGA implementation?

Not without taking a very large jump.

Your best space-efficiency measure is to compare like with like,
so have a quick look at the free IP cores you mention, and note
their LUT counts, in the same FPGA you will be using.

Good ones would be PicoBlaze (all variants), PacoBlaze, Lattice Mico8,
as they are all mature, and optimised for FPGA deployment.

Commercial IP cores also often openly spec their LUT/MHz, so they can
also be used as yardsticks.

Now, you CAN compare your 8-bit LUT results in a meaningful way.

For Minimal-Opcode-Cores, these make interesting reading :

The venerable MC14500, a Boolean Industrial Control Unit.
[ the core fits in 7 macrocells, in a CPLD ]

The Maxim MAX1464 - remarkably similar opcodes, but a 16-bit data space

The IEC 61131 IL language (assembler-like)
Example here:
http://www.3s-software.com/index.shtml?CoDeSys_IL


-jg



Article: 104563
Subject: Re: Stopping the clock for power management
From: Jim Granville <no.spam@designtools.co.nz>
Date: Fri, 30 Jun 2006 10:42:40 +1200
Ndf wrote:

 > Hello,
 >
 > For a low power application I would like to stop the clock feed into
 > an FPGA when entering "sleep mode".  Is this a common practice, or can
 > it be dangerous?  And if it is dangerous, why?  Maybe a silly
 > question, but I want to be sure about that!  I use Lattice XP parts.


You can stop the clock, and also reduce Vcc in some cases.

However, static Icc on these new FPGAs can be a real killer!

Look at some of the new power-control buses / chips appearing that are
designed to ramp the Vcc as the clock scales.
Natsemi's LP5550 is one example.
-jg


Article: 104564
Subject: Re: How to evaluate the space efficiency of a historic design.
From: "Tommy Thorn" <tommy.thorn@gmail.com>
Date: 29 Jun 2006 15:59:03 -0700
mk wrote:
> Let me do a rough calculation here:
> 9000 transistors assuming 6502 had no memory can be used to generate
> 2250 2 input NAND gates. S3200 has 4320 logic cells (1 flop + 1 lookup
> table); assuming a 4 input look up table is around 3 nand2 and a flop
> is around 5 nand2, I'd say it has 34560 equivalent nand2s so 25% would
> be 8640 gates and that would be around 4x too big; again very roughly
> and I am sure lots of people would disagree but I think it's a
> reasonable starting point.

Well, when your basic building block is the transistor, you can
implement a lot more logic per transistor than when it's just a NAND
gate.  Add to that the fact that setting 1 LUT = 3 NAND2 is really
unfair to the FPGA, as there will be lots of logic that doesn't come
near that utilization of the LUT.  Thus, I'd say that it's probably only
about 2x too big.

The real problem is the premise of comparing LUTs to transistors, and
you can in fact do much better than that.  After mapping, you are told
in much more detail how the resources were used, say how many were FFs,
LUT2s, LUT3s, etc.  Make an estimate of how many transistors you would
need for each (say a LUT2 is somewhere between a NAND and an XOR).

Don't forget to keep the original 6502's interfaces the same if you
want an accurate model.

Tommy


Article: 104565
Subject: Re: Stopping the clock for power management
From: mikeandmax@aol.com
Date: 29 Jun 2006 16:22:18 -0700
Ndf wrote:
> Hello,
>
> For a low power application I would like to stop the clock feed into an
> FPGA when entering "sleep mode".  Is this a common practice, or can it
> be dangerous?  And if it is dangerous, why?  Maybe a silly question,
> but I want to be sure about that!  I use Lattice XP parts.
>
> Thanks,
>
> Dan.
Hi Dan -
this is actually a 2-parter -
     LatticeXP devices have DCS blocks (Dynamic Clock Selection) as part
of the global clock networks, which can be used to deactivate selected
clocks, reducing current for that clock domain.
     LatticeXP and MachXO both also offer 'sleep mode', in that these
devices have built-in Flash memory for boot-up.  The sleep-mode pin
deactivates the core power supply and maintains voltage on the I/O,
which allows for a very low (less than 100 uA) standby current.
When exiting 'sleep mode' the core supply is re-activated, and the
device then auto-boots from the on-chip flash.

     The marketing term is "INSTANT ON", but we FAEs prefer "less than 1
millisecond". :)

So, yes, you can disable clocks to save some power while the device
remains active, or you can enter 'sleep mode' and save most of the
power.  Sleep mode is available on the 'C' version of each device in the
family.  The 'C' version has an on-board regulator, so you can run the
device from a single 3.3 V supply.  The core operates at 1.2 V (derived
from the on-chip regulator in the 'C' devices); we require 3.3 V for
VccAUX (used for thresholding and some housekeeping circuitry), and the
appropriate VccIO voltages.

thanks for asking -

Michael Thomas
Lattice SFAE - NY/NJ


Article: 104566
Subject: Re: Xilinx BUFGMUX Setup Time requirement clarification needed
From: "Peter Alfke" <peter@xilinx.com>
Date: 29 Jun 2006 16:36:07 -0700
Asynchronously switching between two unrelated clock frequencies is an
interesting task. A simple solution is described as the last item in my
"Six Easy Pieces" TechXcusives of many years ago. That circuit,
however, has one problem: it cannot switch away from a dead clock.
No problem when both clocks run continuously, but still a limit to the
generality of the solution.
Avoiding that little problem is extremely difficult, and several
generations of BUFGMUX circuits have unfortunately destroyed the basic
concept of asynchronous control in the laudable attempt to cover the
Achilles heel.  (Siegfried's Lindenblatt for the Germans: same concept;
as we know, both heroes sadly died because of their tiny problem area.)
If your clocks are always running, welcome to "Six Easy Pieces".
Peter Alfke, Xilinx
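
(For reference, the free-running-clocks switcher Peter alludes to is
usually drawn roughly as below; a minimal one-flop-per-side sketch, not
the TechXclusive's exact circuit, which adds a synchronizing stage per
side:

    module clk_switch (
        input  wire clk0,     // both clocks must keep running
        input  wire clk1,
        input  wire sel,      // 0 = clk0, 1 = clk1
        output wire clk_out
    );
        reg en0 = 1'b1;
        reg en1 = 1'b0;
        // each side enables itself only after the other has shut off;
        // updating on the falling edge keeps the output glitch-free
        always @(negedge clk0) en0 <= ~sel & ~en1;
        always @(negedge clk1) en1 <=  sel & ~en0;
        assign clk_out = (clk0 & en0) | (clk1 & en1);
    endmodule

The Achilles heel is visible in the handshake: if the currently selected
clock dies, its enable flop can never clear, so the other side can never
take over.)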
===========
Uwe Bonnes wrote:
> Hello,
>
> e.g. the XC3SE Datasheet ds312 tells on page 59 in the Clock
> Buffers/Multiplexers  section:
>
> "As specified in DC and Switching Characteristics (Module 3), the select
> input has a setup time requirement."
>
> This is probably Tgsi on page 139.
>
> What can happen if the setup time is not met ("end of the world as we
> knew it"? :-))?  Does the select signal need to be aligned to both
> input clock edges?  If it needs to be aligned to both clocks, how does
> one achieve that?  And if there are such harsh requirements on the
> select signal, what's the whole point of the BUFGMUX?
>
> Or does the select signal only need to be aligned with the active
> edge?  Simply latching the enable signal with the BUFGMUX output clock
> and feeding the latch output to the select of the BUFGMUX would do the
> job (besides the case where the active clock is slow, where the time
> to the next clock edge would pass before the clocks would switch).
>
> Some clarification would be fine.
>
> --
> Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
>
> Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
> --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------


Article: 104567
Subject: Re: Problem to extend Xilinx GSRD Design
From: Ed McGettigan <ed.mcgettigan@xilinx.com>
Date: Thu, 29 Jun 2006 16:55:02 -0700
MM wrote:
> Thanks a lot Ed! It would be nice if you at least put a notice on the GSRD
> page. I've been waiting for this new release for quite a while... Anyways, I
> would be still interested to know whether the fix I applied to the original
> design would be expected to work?

If the workaround is what you described as "replace[ed] one of the
plb_m1s1 cores with the standard plb_v34" then it probably still
works.  However, with the latest GSRD-with-MPMC2 design this isn't
needed at all, as you can build an MPMC2 core with bridges to PLB and
OPB instead.  This should result in a smaller and faster design than
what you describe.

I agree that it would be a good thing to note on the GSRD landing page.
I will ping the appropriate people on this and see if we can get it
added.

Ed McGettigan
--
Xilinx Inc.

Article: 104568
Subject: Re: How to evaluate the space efficiency of a historic design.
From: "JJ" <johnjakson@gmail.com>
Date: 29 Jun 2006 17:12:12 -0700

Paul Marciano wrote:
> Just to follow up on my own post, Steve Knapp said, in 1995:
>
> > This design is probably the venerable 6502 processor used in the Apple II.
> > We have also implemented this design from VHDL.  It fits in 90% of a
> > 6,000-gate Xilinx XC8106 FPGA.
>
> That's a great data point, but the original question still stands:
>
> > Would knowing it can be done in 9000 custom placed transistors help at
> > all in judging the relative efficiency of the FPGA implementation?
>
>
> Regards,
> Paul.

I'd ignore the transistor count; it's too meaningless.  Since the 6502
was NMOS, a single transistor could have been used as a pass gate, a
dotted-NOR gate input, or even a dynamic latch.  I don't recall that the
design was static, so dynamic nodes were quite likely.

If you look at an old die picture of a 6502, you will see that the ALU
and datapath are a big chunk of the chip, like they were for all the
8-bitters, maybe 1/3 overall.  The rest was the FSMs needed to make it
all work.  I would hazard a guess by taking the number of known register
bits in the chip's instruction-set-architecture documentation and
tripling it to include the actual FSMs.

I recall maybe five 8-bit and two 16-bit registers, but that's off the
top of my head; so a rough total of around 200 FFs, which is in the same
ballpark as a beefed-up PicoBlaze.

FWIW, if it takes any more than that, I'd look for a more FPGA-friendly
design.  If it actually has to run code and be completely correct, you
also have to include the known bugs exactly.

It's quite possible to build a far more powerful 32-bit PE with around
500 LUTs, not all FFs used.

John Jakson
transputer guy


Article: 104569
Subject: Re: How to evaluate the space efficiency of a historic design.
From: "M.Randelzhofer" <techseller@gmx.de>
Date: Fri, 30 Jun 2006 02:58:25 +0200
"Paul Marciano" <pm940@yahoo.com> schrieb im Newsbeitrag
news:1151610138.590584.37570@b68g2000cwa.googlegroups.com...
> Before I start let me say I'm not sure this is either an intelligent
> question nor an answerable one... so please be gentle.
>
> I'm looking at implementing an 8-bit processor clone on an FPGA (purely
> academic exercise - I know there are free IP cores available) and am
> wondering how to judge the space efficiency of my design (as opposed to
> speed efficiency).
>
> According to numbers found on the web the MOS 6502 has 9000
> transistors.
>
> I haven't written a single line of RTL yet, but say I implemented a
> 100% functional equivalent in a 200K gate Spartan3, and it uses up 25%
> of the resources...  How would you judge that?
>
> Would you just take your own experience and say, "That's 3x too big...
> try again".
>
> Would knowing it can be done in 9000 custom placed transistors help at
> all in judging the relative efficiency of the FPGA implementation?
>
>
> Regards,
> Paul.
>

It all depends on logic optimization.
The best optimization can be done at the transistor level,
so an FPGA implementation will use lots of overhead gates.
But in my estimation, several 6502s should fit into an XC3S200...

Should the 6502 core be clock-cycle compatible?

If yes, it's hard work:
e.g. connect an original 6502 to an FPGA, and develop your core by
running test programs concurrently on the original and on your core.
Detect all anomalies, and correct them.
If your core works perfectly, study it, and throw it away.
Then start from scratch, optimizing the datapath and FSMs.

If not, you can also consider a PicoBlaze 6502 soft emulation for
minimum 'gate count'.
The program space of an 18 kbit block RAM should be enough, or cascade
several PicoBlazes.

MIKE

-- 
www.oho-elektronik.de
OHO-Elektronik
Michael Randelzhofer
FPGA and CPLD mini modules
Klein aber oho! (small but mighty)




Article: 104570
Subject: Re: Altium Designer LiveDesign Evaluation Kits (once again)
From: Mark McDougall <markm@vl.com.au>
Date: Fri, 30 Jun 2006 11:10:39 +1000
burn.sir@gmail.com wrote:

> I need two identical boards with Xilinx and Altera parts on them for
> some "fun" at home. I found this page on the net

If you buy the Nanoboard, you can get plug-in FPGA modules for both 
Altera and Xilinx devices. You also have on-board configuration flash, 
if you require that.

Using Altium's software, you can seamlessly target the same design to 
both Altera and Xilinx devices, something which I've actually done.

What sort of designs will you be "playing" with?

Regards,

-- 
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266

Article: 104571
Subject: Re: Problem to extend Xilinx GSRD Design
From: "MM" <mbmsv@yahoo.com>
Date: Thu, 29 Jun 2006 21:42:14 -0400
"Ed McGettigan" <ed.mcgettigan@xilinx.com> wrote in message
news:e81pbr$90u1@cliff.xsj.xilinx.com...
> If the workaround is what you described as "replace[ed] one of the
> plb_m1s1 cores with the standard plb_v34" then it probably still
> works.

Yes, that's what I did. I was just a little worried that the 1 master/1
slave restriction resulted from some MPMC design limitation...

> However, with the latest GSRD with MPMC2 design this isn't
> needed at all as you can build a MPMC2 core with bridges to PLB and OPB
> instead.  This should result in a smaller and faster design than what
> you describe.

This is all great and I am definitely going to switch to the new design;
my only problem at the moment is that I had to modify the ll_temac core
to support RGMII mode, and now I will have to figure out whether I have
to do it all again or whether I can reuse my modified ll_temac core...
Have you done anything to the ll_temac?

Also, at some point in the near future I will need to have two TEMACs
connected to the same PPC... Could you tell me if this can be done in the
new MPMC2 based design?

Thanks,
/Mikhail





Article: 104572
Subject: Carry-chain based tapped delay line in Spartan3 - resolution? PVT variability?
From: "PeterC" <peter@geckoaudio.com>
Date: 29 Jun 2006 19:46:56 -0700

I'm considering various options to implement a tapped delay line in an
S3 device.

I believe that using the carry chain (and travelling through adjacent
slices) would give a much finer resolution than going through the LUTs.

I would like to know what granularity I can expect for this type of
delay line?

What is the MUX delay, local interconnect delay, process/voltage/temp
variability for each?
As far as I know, Xilinx publishes worst-case (max) LUT delay values
only.

Any info greatly appreciated.

PeterC.


Article: 104573
Subject: Re: Carry-chain based tapped delay line in Spartan3 - resolution?
From: John_H <johnhandwork@mail.com>
Date: Fri, 30 Jun 2006 03:19:16 GMT
PeterC wrote:
> I'm considering various options to implement a tapped delay line in an
> S3 device.
> 
> I believe that using the carry chain (and travelling through adjacent
> slices) would give a much finer resolution than going through the LUTs.
> 
> I would like to know what granularity I can expect for this type of
> delay line ?
> 
> What is the MUX delay, local interconnect delay, process/voltage/temp
> variability for each?
> As far as I know, Xilinx publish worst case (max) LUT delay values
> only. 
> 
> Any info greatly appreciated.
> 
> PeterC.
> 

In my Spartan3E starter kit (3s500E-4) I'm getting an average of 450 ps 
per LUT when using the fastest route between LUTs in a CLB.  The carry 
chain gets about 100 ps per tap.  The way I tend to use the delay is a 
broadside sample of the whole chain since muxing a signal back out tends 
to be up to the whim of the routing.

I do use a controlled injection of the source into the 8 LUTs at the 
bottom of my chain giving pretty strong repeatability there.  I used a 
method from XAPP 671 to get another half-LUT delay but I added it at the 
front end.  Bottom line: I have 0-7.5 LUTs worth of programmable delay 
averaging just over 200 ps for each half step along with about 100 ps 
per tap up the carry chain.

It was huge fun to do.  I recommend putting a frequency counter in your 
design to extract the precise timing.  It's great to see the results.

The changes over operating conditions aren't extreme.  I haven't seen 
the numbers on the shot of freeze spray but from what I observed early 
and the numbers I know now, I'd estimate less than 10% shift.  What I 
*have* seen that's cool is a strain-based change in delay: I push on the 
top of the chip with an eraser and the 5700 ps delay changes by about 
10-15 ps.  I considered the Spartan3E for use as a load cell!
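
(A skeleton of such a chain, for anyone who wants to experiment; the
names are illustrative, and the RLOC/placement constraints that keep the
chain vertical and the capture flops adjacent, as well as the
XAPP671-style injection trick, are omitted:

    // Spartan-3 carry-chain delay line with broadside capture
    module tap_delay #(parameter N = 64) (
        input  wire         clk,
        input  wire         sig_in,
        output wire [N-1:0] taps_q
    );
        wire [N:0] chain;
        assign chain[0] = sig_in;
        genvar i;
        generate
            for (i = 0; i < N; i = i + 1) begin : tap
                // with S=1, O simply follows CI: ~100 ps per MUXCY tap
                MUXCY mc (.O(chain[i+1]), .CI(chain[i]), .DI(1'b0), .S(1'b1));
                // broadside sample of the whole chain
                FD    fd (.Q(taps_q[i]), .C(clk), .D(chain[i+1]));
            end
        endgenerate
    endmodule

The captured thermometer code is then decoded to a tap position, and a
frequency counter, as suggested above, calibrates the real per-tap
delay.)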

Article: 104574
Subject: Re: Spartan3e starter kit vga mod
From: deunhido@gmail.com
Date: 29 Jun 2006 21:31:49 -0700

MikeJ wrote:
> I have added some notes to www.fpgaarcade.com describing how to modify the
> Spartan kit board to have 12 bit RGB output on the VGA port.
>
> Also included is some test pattern code / bitfile to produce some output.
>
> /MikeJ

Would it also be possible to use a solution as in XAPP154, "Virtex
Synthesizable Delta-Sigma DAC"?

http://direct.xilinx.com/bvdocs/appnotes/xapp154.pdf

The discrete part would be three caps, maybe lodged in between pins
1-6, 2-7 and 3-8? Are there readily available caps that would fit? What
would be the best value to use with the 270 Ohm resistors on the board?

Brian
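
(In the spirit of XAPP154, the synthesizable part is little more than an
accumulator whose carry out drives the pin; a sketch, not the app note's
exact code:

    module ds_dac #(parameter W = 12) (
        input  wire         clk,    // run well above the signal bandwidth
        input  wire [W-1:0] din,    // e.g. one 12-bit colour channel
        output reg          dout    // 1-bit pin into the external RC
    );
        reg [W-1:0] acc = {W{1'b0}};
        // accumulator carry-out: average duty cycle = din / 2^W
        always @(posedge clk)
            {dout, acc} <= acc + din;
    endmodule

As for the cap value: the RC corner is 1/(2*pi*R*C), so with the 270 Ohm
resistors a cap of a few nF puts the cutoff around 200 kHz; whether that
is clean enough at VGA pixel rates is exactly the open question.)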



