Messages from 152550

Article: 152550
Subject: Re: Lattice XP2 getting hot and/or reading 0's as JTAG ID
From: Antti <antti.lukats@googlemail.com>
Date: Mon, 12 Sep 2011 10:00:48 -0700 (PDT)
Lattice reply, the proper setting for no configuration:

CFG0=CFG1=INITN=0
PROGRAMN=1 (not clear if it must be driven to 1 or if leaving it open is enough)

Re JTAGID=0000000: we have some devices that WORK but read ID 0
and cannot be reprogrammed, and some devices that do not
work and cannot be reprogrammed either.

Maybe some OTP bit got flashed. There is nothing in the datasheet
about how the JTAG ID reads if OTP is set, or whether a flash erase
would fail if OTP is set.

Article: 152551
Subject: FPGA acceleration v.s. GPU acceleration
From: vcar <hitsx@163.com>
Date: Tue, 13 Sep 2011 20:50:39 -0700 (PDT)
I was an FPGA engineer before, and I used to think FPGA-based high
performance computing had a bright future. However, through my recent
projects I have found that a GPU is often more appropriate when there
is a need for acceleration.

In embedded systems, the FPGA co-processing plan:
Intel E6x5C

and the GPU co-processing plan:
AMD APU (with OpenCL support)

In desktop systems, the FPGA co-processing plan:
a full custom design, most likely based on a PCIe fabric

and the GPU co-processing plan:
NVIDIA CUDA (with basic OpenCV support)

If I choose FPGA co-processing, the algorithm must be specifically
optimized and the R&D time will be considerable. If I choose the GPU
plan, algorithm migration costs little time (even if the original is
Matlab code), and the acceleration performance will also be quite
good.

In conclusion, FPGA acceleration only suits certain fixed
applications, whereas in the real world many projects and many
algorithms are uncertain and change arbitrarily. At the same power
consumption, the GPU plan may give better results. For a concrete
project, I would consider a GPU or DSP first, and an FPGA last.

Does everybody agree?

Article: 152552
Subject: Re: FPGA acceleration v.s. GPU acceleration
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Wed, 14 Sep 2011 04:53:23 +0000 (UTC)
vcar <hitsx@163.com> wrote:

(snip)
> As a conclusion, the FPGA acceleration only suits some certain and
> fixed application. However in the real world , many projects and many
> algorithms are very uncertain and arbitrary. With same power
> consumption, GPU plan  may lead better results. For a concrete
> project, I will consider GPU or DSP, and FPGA at last.

There have been companies over the years selling FPGA based
accelerator hardware, but none have done very well.  

GPU acceleration takes advantage of the economy of scale for
graphical uses.  

FPGAs have not been very good for floating point, especially for
floating point addition.  

Some fixed point algorithms, such as dynamic programming and
convolution of very large data sets can possibly take advantage
of FPGA technology.  One problem that I know of requires 5e19
fixed point adds per day, or about 6e14 per second.  

> Do everybody agree?

I agree.

-- glen

Article: 152553
Subject: Re: FPGA acceleration v.s. GPU acceleration
From: "RCIngham" <robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com>
Date: Wed, 14 Sep 2011 05:46:18 -0500
If what you need is a computation off-load engine for a standard CPU, with
that CPU handling all the I/O tasks, then using a GPU would probably be the
most appropriate implementation methodology.

However, the phrase "horses for courses" always applies.
	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 152554
Subject: Has anybody used IOB_DLY_ADJ with S(2:0) input?
From: Svenn Are Bjerkem <svenn.bjerkem@googlemail.com>
Date: Wed, 14 Sep 2011 09:01:56 -0700 (PDT)
Hi,
I have a DDR input which every now and then gives me a nonfunctional implem=
entation due to unlucky input data clocking. Thought I would try to use the=
 variable delay input elements with the S control input, but I only got err=
or message from map that I did not connect to IO as expected.

I have instantiated the IOB_DLY_ADJ between the top level pin name recorded=
 in the ucf file and the input pin of the IDDR2 buffer in VHDL.

D >-- (I)IOB_DLY_ADJ(O) --> (D)IDDR2(Q0/Q1) =3D> internal DDR

Without the IOB_DLY_ADJ the design feeds data, but sometimes I am unlucky w=
ith my timing.=20

In the ucf file I have
NET "D" LOC =3D "AB2" |IOSTANDARD =3D "LVTTL" |IOBDELAY =3D "IFD";

According to the spartan3-hdl.pdf the delay buffer is supposed to be instan=
tiated, but maybe for some reason, the original IBUF is still used. I have =
not been able to find much more info on this in Xilinx docs.

--=20
Svenn

Article: 152555
Subject: The Manifest Destiny of Computer Architectures
From: Steve Richfield <steve.richfield.spam@gmail.com>
Date: Wed, 14 Sep 2011 12:54:58 -0700 (PDT)
Fellow Architects,

At every computer conference I attend, I see numerous papers that show
how to incrementally increase the capabilities of present products,
plus a paper or two about some aspect of distant future processors.
There is a sort of consistency among these papers that, taken
together, creates an image of the manifest destiny of processors that
are VERY different from present-day processors and networks. I am
interested in that image, and I suspect that others here may also be
interested.

Here is the sort of image that I see emerging. Perhaps you have your
own very different vision?

1.  Processors would be able to automatically reconfigure around their
defects with such great facility that reject components will be nearly
eliminated. This would make it possible to build processors without
any practical limits to complexity. Several papers have been presented
explaining how this could be done with Genetic Algorithm (GA)
approaches. Initial reconfiguring would be done at manufacture, but
power-on reconfiguring would adapt to on-shelf and in-service
failures. Processors with large numbers of defects would be sold as
lesser performing processors.

2.  An operating system would distribute the work as tasks, with each
task having input and output vectors. Any task that fails to
successfully complete would be re-executed on other sections of the
processor while diagnostics identify the problem in the failed
section, which would then be reconfigured around the new defect. This
would allow systems to keep running and continue producing correct
results, despite run-time failures.

3.  Memory would be integral to the CPU, and would be in the form of
thousands (or millions) of small memory banks that would eliminate the
memory bus bottleneck. Switched memory buses could quickly move blocks
of data around.

4.  The processor would be organized as a small (2-4) number of CPUs,
each having a large number of sub-processors capable of dynamic
reconfiguration to specialize in the computation at hand. That
reconfiguration would be capable of the extensive data-chaining needed
to execute complex loops as single instructions, and do so in just a
few machine cycles, after suitable setup. Sub-processors would
probably be reconfigurable for either SIMD or MIMD operation.

5.  The system would probably use asynchronous logic extensively, not
only for its asynchronous capabilities, but also for its inherent
ability to automatically recognize its own malfunctions and trigger
reconfiguration.

6.  A new language with APL-like semantics would allow programmers to
state their wishes at a high enough level for compilers to determine
the low-level method of execution that best matches the particular
hardware that is available to execute it.

7.  There are other items on this list, but they aren't as easy to
explain, and they may not be essential to achieve the manifest destiny
of processors.

Note that the billions of dollars now spent on developing GPU-based
and large network-based processors, along with the software to run on
them, will have been WASTED as soon as Manifest Destiny processors
become available. Further, the personnel who fail to quickly make the
transition to Manifest Destiny processors will probably become
permanently unemployed, as has happened at various past points of
major architectural inflection.

Apparently the only conference around with a sufficiently broad
interest and attendance to host discussions at this level is
WORLDCOMP. This would provide a peer reviewed avenue of legitimation
for Manifest Destiny research. I have talked with Hamid, the General
Chairman, about hosting these discussions, and he is OK with it,
providing that I can drum up enough interest. So, I need to determine
the level of interest out there in a more distant future of computing
that lies beyond just the next product.

Conferences aside, please email me or post your level of interest, and
please pass this on to any others you know who might be interested.

1.  I am interested as a lurker.

2.  I am interested in participating in on-line discussions.

3.  I am interested in attending a conference.

4.  I am interested in presenting at a conference.

5.  I am interested in chairing and/or helping any way I can.

6.  I am a major player with some money to help advance this cause.

Thanks for your help.

Steve dot Richfield at gmail dot com

Article: 152556
Subject: Re: The Manifest Destiny of Computer Architectures
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Wed, 14 Sep 2011 21:06:16 +0000 (UTC)
In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:

> At every computer conference I attend, I see numerous papers that show
> how to incrementally increase the capabilities of present products,
> plus a paper or two about some aspect of distant future processors.
> There is a sort of consistency among these papers that, taken
> together, creates an image of the manifest destiny of processors that
> are VERY different from present-day processors and networks. I am
> interested in that image, and I suspect that others here may also be
> interested.

I am reading in comp.arch.fpga, but comp.arch readers may have
different ideas.

> Here is the sort of image that I see emerging. Perhaps you have your
> own very different vision?

> 1.  Processors would be able to automatically reconfigure around their
> defects with such great facility that reject components will be nearly
> eliminated. This would make it possible to build processors without
> any practical limits to complexity. Several papers have been presented
> explaining how this could be done with Genetic Algorithm (GA)
> approaches. Initial reconfiguring would be done at manufacture, but
> power-on reconfiguring would adapt to on-shelf and in-service
> failures. Processors with large numbers of defects would be sold as
> lesser performing processors.

Reminds me of stories about Russian processors that came with
a 'bad instruction' list the way disk drives (used to) come with
a bad blocks list.  

If you follow such conferences, you necessarily get far-out ideas.
But if you look at the actual processors in use today, they
are not so different from 40 years ago.  Bigger and faster, yes,
but otherwise not that different.

> 2.  An operating system would distribute the work as tasks, with each
> task having input and output vectors. Any task that fails to
> successfully complete would be re-executed on other sections of the
> processor while diagnostics identify the problem in the failed
> section, which would then be reconfigured around the new defect. This
> would allow systems to keep running and continue producing correct
> results, despite run-time failures.

I suppose there are some problems that could work that way.  
A web browser updating multiple windows on a page could farm out
each to a different task.  But many computational problems don't
divide up that way.

> 3.  Memory would be integral to the CPU, and would be in the form of
> thousands (or millions) of small memory banks that would eliminate the
> memory bus bottleneck. Switched memory buses could quickly move blocks
> of data around.

> 4.  The processor would be organized as a small (2-4) number of CPUs,
> each having a large number of sub-processors capable of dynamic
> reconfiguration to specialize in the computation at hand. That
> reconfiguration would be capable of the extensive data-chaining needed
> to execute complex loops as single instructions, and do so in just a
> few machine cycles, after suitable setup. Sub-processors would
> probably be reconfigurable for either SIMD or MIMD operation.

Very few problems divide up that way.  For those that do, static
reconfiguration is usually the best choice.  Dynamic reconfiguration
is fun, but most often doesn't seem to work well with real problems.

> 5.  The system would probably use asynchronous logic extensively, not
> only for its asynchronous capabilities, but also for its inherent
> ability to automatically recognize its own malfunctions and trigger
> reconfiguration.

> 6.  A new language with APL-like semantics would allow programmers to
> state their wishes at a high enough level for compilers to determine
> the low-level method of execution that best matches the particular
> hardware that is available to execute it.

APL hasn't been popular over the years, and it could have done
most of this for a long time.  On the other hand, you might look
at the ZPL language.  Not as high-level, but maybe more practical.

> 7.  There are other items on this list, but they aren't as easy to
> explain, and they may not be essential to achieve the manifest destiny
> of processors.

> Note that the Billions of dollars now spent on developing GPU-based
> and large network-based processors, along with the software to run on
> them will have been WASTED as soon as Manifest Destiny processors
> become available. Further, the personnel who fail to quickly make the
> transition to Manifest Destiny processors will probably become
> permanently unemployed, as has happened at various past points of
> major architectural inflection.

Consider that direct descendants of the 35-year-old Z80 are still
very popular, among other places in many calculators and controllers.
New developments might be used for certain problems, but the old
problems can be handled just fine with older processors.

For many years now, the economy of scale of people buying
faster processors to browse the web or run spreadsheets has
supplied computational sciences (computational physics, computational
chemistry, and computational biology) with cheap, fast machines.
Machines that wouldn't have had sufficient economy of scale
without those other uses.  The whole idea behind GPU processors
is that the economy of scale of building graphics engines for
gamers can also be used for computational science.

> Apparently the only conference around with a sufficiently broad
> interest and attendance to host discussions at this level is
> WORLDCOMP. This would provide a peer reviewed avenue of legitimation
> for Manifest Destiny research. I have talked with Hamid, the General
> Chairman, about hosting these discussions, and he is OK with it,
> providing that I can drum up enough interest. So, I need to determine
> the level of interest out there in a more distant future of computing
> that lies beyond just the next product.

Consider the latest deviation from traditional processor design,
the VLIW Itanium.  VLIW has been around for years, and never did
very well.  Some thought its time had come, but it is sinking just
like the similarly named boat.

> Conferences aside, please email me or post your level of interest, and
> please pass this on to any others you know who might be interested.

-- glen

Article: 152557
Subject: Xilinx Tin Whiskers ?
From: Jon Elson <jmelson@wustl.edu>
Date: Wed, 14 Sep 2011 16:23:27 -0500
I think I mentioned this problem a year or so ago, but I have new data.
We previously had problems with whiskers shorting adjacent pins on some
boards that have a Xilinx XC9572-15TQG100C part. These whiskers were
lying flat on the board, so their origin was not completely clear.

Now, I have some boards that were reflow soldered some months ago and
were only finished now. On inspection of the CPLD, evidence of tin
whisker growth is obvious. I think EVERY chip has whisker growth
on at least one pin!  This is quite a concern, as this equipment may
have a 20-year operating life.

There are 12 other fine-pitch parts on this board, and none of those
show signs of the whiskers.

I reported the first occurrence to Xilinx at the time, including
microphotographs,
and they basically blew me off, saying it was obviously my process.
We are using tin-lead solder paste on tin-lead plated boards.

Does anyone have any idea why we are experiencing this, or what can be
done to prevent these chips from developing shorts over time?

Thanks,

Jon

Article: 152558
Subject: Re: Xilinx Tin Whiskers ?
From: Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de>
Date: Wed, 14 Sep 2011 21:54:46 +0000 (UTC)
Jon Elson <jmelson@wustl.edu> wrote:
...
> I reported the first occurrence to Xilinx at the time, including
> microphotographs,

Please put the pictures on the web
...
-- 
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 152559
Subject: Re: The Manifest Destiny of Computer Architectures
From: Stefan Monnier <monnier@iro.umontreal.ca>
Date: Wed, 14 Sep 2011 18:08:04 -0400
> If you follow such conferences, you necessarily get far-out ideas.
> But if you look at the actual processors in use today, they
> are not so different from 40 years ago.  Bigger and faster, yes,
> but otherwise not that different.

Actually, if you look back at "far out research" from years ago, I think
that even though the machines we use are "like the ones from back then"
from a programming point of view, they are also in some ways "like the
far-out ideas from back then".

E.g. the "Processor In Memory" still hasn't happened, but current CPUs
have a boat load of on-chip memory.  So I think the way to predict the
future is to take those far-out ideas and try to see "how will future
engineers manage to use such techniques while still running x86/ARM
code".  After all, experience shows that the part that's harder to
change is the software, despite its name.


        Stefan

Article: 152560
Subject: Re: Xilinx Tin Whiskers ?
From: Jon Elson <jmelson@wustl.edu>
Date: Wed, 14 Sep 2011 17:11:31 -0500
On 09/14/2011 04:54 PM, Uwe Bonnes wrote:
> Jon Elson<jmelson@wustl.edu>  wrote:
> ...
>> I reported the first occurrence to Xilinx at the time, including
>> microphotographs,
>
> Please put the pictures on the web
> ...
They are really crummy, and show the "old" problem: some whisker-like
strands that lay across the board. This new condition is different, and
shows REALLY typical-looking tin whiskers that are growing out of the
bends of the gull-wing leads on these Xilinx QFP parts. The last time
I tried photographing this, I got very mediocre results; the stereo
zoom microscope setup we have is optimized for hand rework of parts,
and the light level decreases as you increase magnification. So,
although I can see what is going on quite clearly, I doubt the pictures
would be very definitive. But I have NO doubt whatsoever that what I
am seeing NOW matches the published tin whisker photos that are
ubiquitous on the web.

What has me worried is that these are essentially new boards, just going
through testing before being sent out to researchers who will be using
them for a number of years. If I saw this amount of whisker growth in
the six months these boards have been in storage after reflow, it may
indicate a LOT of problems in the future. It has definitely gotten me
worried!

(As for posting this as a reply to another thread, my first post as a new
thread was rejected by some news server, but I could not discern the 
reason for the rejection.)

Jon

Article: 152561
Subject: Re: The Manifest Destiny of Computer Architectures
From: Quadibloc <jsavard@ecn.ab.ca>
Date: Wed, 14 Sep 2011 15:47:00 -0700 (PDT)
On Sep 14, 4:08 pm, Stefan Monnier <monn...@iro.umontreal.ca> wrote:

> E.g. the "Processor In Memory" still hasn't happened, but current CPUs
> have a boat load of on-chip memory.  So I think the way to predict the
> future is to take those far-out ideas and try to see "how will future
> engineers manage to use such techniques while still running x86/ARM
> code".  After all, experience shows that the part that's harder to
> change is the software, despite its name.

My main question concerning the original poster's projection of the
logical future for processors is that reconfigurability comes with a
large amount of overhead. So if leaving reconfigurability out improves
speed by a factor of, say, 3, reconfigurability won't be popular.

However, that's only true if the reconfigurability is fine-grained, as
on an FPGA. On something like IBM's recent PowerPC chip with 18 CPUs,
where one of them can be marked as bad, so that it uses one for
supervision and 16 for work, there is almost no overhead.

So, just as larger caches are the present-day form of memory on the
chip, coarse-grained configurability will be the way to increase
yields, if not the way to progress to that old idea of wafer-scale
integration. (That was, of course, back in the days of three-inch
wafers. Fitting an eight-inch wafer into a convenient consumer
package, let alone dealing with its heat dissipation, hardly bears
thinking about.)

John Savard

Article: 152562
Subject: Can't get the Xilinx cable drivers installed on SL6.1 (RHEL 6.1)
From: General Schvantzkoph <schvantzkoph@yahoo.com>
Date: 14 Sep 2011 22:51:14 GMT
Has anyone been able to get Impact or Chipscope working on SL6.1/CentOS6/
RHEL6?

It failed with the xsetup GUI, but the log contained only a useless
error message saying that it had failed.

When I tried to run the install script in

LabTools/LabTools/bin/lin64/install_script/install_drivers

I got a bunch of compile errors; apparently it's incompatible with a
2.6.32 kernel.

I also couldn't find libusb-driver in 13.2, the most recent copy that I 
had was in 10. 


Article: 152563
Subject: Re: Xilinx Tin Whiskers ?
From: Jim Granville <j.m.granville@gmail.com>
Date: Wed, 14 Sep 2011 15:53:58 -0700 (PDT)
On Sep 15, 10:11 am, Jon Elson <jmel...@wustl.edu> wrote:
>  This new condition is different, and
> shows REALLY typical-looking tin whiskers that are growing out of the
> bends of the gull-wing leads on these Xilinx QFP parts.

Interesting location; it suggests stress helps?

Are these on both bends, and on the compression or the tension part of
each?

Were these manually or reflow soldered?
At lead-based or lead-free temperatures? Post-cleaned or not?

-jg

Article: 152564
Subject: Re: Can't get the Xilinx cable drivers installed on SL6.1 (RHEL 6.1)
From: Steve <theecobs@gmail.com>
Date: Thu, 15 Sep 2011 12:42:21 +1000
On 09/15/2011 08:51 AM, General Schvantzkoph wrote:
> Has anyone been able to get Impact or Chipscope working on SL6.1/CentOS6/
> RHEL6?
>
> It failed with the xsetup GUI but it only gave a useless error message
> that it failed in the log.
>
> When I tried to run the install script in
>
> LabTools/LabTools/bin/lin64/install_script/install_drivers
>
> I got a bunch of compile errors, apparently it's incompatible with a
> 2.6.32 kernel.
>
> I also couldn't find libusb-driver in 13.2, the most recent copy that I
> had was in 10.
>

I had a similar problem getting my Xilinx USB-JTAG cable working on 
Fedora 13.  I ended up using the open source Linux driver instead, works 
fine:

http://rmdir.de/~michael/xilinx/

Steve Ecob
Silicon On Inspiration
Sydney Australia


Article: 152565
Subject: Re: Can't get the Xilinx cable drivers installed on SL6.1 (RHEL
From: General Schvantzkoph <schvantzkoph@yahoo.com>
Date: 15 Sep 2011 03:02:23 GMT
On Thu, 15 Sep 2011 12:42:21 +1000, Steve wrote:

> On 09/15/2011 08:51 AM, General Schvantzkoph wrote:
>> Has anyone been able to get Impact or Chipscope working on
>> SL6.1/CentOS6/ RHEL6?
>>
>> It failed with the xsetup GUI but it only gave a useless error message
>> that it failed in the log.
>>
>> When I tried to run the install script in
>>
>> LabTools/LabTools/bin/lin64/install_script/install_drivers
>>
>> I got a bunch of compile errors, apparently it's incompatible with a
>> 2.6.32 kernel.
>>
>> I also couldn't find libusb-driver in 13.2, the most recent copy that I
>> had was in 10.
>>
>>
> I had a similar problem getting my Xilinx USB-JTAG cable working on
> Fedora 13.  I ended up using the open source Linux driver instead, works
> fine:
> 
> http://rmdir.de/~michael/xilinx/
> 
> Steve Ecob
> Silicon On Inspiration
> Sydney Australia

I've already tried to build the libusb-driver but it won't build on SL6.1. 

Article: 152566
Subject: Re: The Manifest Destiny of Computer Architectures
From: Mark Thorson <nospam@sonic.net>
Date: Wed, 14 Sep 2011 20:53:48 -0800
Steve Richfield wrote:
> 
> 1.  Processors would be able to automatically reconfigure around their
> defects with such great facility that reject components will be nearly
> eliminated. This would make it possible to build processors without
> any practical limits to complexity. Several papers have been presented
> explaining how this could be done with Genetic Algorithm (GA)
> approaches. Initial reconfiguring would be done at manufacture, but
> power-on reconfiguring would adapt to on-shelf and in-service
> failures. Processors with large numbers of defects would be sold as
> lesser performing processors.

Defect density is hardly a limiting factor.  Thermal
and I/O are, both also being packaging and substrate
issues.  Also, it would introduce pain if different
chips with the same part number, revision level, and
date code had different performance.  Probably no fun
for the guys in the testing department, either.

I'm reminded of a friend of mine who worked on
binary code rehosting tools for Clipper. He'd rant
and rave about all the hardware bugs being hidden
by the assembler. When I told him that I learned
from this newsgroup that yield was being enhanced
by zapping individual bad cache lines to make them
permanently invalid, he just laughed.

Article: 152567
Subject: Re: The Manifest Destiny of Computer Architectures
From: Mark Thorson <nospam@sonic.net>
Date: Wed, 14 Sep 2011 20:58:28 -0800
glen herrmannsfeldt wrote:
> 
> In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:
> 
> > Note that the Billions of dollars now spent on developing GPU-based
> > and large network-based processors, along with the software to run on
> > them will have been WASTED as soon as Manifest Destiny processors
> > become available. Further, the personnel who fail to quickly make the
> > transition to Manifest Destiny processors will probably become
> > permanently unemployed, as has happened at various past points of
> > major architectural inflection.
> 
> Consider that direct decendant of the 35 year old Z80 are still
> very popular, among others in many calculators and controllers.
> New developments might be used for certain problems, but the old
> problems can be handled just fine with older processors.

8051 and PIC architecture hardware and software engineers
are still gainfully employed, perhaps more now than ever
before.  Maybe he was referring to the 6502?

Article: 152568
Subject: Re: The Manifest Destiny of Computer Architectures
From: Mark Thorson <nospam@sonic.net>
Date: Wed, 14 Sep 2011 21:02:09 -0800
Quadibloc wrote:
> 
> So, just as larger caches are the present-day form of memory on the
> chip, coarse-grained configurability will be the way to increase
> yields, if not the way to progress to that old idea of wafer-scale
> integration. (That was, of course, back in the days of three-inch
> wafers. Fitting an eight-inch wafer into a convenient consumer
> package, let alone dealing with its heat dissipation, hardly bears
> thinking about.)

Oh, sure it does.  Just have four of them on the top
of the box, put it in the kitchen, and call it a stove.

Article: 152569
Subject: Re: The Manifest Destiny of Computer Architectures
From: nmm1@cam.ac.uk
Date: Thu, 15 Sep 2011 06:24:11 +0100 (BST)
In article <j4r508$rgl$1@speranza.aioe.org>,
glen herrmannsfeldt  <gah@ugcs.caltech.edu> wrote:
>In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:
>
>> 5.  The system would probably use asynchronous logic extensively, not
>> only for its asynchronous capabilities, but also for its inherent
>> ability to automatically recognize its own malfunctions and trigger
>> reconfiguration.

This is a meaning of the term "asynchronous" with which I was
previously unfamiliar.

>> 6.  A new language with APL-like semantics would allow programmers to
>> state their wishes at a high enough level for compilers to determine
>> the low-level method of execution that best matches the particular
>> hardware that is available to execute it.
>
>APL hasn't been popular over the years, and it could have done
>most of this for a long time.  On the other hand, you might look
>at the ZPL language.  Not as high-level, but maybe more practical.

Its status as the leading write-only language has been taken
over by Perl; despite the claims of its proponents, it never was
exceptionally useful for scientific calculations or anything else.
Also, its model is a fair match to the computers that were being
fantasised about in the 1980s, rather than the 2000s.


I am very much in favour of people doing serious future thinking,
but it would have to be a lot better-informed and hard-headed,
and preferably more radical, than this.

For example, there are people starting to think about genuinely
unreliable computation, of the sort where you just have to live
with ALL paths being unreliable.  After all, we all use such a
computer every day ....


Regards,
Nick Maclaren.

Article: 152570
Subject: Re: Can't get the Xilinx cable drivers installed on SL6.1 (RHEL
From: Jan Pech <invalid@void.domain>
Date: Thu, 15 Sep 2011 08:31:17 +0200
Links: << >>  << T >>  << A >>
Is there any particular reason to compile your own libusb instead of
using the distribution packages?

To make the Xilinx JTAG cable work on RHEL/CentOS/SL 6.x, do the
following steps. There is a detailed description on my website
http://www.sensor-to-image.cz/doku.php?id=eda:xilinx but unfortunately
it is in Czech only. Sorry.

1. Install and "fix" libusb:

yum install libusb libusb1 fxload
cd /usr/lib64 (or /usr/lib if you are running 32b system)
ln -s libusb-1.0.so.0.0.0 libusb.so

2. "Fix" the Xilinx cable setup script
<xilinx_install_dir>/ISE_DS/ISE/bin/lin64/setup_pcusb (or the same path
with lin instead of lin64) which does not detect udev correctly:

# Use udev always
#TP_USE_UDEV="0"
#TP_UDEV_ENABLED=`ps -e | grep -c udevd`
TP_USE_UDEV="1"
TP_UDEV_ENABLED="1"

3. Run the script from its directory:

cd <xilinx_install_dir>/ISE_DS/ISE/bin/lin64 (or lin instead of lin64)
./setup_pcusb

4. The generated udev rules file uses outdated syntax. For current
versions of udev, /etc/udev/rules.d/xusbdfwu.rules must look like this
(each rule must stay on a single line; see my website for proper
formatting):

# version 0003
ATTR{idVendor}=="03fd", ATTR{idProduct}=="0008", MODE="666"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0007", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusbdfwu.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0009", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xup.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="000d", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_emb.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="000f", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xlp.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0013", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xp2.hex -D $tempnode"
SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0015", RUN+="/sbin/fxload -v -t fx2 -I /usr/share/xusb_xse.hex -D $tempnode"

5. Connect/reconnect your cable, check dmesg, test iMPACT/ChipScope.
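As a quick sanity check (my own sketch, not part of the official flow), you can verify that a rewritten rule line uses the current `ATTR{...}`/`==` match syntax rather than the obsolete `SYSFS{...}` keys that the Xilinx script generates:

```python
import re

def uses_current_udev_syntax(rule_line):
    """Return True if the rule matches devices with ATTR{...} keys and '==',
    rather than the obsolete SYSFS{...} keys rejected by current udev."""
    if "SYSFS{" in rule_line:
        return False
    # Match keys (SUBSYSTEM, ACTION, ATTR{...}) use '==';
    # assignments (MODE, RUN+=) use '=' or '+='.
    return bool(re.search(r'ATTR\{idVendor\}=="[0-9a-f]{4}"', rule_line))

good = 'SUBSYSTEM=="usb", ACTION=="add", ATTR{idVendor}=="03fd", ATTR{idProduct}=="0008", MODE="666"'
bad  = 'BUS=="usb", SYSFS{idVendor}=="03fd", SYSFS{idProduct}=="0008", MODE="666"'
print(uses_current_udev_syntax(good))  # True
print(uses_current_udev_syntax(bad))   # False
```

Running this over each non-comment line of xusbdfwu.rules should print True for every rule before you reconnect the cable.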

Regards,
Jan


Sorry if the post appears twice. I had some problems posting the
message.



On Thu, 2011-09-15 at 03:02 +0000, General Schvantzkoph wrote:
> On Thu, 15 Sep 2011 12:42:21 +1000, Steve wrote:
>
> > On 09/15/2011 08:51 AM, General Schvantzkoph wrote:
> >> Has anyone been able to get Impact or Chipscope working on
> >> SL6.1/CentOS6/ RHEL6?
> >>
> >> It failed with the xsetup GUI but it only gave a useless error message
> >> that it failed in the log.
> >>
> >> When I tried to run the install script in
> >>
> >> LabTools/LabTools/bin/lin64/install_script/install_drivers
> >>
> >> I got a bunch of compile errors, apparently it's incompatible with a
> >> 2.6.32 kernel.
> >>
> >> I also couldn't find libusb-driver in 13.2, the most recent copy that I
> >> had was in 10.
> >>
> >>
> > I had a similar problem getting my Xilinx USB-JTAG cable working on
> > Fedora 13.  I ended up using the open source Linux driver instead, works
> > fine:
> >
> > http://rmdir.de/~michael/xilinx/
> >=20
> > Steve Ecob
> > Silicon On Inspiration
> > Sydney Australia
>
> I've already tried to build the libusb-driver but it won't build on SL6.1.



Article: 152571
Subject: CONSTRAINTS
From: "varun_agr" <VARUN_AGR@n_o_s_p_a_m.n_o_s_p_a_m.YAHOO.COM>
Date: Thu, 15 Sep 2011 01:52:04 -0500
Links: << >>  << T >>  << A >>
Sir,
When we run our VHDL program, it synthesizes and implements without any
error. In the place and route report it gives:

Generating Clock Report
**************************

+---------------------+--------------+------+------+------------+-------------+
|        Clock Net    |   Resource   |Locked|Fanout|Net Skew(ns)|Max Delay(ns)|
+---------------------+--------------+------+------+------------+-------------+
|           clk_BUFGP |     BUFGMUX0P| No   | 1886 |  0.280     |  1.257      |
+---------------------+--------------+------+------+------------+-------------+
|               s_clk |     BUFGMUX5S| No   |  174 |  0.205     |  1.238      |
+---------------------+--------------+------+------+------------+-------------+
|      hyperdis/h_clk |     BUFGMUX2P| No   |   38 |  0.273     |  1.250      |
+---------------------+--------------+------+------+------------+-------------+
|              s_clk1 |         Local|      |   62 |  0.213     |  2.535      |
+---------------------+--------------+------+------+------------+-------------+
|   trpcnt_cmp_eq0000 |         Local|      |   18 |  0.000     |  1.360      |
+---------------------+--------------+------+------+------------+-------------+

* Net Skew is the difference between the minimum and maximum routing
only delays for the net. Note this is different from Clock Skew which
is reported in TRCE timing report. Clock Skew is the difference between
the minimum and maximum path delays which includes logic delays.
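In other words, for any one clock net the skew figure is just the spread of its routing delays. A toy illustration (the per-load delays below are made up, not taken from the report):

```python
def net_skew(routing_delays_ns):
    """Net skew = max - min of the routing-only delays to each load on the net."""
    return max(routing_delays_ns) - min(routing_delays_ns)

# Hypothetical routing delays (ns) to three loads of one clock net:
delays = [0.977, 1.120, 1.257]
print(round(net_skew(delays), 3))  # 0.28
```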

Timing Score: 232989

INFO:Timing:2761 - N/A entries in the Constraints list may indicate that
   the constraint does not cover any paths or that it has no requested value.
Asterisk (*) preceding a constraint indicates it was not met.
   This may be due to a setup or hold violation.

------------------------------------------------------------------------------------------------------
  Constraint                                |  Check  | Worst Case | Best Case  | Timing |   Timing
                                            |         |    Slack   | Achievable | Errors |    Score
------------------------------------------------------------------------------------------------------
  Autotimespec constraint for clock net     | SETUP   |         N/A|    45.997ns|     N/A|           0
  clk_BUFGP                                 | HOLD    |     0.543ns|            |       0|           0
------------------------------------------------------------------------------------------------------
  Autotimespec constraint for clock net     | SETUP   |         N/A|    20.484ns|     N/A|           0
  hyperdis/h_clk                            | HOLD    |     0.658ns|            |       0|           0
------------------------------------------------------------------------------------------------------
* Autotimespec constraint for clock net     | SETUP   |         N/A|    11.313ns|     N/A|           0
  s_clk1                                    | HOLD    |    -2.837ns|            |     124|      232989
------------------------------------------------------------------------------------------------------


1 constraint not met.
INFO:Timing:2761 - N/A entries in the Constraints list may indicate that
the 
   constraint does not cover any paths or that it has no requested value.
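For context on the number itself: to my understanding (this is a sketch of the idea, not a documented formula from the report above), the Timing Score accumulates the amount by which each failing path misses its constraint, in picoseconds, so a few deeply failing paths can dominate it. With hypothetical slacks:

```python
def timing_score(slacks_ns):
    """Sum of negative slack, converted to ps, over all failing paths.
    Paths with non-negative slack contribute nothing."""
    return sum(int(round(-s * 1000)) for s in slacks_ns if s < 0)

# Hypothetical path slacks (ns): two failing hold paths, one passing path.
print(timing_score([-2.837, -1.5, 0.543]))  # 4337
```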


Generating Pad Report.

All signals are completely routed.
Sir, I want to know the meaning of "1 constraint not met" and how to
resolve it.
Thanks
Varun
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 152572
Subject: Re: Xilinx Tin Whiskers ?
From: nico@puntnl.niks (Nico Coesel)
Date: Thu, 15 Sep 2011 07:18:35 GMT
Links: << >>  << T >>  << A >>
Jon Elson <jmelson@wustl.edu> wrote:

>I think I mentioned this problem a year or so ago, but have new data.
>We previously had problems with whiskers shorting adjacent pins on some
>boards that
>have a Xilinx XC9572-15TQG100C part.  These whiskers were laying flat on
>the board, so their origin was not completely clear.
>
>Now, I have some boards that were reflow soldered some months ago, and
>were only finished now.  On inspection of the CPLD, clear evidence of
>Tin whisker growth is obvious.  I think EVERY chip has whisker growth
>on at least one pin!  This is quite a concern, as this equipment may
>have a 20 year operating life.
>
>There are 12 other fine-pitch parts on this board, and none of those
>show signs of the whiskers.
>
>I reported the first occurrence to Xilinx at the time, including
>microphotographs,
>and they basically blew me off, saying it was obviously my process.
>We are using tin-lead solder paste on tin-lead plated boards.
>
>Does anyone have any idea why we are experiencing this, or what can be
>done to prevent these chips from developing shorts over time?

My guess is that you'll need to look at the temperature profile of the
soldering process. I'd get some lead-free soldering experts to look at
the problem.

-- 
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 152573
Subject: Re: The Manifest Destiny of Computer Architectures
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Thu, 15 Sep 2011 07:32:30 +0000 (UTC)
Links: << >>  << T >>  << A >>
In comp.arch.fpga nmm1@cam.ac.uk wrote:

(snip)
>>In comp.arch.fpga Steve Richfield <steve.richfield.spam@gmail.com> wrote:

>>> 5.  The system would probably use asynchronous logic extensively, not
>>> only for its asynchronous capabilities, but also for its inherent
>>> ability to automatically recognize its own malfunctions and trigger
>>> reconfiguration.

> This is a meaning of the term "asynchronous" with which I was
> previously unfamiliar.

It does seem a little unusual.  Asynchronous logic, sometimes also
known as self-timed logic, has been around for years.  Some of it is
described in: http://en.wikipedia.org/wiki/Asynchronous_logic

I suppose I believe that some failure modes could be detected
and a corrective action initiated.  
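For the curious, the canonical building block of self-timed logic is the Muller C-element: the output follows the inputs when they agree and holds its previous value when they disagree. A behavioural sketch (my own illustration, not tied to any particular async methodology):

```python
def c_element(a, b, q_prev):
    """Muller C-element: output goes high when both inputs are high,
    low when both are low, and otherwise holds its previous value."""
    if a == b:
        return a
    return q_prev

# Handshake-style behaviour: the output changes only when inputs agree.
print(c_element(1, 1, 0))  # 1  (both high -> set)
print(c_element(1, 0, 1))  # 1  (inputs disagree -> hold)
print(c_element(0, 0, 1))  # 0  (both low -> reset)
```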

-- glen

Article: 152574
Subject: Re: CONSTRAINTS
From: "RCIngham" <robert.ingham@n_o_s_p_a_m.n_o_s_p_a_m.gmail.com>
Date: Thu, 15 Sep 2011 04:32:49 -0500
Links: << >>  << T >>  << A >>
The OP has a duplicate thread at:
http://forums.xilinx.com/t5/Timing-Analysis/Constraint-not-met-in-Place-and-Route-Report/m-p/177534
---------------------------------------		
Posted through http://www.FPGARelated.com


