Messages from 16425

Article: 16425
Subject: Request FAQ
From: "Rob Putala" <rcp@frontiernet.net>
Date: Fri, 21 May 1999 08:41:17 -0400
Links: << >>  << T >>  << A >>
Where can I obtain a FAQ for this newsgroup?  I would like to advise
subscribers of opportunities I am working on, but do not want to upset
anyone or get flamed.

Any help with the rules will be appreciated!  Also, any suggestions on other
newsgroups will be welcomed.

--
rcp@frontiernet.net
F-O-R-T-U-N-E Personnel Consultants of Worcester
(978) 466 - 6800


Article: 16426
Subject: CFP: FPGA 2000
From: herman@galant.ece.cmu.edu (Herman Schmit)
Date: 21 May 1999 13:17:36 GMT
Links: << >>  << T >>  << A >>

                                 FPGA 2000
                              Call for Papers

       Eighth ACM International Symposium on Field-Programmable Gate Arrays

                            Monterey, California
                            February 10-11, 2000

The annual ACM/SIGDA International Symposium on Field-Programmable
Gate Arrays is the premier conference for presentation of advances in
all areas related to FPGA technology.  For FPGA 2000, we are
soliciting submissions describing novel research and development in
the following (and related) areas of interest:

* FPGA Architecture: Logic block & routing architectures, I/O
  structures and circuits, new commercial architectures,
  Field-Programmable Interconnect Chips and Devices (FPIC/FPID),
  Field-Programmable Analog Arrays (FPAA).

* CAD for FPGAs: Placement, routing, logic optimization, technology
  mapping, system-level partitioning, logic generators, testing and
  verification. CAD for FPGA-based accelerators.

* Applications: Innovative use of FPGAs, exploitation of FPGA
  features, novel circuits, high-performance and
  low-power/mission-critical applications, DSP techniques, uses of
  reconfiguration, FPGA-based cores.

* FPGA-based computing engines: Compiled accelerators, reconfigurable
  computing, adaptive computing devices, systems and software.

* Rapid-prototyping: Fast prototyping for system level design,
  Multi-Chip Modules (MCMs), logic emulation.

Authors are invited to submit a PDF of their paper (12 pages maximum) by
October 1, 1999 via E-mail to hauck@ece.nwu.edu.  Notification of
acceptance will be sent by November 17, 1999.  The authors of the
accepted papers will be required to submit the final camera-ready copy
by December 1, 1999.  A proceedings of the accepted papers will be
published by ACM, and included in the Annual ACM/SIGDA CD-ROM
Compendium publication.  Address questions to:

Scott Hauck
Program Chair, FPGA 2000
Dept. of ECE, Northwestern University
2145 Sheridan Rd
Evanston, IL  60208
Phone: (847) 467-1849  Fax: (847) 467-4144    hauck@ece.nwu.edu

General Chair: Steve Trimberger, Xilinx	
Program Chair: Scott Hauck, Northwestern U.
Finance Chair: Sinan Kaptanoglu
Publicity Chair: Herman Schmit, CMU

Program Committee:
Miron Abramovici, Lucent                  David Lewis, U. of Toronto
Ray Andraka, Andraka Consulting		  Fabrizio Lombardi, Northeastern U.
Mike Bershteyn, Quickturn		  Wayne Luk, Imperial College
Michael Butts, Synopsys			  Margaret Marek-Sadowska, UCSB
Jason Cong, UCLA			  Jan Rabaey, UCB
Eugene Ding, Lucent			  Jonathan Rose, U. of Toronto
Carl Ebeling, U. of Washington		  Martine Schlag, UCSC
Reiner Hartenstein, U. Kaiserslautern	  Herman Schmit, CMU
Scott Hauck, Northwestern U.		  Tim Southgate, Altera
Brad Hutchings, BYU			  Russ Tessier, U. Mass. - Amherst
Sinan Kaptanoglu, Actel			  Steve Trimberger, Xilinx
Tom Kean, Algotronix			  John Wawrzynek, UCB
Martin Wong, UT at Austin

Sponsored by ACM SIGDA, with support from Altera, Lucent, Actel and Xilinx.

Please visit the web site <http://www.ece.cmu.edu/~fpga2000> for more
information.


Article: 16427
Subject: Re: How synthesize tools concern with size of the design?
From: Magnus Homann <d0asta@palver.dtek.chalmers.se>
Date: 21 May 1999 17:20:20 +0200
Links: << >>  << T >>  << A >>
David Pashley <David@edasource.com> writes:
> 
> How does the fact that Synopsys goes from strength to strength, and is
> unarguably both successful and competitive (see their website, Q2 sales
> 21% up at $190m) square with your comments?
> 

Do you sell Synopsys products?

Homann
-- 
   Magnus Homann  Email: d0asta@dtek.chalmers.se
                  URL  : http://www.dtek.chalmers.se/DCIG/d0asta.html
  The Climbing Archive!: http://www.dtek.chalmers.se/Climbing/index.html
Article: 16428
Subject: Re: How synthesize tools concern with size of the design?
From: David Pashley <David@edasource.com>
Date: Fri, 21 May 1999 17:55:57 +0100
Links: << >>  << T >>  << A >>
In article <ltpv3utpor.fsf@palver.dtek.chalmers.se>, Magnus Homann
<d0asta@palver.dtek.chalmers.se> writes
>David Pashley <David@edasource.com> writes:
>> 
>> How does the fact that Synopsys goes from strength to strength, and is
>> unarguably both successful and competitive (see their website, Q2 sales
>> 21% up at $190m) square with your comments?
>> 
>
>Do you sell Synopsys products?
>
Indirectly, yes. Among other activities, we sell and support Viewlogic
products which include an OEM of FPGA Express.

However, I was trying to substantiate a point (that while you might
debate FPGA Express, DC is *clearly* competitive), not promote Synopsys.

What I'm trying to work out is why the criticisms levelled at FPGA
Express in terms of language don't also affect DC (or maybe they do?).

-- 
David Pashley                    <
 ---------------------------  <  <  <  --- mailto:david@edasource.com
| Direct Insight Ltd       <  <  <  <  >   Tel: +44 1280 700262      |
| http://www.edasource.com    <  <  <      Fax: +44 1280 700577      |
 ------------------------------  <  ---------------------------------
Article: 16429
Subject: Re: High Speed Reconfigurability
From: brian_n_miller@yahoo.com
Date: Fri, 21 May 1999 17:28:42 GMT
Links: << >>  << T >>  << A >>
sc@vcc.com wrote:
>
> Software engineers are starting to
> say "you can do what?" ...  The goal, of course, is
> to not even know whether you have a reconfigurable
> computer or not and just be able to code away as usual.

So which is it?  Heads up to the software developers, or not?


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---
Article: 16430
Subject: Re: High Speed Reconfigurability
From: brian_n_miller@yahoo.com
Date: Fri, 21 May 1999 17:32:49 GMT
Links: << >>  << T >>  << A >>
rolandpj@bigfoot.com wrote:
>
> I like to view the problem as an extension of HotSpot/JIT.
> ... Why not do the same thing, but right down to the hardware?

Reliability.  If an FPGA fails to reconfigure itself properly,
then how to recover?


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---
Article: 16431
Subject: Re: How synthesize tools concern with size of the design?
From: "Andy Peters" <apeters@noao.edu.NOSPAM>
Date: Fri, 21 May 1999 10:33:17 -0700
Links: << >>  << T >>  << A >>
David Pashley wrote in message <4PYc7IAHxRR3MA7i@edasource.com>...
>In article <37446aa8.4459222@news.dial.pipex.com>, ems@riverside-
>machines.com.NOSPAM writes
>>there is absolutely *no way* that fpga express is competitive in terms
>>of vhdl language support, and i'm sure that you know this as well as
>>the rest of us do. true, there have been major advances in both DC and
>>express recently, but you've still got some way to go.
>>
>>on the other hand, there have been some really nice additions in 3.1 -
>>FST/scripting and attribute passing primarily, which would make this a
>>very competitive tool if you got the language support right.
>>
>>evan
>
>As I understand it, and as you suggest, the language support in DC and
>FPGA Express is the same.
>
>How does the fact that Synopsys goes from strength to strength, and is
>unarguably both successful and competitive (see their website, Q2 sales
>21% up at $190m) square with your comments?


They're the Microsoft of the synthesis market.

>Do you mean that the language requirements for FPGA are totally
>different from those for ASIC?


No, he means that Synopsys' support of VHDL differs from the standard.  For
instance, their whole idea of putting the std_logic_arith/unsigned/signed
stuff into the ieee library, rather than calling it the synopsys library.
And the lack of real VHDL'93 support (in 1999, no less).


-- a
------------------------------------------
Andy Peters
Sr. Electrical Engineer
National Optical Astronomy Observatories
950 N Cherry Ave
Tucson, AZ 85719
apeters@noao.edu

"Space, reconnaissance, weather, communications - you name it. We use space
a lot today."
-- Vice President Dan Quayle



Article: 16432
Subject: Re: Xilinx M1.5 Crash
From: Bret Wade <bret.wade@xilinx.com>
Date: Fri, 21 May 1999 11:49:24 -0600
Links: << >>  << T >>  << A >>

Adam J. Elbirt wrote:

> Terry,
>
> That's interesting.  I downloaded SP1 and 2 from the web site
> specifically from the 1.5 area, not the 1.5i area.  When I talked to
> Xilinx support they claimed that the service packs were fine for 1.5
> and that they had never heard of the page faulting problem.  They're
> still working on it so hopefully we'll hear from them soon.
>
> Adam
>

Hello Adam,

The Service Packs for version 1.5i are not compatible with version 1.5.
Version 1.5 did have an SP1, but not an SP2, so it appears that you may
have installed 1.5i SP2 into a 1.5 environment which will not work. I
suggest installing 1.5i and applying the 1.5i SP2 update.

BTW, I was unable to find an open hotline case for you in the call
tracking system. Please send me the case number and I'll check into it.

Regards,
Bret Wade
Xilinx Product Applications



Article: 16433
Subject: Re: High Speed Reconfigurability
From: Tim Tyler <tt@cryogen.com>
Date: Fri, 21 May 1999 17:55:44 GMT
Links: << >>  << T >>  << A >>
Roland Paterson-Jones <rpjones@hursley.ibm.com> wrote:

[HotSpot...]

: Now, why not do the same thing, but right down to the hardware, rather than
: down to machine code. What you need, however, is a general compiler from a
: high-level language (Java bytecode?) to fpga gates. [...]

This would probably wind up being a rather poor way of exploiting the
power of the available hardware ;-/

While it's /possible/ to write multi-threaded Java code that's capable of
executing concurrently, threaded code is less common than it might be,
partly due to it being harder to write.

Apart from its thread support, Java's pretty inherently serial.  The
thought of attempting to extract parallelism from such a serial stream of
instructions usually makes me feel nauseous.

While I'm all in favour of a portable, universal means of describing
algorithms that enables them to be implemented efficiently on parallel
hardware, unfortunately, Java doesn't look /remotely/ like what I
envisage.

Java's ability to exploit parallelism (aside from threading code)
really isn't very good.  You even typically have to initialise an array of
objects using a "for" loop.  Writing a smart compiler to examine loops to
see if this sort of thing is occurring seems to me to be a backwards
approach: a /sensible/ concurrent language should allow this type of
parallelism to be explicitly indicated, rather than expecting advanced AI
techniques to extract the information about when it can and can't be done
from the code.
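[Illustrative aside, not part of the original post: the array-initialisation
complaint above can be made concrete. The sketch below uses invented class
and method names; it shows the serial loop Java forces on you, and the
thread plumbing "explicit" parallelism costs — with the independence of the
iterations living only in the programmer's head, where no compiler or FPGA
mapper can see it.]

```java
// Sketch of the point above (names are illustrative, not from the thread).
class ArrayInit {
    // The idiomatic form: a plain serial loop. Nothing in the language
    // states that the iterations are independent.
    static int[] squaresSerial(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i * i;
        return a;
    }

    // The same fill split across two explicit threads: correct, but the
    // programmer must spell out the partition by hand.
    static int[] squaresTwoThreads(int n) {
        final int[] a = new int[n];
        final int mid = n / 2;
        Thread upper = new Thread(() -> {
            for (int i = mid; i < a.length; i++) a[i] = i * i;
        });
        upper.start();
        for (int i = 0; i < mid; i++) a[i] = i * i;
        try {
            upper.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return a;
    }
}
```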

Extending Java's collection frameworks to allow operations to be performed
on all elements simultaneously, and some other more "functional"
bits-and-pieces might help with concurrency issues - but picking something
else entirely, something /without/ such a strong serial legacy would
appear to be a much more sensible idea to me.

I have other concerns, apart from speeding up Java's execution speed -
I want something capable of reasonably efficient exploitation of the
available hardware.

Personally, I'd like an approach based fairly directly on circuit design,
/preferably/ something that would allow construction of an n-dimensional
circuit from an original, higher-level representation.  I can't really see
how a written, language-based, serial HDL would be very suitable as a
basis for this.

I envisage something like a "rubber circuit" which 'stretches' to fit
the characteristics of the target hardware, while retaining the relevant
inter-component distances and ratios, for purposes of retaining correct
synchronous operation.
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  tt@cryogen.com

A clean disc is a sign of a sick computer.
Article: 16434
Subject: Re: High Speed Reconfigurability
From: Steven Casselman <sc@vcc.com>
Date: Fri, 21 May 1999 12:01:15 -0700
Links: << >>  << T >>  << A >>


Roland Paterson-Jones wrote:

>
>
> I like to view the problem as an extension of HotSpot/JIT technologies
> in virtual machine implementation, most notably lately in Java Virtual
> Machines. What these technologies do is profile a program on-the-fly (with
> embedded profiling instructions, or through interpretation, or whatever).
> When they determine that a certain portion of a program is heavily
> exercised, then that portion is (re-)compiled with heavy optimisation.

Hey, that's what my patent is about.
The patent examiner actually quoted "generation" from the dictionary.
http://www.patents.ibm.com/details?pn=US05684980__&s_clms=1

Look at claim 26.

>
>
> Now, why not do the same thing, but right down to the hardware, rather than
> down to machine code. What you need, however, is a general compiler from a
> high-level language (Java bytecode?) to fpga gates. According to the empty
> response to a previous posting, nobody is interested in such a thing.

Macro-Based Hardware Compilation of Java
Bytecodes into a Dynamic Reconfigurable
Computing System.
Cardoso et al. FCCM 99.

>
>
> What you also need is a sense of when this would be more useful than merely
> compiling to the machine-code level. I'd be inclined to think that it's
> almost always useful, since you can parallelize as much as the source code
> will allow for each specific case, which will always be better than a fixed
> super-scalar processor architecture.

It all depends on the bandwidth at which you can
supply the RC unit (in most cases).

>
>
> Now, each thread of your program will have a different hotspot footprint,
> so when you do a context switch at the software (thread) level, you switch
> your gate array for the hardware-implemented hotspots of the new thread.

That is what most companies do.  Take the bit stream and convert it
into a static array.  It can then be compiled directly into byte codes
or the software image.

>
>
> I believe this approach simplifies things, and also truly unifies hardware
> and software, since the programmer is entirely unaware of what's going on
> under the hood. That's what we want in the end, isn't it - the software is
> the machine.
>
> I have dreams of a single multitasked fpga doing all of the stuff that each
> separate component of a motherboard + cards currently does (or an array of
> fpga's multi-tasking this). Cheap and fast and simple (once you've
> implemented the JIT technology!).
>
> Roland

One day (I know it is heresy) all computers will be made of
FPGA-like devices and there will be no CPUs or any of the
current-day structures.  When quantum dots or whatever
come along and you have a tera-gate system with gigabytes
of memory diffused throughout the system, you won't need
a Pentium 23. In fact it will be too hard to design a CPU,
make it testable, and get it into production. It will take 20 years
just to do the test vectors. No, in the end everything will be
reconfigurable, and advances in computing will come
at the reconfigurable cell/architecture level, the
algorithm level and the algorithm-expression (HLL)
level.
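[Illustrative aside, not part of the original post: Steve's earlier caveat
that "it all depends on the bandwidth at which you can supply the RC unit"
can be given rough numbers. All figures below are assumed round values for
the sketch: a 32-bit, 33 MHz PCI bus peaks at about 132 MB/s, so if every
operation moves, say, 16 bytes across the bus, the bus caps the RC unit at
about 8.25 million ops/s no matter how fast the FPGA fabric computes.]

```java
// Back-of-envelope illustration of the I/O-bandwidth ceiling on a
// bus-attached reconfigurable computing unit (all numbers assumed).
class BandwidthCap {
    // Peak operation rate when each op must move bytesPerOp over a bus
    // sustaining busBytesPerSec.
    static double maxOpsPerSec(double busBytesPerSec, double bytesPerOp) {
        return busBytesPerSec / bytesPerOp;
    }

    public static void main(String[] args) {
        double pci = 33e6 * 4;  // 33 MHz x 4 bytes = 132 MB/s peak
        System.out.printf("bus-limited cap: %.2e ops/s%n",
                          maxOpsPerSec(pci, 16));
    }
}
```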

--
Steve Casselman, President
Virtual Computer Corporation
http://www.vcc.com


Article: 16435
Subject: Re: High Speed Reconfigurability
From: Steven Casselman <sc@vcc.com>
Date: Fri, 21 May 1999 13:16:08 -0700
Links: << >>  << T >>  << A >>


George wrote:

> Hi Steve.
>
> Any idea if this is available on-line anywhere?  I looked
> around on www.fccm.org but did not see it.
> > Macro-Based Hardware Compilation of Java
> > Bytecodes into a Dynamic Reconfigurable
> > Computing System.
> > Cardoso et al. FCCM 99.

I can't find it anywhere (except in the
pre-proceedings).  The authors are Joao M P Cardoso,
INESC/University of Algarve,
and Horacio C Neto:
{Joao.Cardoso, hcn}@inesc.pt


--
Steve Casselman, President
Virtual Computer Corporation
http://www.vcc.com


Article: 16436
Subject: Re: High Speed Reconfigurability
From: schuerig@acm.org (Michael Schuerig)
Date: Fri, 21 May 1999 22:21:30 +0200
Links: << >>  << T >>  << A >>
Roland Paterson-Jones <rpjones@hursley.ibm.com> wrote:

> > What you also need is a sense of when this would be more useful than merely
> > compiling to the machine-code level. I'd be inclined to think that it's
> > almost always useful, since you can parallelize as much as the source code
> > will allow for each specific case, which will always be better than a fixed
> > super-scalar processor architecture.
> >
> > Now, each thread of your program will have a different hotspot footprint,
> > so when you do a context switch at the software (thread) level, you switch
> > your gate array for the hardware-implemented hotspots of the new thread.

I have no clue about FPGAs, but, I'm wondering, aren't the switching
times prohibitive? Remember, you not only have to switch the hardware
around for thread switches in your own process, but also for context
switches among all the processes running on the hardware. Remember, you
don't own the processor.

Obviously, the switching circuitry takes up space on the chip. Maybe
it's more efficient to spend this space on more processing logic.

Another thing is, and here I'm indeed curious, what are the basic
hardware building blocks you want to arrange in a thread-specific
fashion? The finer the granularity, the costlier the reprogramming.
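[Illustrative aside, not part of the original post: for scale, the numbers
below are assumed round values, not datasheet figures. A mid-size FPGA of
this era might carry a configuration bitstream on the order of 200 Kbits;
loaded serially at roughly 10 MHz that is about 20 ms per full
reconfiguration, against the few microseconds of a software context switch.]

```java
// Rough arithmetic behind the switching-time concern above; bitstream
// size and configuration clock are assumed round numbers.
class ReconfigCost {
    // Time to load a full bitstream serially, in milliseconds.
    static double reconfigMillis(long bitstreamBits, double configClockHz) {
        return bitstreamBits / configClockHz * 1000.0;
    }

    public static void main(String[] args) {
        double t = reconfigMillis(200_000L, 10e6);  // ~20 ms
        // Versus ~0.01 ms for a software thread switch: reconfiguring on
        // every context switch costs three orders of magnitude more.
        System.out.printf("full device reconfiguration: %.1f ms%n", t);
    }
}
```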

Michael

-- 
Michael Schuerig
mailto:schuerig@acm.org
http://www.schuerig.de/michael/
Article: 16437
Subject: Re: High Speed Reconfigurability
From: Joao Manuel Paiva Cardoso <Joao.Cardoso@inesc.pt>
Date: Fri, 21 May 1999 21:48:00 +0100
Links: << >>  << T >>  << A >>
Hi,

I have put the paper (postscript format) in:
http://esda.inesc.pt/~jmpc

If you have any problem with it, just send me an email and I will
send you the paper.

I am working on a Web page describing the project. Soon, more
information will be available. Let me know if you would like me to send
you an email when the page is completed.

Thanks for the interest.

Regards,

Joao Cardoso


Steven Casselman wrote:

> George wrote:
>
> > Hi Steve.
> >
> > Any idea if this is available on-line anywhere?  I looked
> > around on www.fccm.org but did not see it.
> > > Macro-Based Hardware Compilation of Java
> > > Bytecodes into a Dynamic Reconfigurable
> > > Computing System.
> > > Cardoso et al. FCCM 99.
>
> I can't find it anywhere (except in the
> pre-proceedings) Joao M P Cardoso
> INESC/University of Algarve
> and Horacio C Neto
> {Joao.Cardoso, hcn}@inesc.pt
>
> --
> Steve Casselman, President
> Virtual Computer Corporation
> http://www.vcc.com



--
*******************************************
Joao Manuel Paiva Cardoso
INESC/ESDA Group      phone: +351 1 3100288
Email: Joao.Cardoso@inesc.pt




Article: 16438
Subject: R: High Speed Reconfigurability
From: "Italian Cowboy" <gmeardi@geocities.com>
Date: Fri, 21 May 1999 22:54:51 +0200
Links: << >>  << T >>  << A >>
Well, that's one of the reasons why we called our chip PoliMorph: I was
inspired by Morph, a particular JIT profiling technology that does more or
less what you described. It's not easy, though, not at all, especially when
you're talking about hardware reconfiguration.
There are a number of serious hardware hurdles that will probably prevent
FPGA-coupled processors from conquering the general-purpose market: first
and foremost, exception and interrupt handling is a *real* problem;
secondly, handling a multitasking system would present an array of
trade-offs.
But I'm pretty sure that the artificial intelligence my grandchildren will
be quite accustomed to will be able to rewire itself (what about
reconfigurable neural nets? We've always talked about them; now we could
implement them).

Guido


Roland Paterson-Jones <rpjones@hursley.ibm.com> wrote in message
374517F1.6649758F@hursley.ibm.com...
> Jonathan Feifarek wrote:
>
> > the problem is difficult
> > because there are too many degrees of freedom right now.
>
> I like to view the problem as an extension of HotSpot/JIT technologies
> in virtual machine implementation, most notably lately in Java Virtual
> Machines. What these technologies do is profile a program on-the-fly (with
> embedded profiling instructions, or through interpretation, or whatever).
> When they determine that a certain portion of a program is heavily
> exercised, then that portion is (re-)compiled with heavy optimisation.
>
> Now, why not do the same thing, but right down to the hardware, rather
> than down to machine code. What you need, however, is a general compiler
> from a high-level language (Java bytecode?) to fpga gates. According to
> the empty response to a previous posting, nobody is interested in such a
> thing.
>
> What you also need is a sense of when this would be more useful than
> merely compiling to the machine-code level. I'd be inclined to think
> that it's almost always useful, since you can parallelize as much as the
> source code will allow for each specific case, which will always be
> better than a fixed super-scalar processor architecture.
>
> Now, each thread of your program will have a different hotspot footprint,
> so when you do a context switch at the software (thread) level, you switch
> your gate array for the hardware-implemented hotspots of the new thread.
>
> I believe this approach simplifies things, and also truly unifies hardware
> and software, since the programmer is entirely unaware of what's going on
> under the hood. That's what we want in the end, isn't it - the software is
> the machine.
>
> I have dreams of a single multitasked fpga doing all of the stuff that
> each separate component of a motherboard + cards currently does (or an
> array of fpga's multi-tasking this). Cheap and fast and simple (once
> you've implemented the JIT technology!).
>
> Roland
>
>


Article: 16439
Subject: [Fwd: High Speed Reconfigurability]
From: Steven Casselman <sc@vcc.com>
Date: Fri, 21 May 1999 14:28:08 -0700
Links: << >>  << T >>  << A >>



--
Steve Casselman, President
Virtual Computer Corporation
http://www.vcc.com


Article: 16440
Subject: The Economist article: "Hardware goes soft"
From: "Jan Gray" <jsgray@acm.org.nospam>
Date: Fri, 21 May 1999 23:35:16 GMT
Links: << >>  << T >>  << A >>
"Computer chips that can rewire themselves to perform different functions
are starting to take the “hard” out of hardware"

See http://www.economist.com/editorial/freeforall/22-5-99/index_st9188.html.

Jan Gray



Article: 16441
Subject: Xilinx device readback through parallel port
From: Ivan <ticica@sympatico.ca>
Date: Sat, 22 May 1999 04:02:09 GMT
Links: << >>  << T >>  << A >>

  I was wondering if anyone here has attempted to make a device that would
perform the readback of Xilinx 4K FPGAs through a parallel port. I know
about the Xchecker cable, but it seems to be good for smaller devices only.
  Another question about Virtex readback. I haven't managed to find any
info on the site. Does anyone know if they have some documentation on
the topic and where I can get it?

  Thanks,
  Ivan Hamer

Article: 16442
Subject: Re: High Speed Reconfigurability
From: "Roland PJ Pipex Account" <rolandpj@bigfoot.com>
Date: Sat, 22 May 1999 10:18:43 +0100
Links: << >>  << T >>  << A >>

Michael Schuerig wrote in message <1ds6977.1ng0im2yreirkN@[192.168.1.2]>...
>
>I have no clue about FPGAs, but, I'm wondering, aren't the switching
>times prohibitive?
>
Watch this space;-) If there's a good argument for fast-switching hardware,
then fast switching hardware is what there'll be!
>
>Obviously, the switching circuitry takes up space on the chip. Maybe
>it's more efficient to spend this space on more processing logic.
>
It's more difficult to get static-configuration hardware to use all of its
resources than to reconfigure an exact hardware match to a problem. I'll bet
that throughput figures for super-scalar processors are well below the
theoretical maximum - with sharply diminishing returns with each additional
parallel unit.
>
>Another thing is, and here I'm indeed curious, what are the basic
>hardware building blocks you want to arrange in a thread-specific
>fashion? The finer the granularity, the costlier the reprogramming.
>
I'd personally go for the lowest possible, to allow the compiler the maximum
opportunity for optimization (including sharing etc.). This approach is
analogous to the realisation of the RISC revolution that it's better to have
a simple instruction set and let the compiler do the hard work.

Roland



Article: 16443
Subject: JTAG: Altera & Xilinx
From: Vasant Ram <nospamvasantr@utdallas.edu>
Date: 22 May 1999 09:36:03 GMT
Links: << >>  << T >>  << A >>
Hello! I've got the Xilinx FPGA/JTAG parallel port cable that comes as
part of the XC9500 demo board, and wanted to know if it was at all
possible to use this with Altera devices. If not, is there any freeware
schematic so one could make an Altera JTAG programmer? Is it possible to
program a JTAG device independent of software (MAXPLUS II vs. Foundation,
etc)?

Also, can one actually use JTAG to apply stimulus to the EPLD/observe
signals? I've always been a bit confused about that.

Thanks for the help.
Vasant

remove nospam to e-mail.
Article: 16444
Subject: Re: High Speed Reconfigurability
From: "Roland PJ Pipex Account" <rolandpj@bigfoot.com>
Date: Sat, 22 May 1999 10:47:43 +0100
Links: << >>  << T >>  << A >>

-----Original Message-----
From: Tim Tyler <tt@cryogen.com>
Newsgroups: comp.arch.fpga
Date: 21 May 1999 18:55
Subject: Re: High Speed Reconfigurability


>Roland Paterson-Jones <rpjones@hursley.ibm.com> wrote:
>
>[HotSpot...]
>
>: Now, why not do the same thing, but right down to the hardware, rather
>: than down to machine code. What you need, however, is a general
>: compiler from a high-level language (Java bytecode?) to fpga gates.
>: [...]
>
>This would probably wind up being a rather poor way of exploiting the
>power of the available hardware ;-/

In the same way that a pentium/<paste your least favourite cpu> is a poor
way of exploiting 16m(?) transistors ;-)

>Apart from its thread support, Java's pretty inherently serial.  The
>thought of attempting to extract parallelism from such a serial stream of
>instructions usually makes me feel nauseous.
>
...but this is exactly what any optimising compiler does - evaluating
dependencies is quite close to extracting parallelism - you just need to
schedule for an infinite-scalar machine, plus conditionals and both their
branches can be evaluated simultaneously, plus loops can be unrolled to the
limit of the hardware... The HANDEL-C examples, where the explicit 'par'
keyword is used to parallelise loops, didn't look too hard to auto-detect. I
imagine any detection in C code is going to be more difficult than that for
Java (due to C's dirty semantics).

I'm not too interested in extracting 100% from an architecture (the answer
to this is handcoding to the lowest level). What I'm interested in is the
fastest way to run software that is easily written. I'm convinced that the
compiler needs to do the hard work, not the programmer. This becomes crucial
as the scale of a program grows.


>Java's ability to exploit parallelism (aside from threading code)
>really isn't very good.  [...] Writing a smart compiler to examine loops to
>see if this sort of thing is occurring seems to me to be a backwards
>approach: a /sensible/ concurrent language should allow this type of
>parallelism to be explicitly indicated, rather than expecting advanced AI
>techniques to extract the information about when it can and can't be done
>from the code.

Good point. Are you suggesting a HANDEL-C-like approach? I'd like to be
convinced that a compiler is really unable to detect these things. Do you
have an example that would demonstrate this to me?

(on the other hand, what if you're wrong, and it's not parallelisable?
Should the compiler verify all such parallelisable assertions? This is
likely to be as difficult as auto-detecting them in the first place. If this
stuff is difficult for a compiler to detect, then it's sure to be difficult
for a human to decide for all but the smallest examples.)

Regards
Roland


Article: 16445
Subject: Re: How synthesize tools concern with size of the design?
From: s_clubb@NOSPAMnetcomuk.co.uk (Stuart Clubb)
Date: Sat, 22 May 1999 11:27:37 GMT
Links: << >>  << T >>  << A >>
On Fri, 21 May 1999 17:55:57 +0100, David Pashley
<David@edasource.com> wrote:

<snip>

>However, I was trying to substantiate a point (that while you might
>debate FPGA Express, DC is *clearly* competitive), not promote Synopsys.

DC is "competitive" because:
Effectively first in market
Huge installed base
Library support
Numerous "add-ons"

DC does not typically compete on the conventional criteria of the FPGA
market, such as QoR, speed, ease of use, and price. This is to some
extent because the cost dynamics of the market are very different.

>What I'm trying to work out is why the criticisms levelled at FPGA
>Express in terms of language don't also affect DC (or maybe they do?).

They do, but having fantastic language support is bugger all use when
the ASIC vendor says "we sign off and only support DC". The amount of
scripting, code changing, and general dicking about that most DC users
seem to accept in an ASIC flow is indicative that some improvement may
be welcome. By being effectively an "Industry Standard" in a very
significant market, Synopsys can (to some extent) do what it likes in
terms of language support. This is not the case with FPGA synthesis.

IMHO, Express exists as an OEM purely to queer the pitch for other
FPGA synthesis vendors who either are trying, or (as the FPGA/ASIC line
blurs even more) will try, to "muscle in" on what is considered Synopsys
turf. It's certainly a strange business model for a high-value,
high-touch EDA company like Synopsys.

Closer examination and interpolation of Synopsys financial figures
from '98 reveal that their revenue stream is probably now close to
45% maintenance. Synthesis and associated products account for
45-50% of total revenue, so the SEAT sales of synthesis AND "design
creation" products might soon account for less than a quarter of
revenue. Take a look at the Synopsys line card and there are quite a
few products that could be termed "design creation". With a seat of DC
costing some $100K, the actual number of DC seats being sold might not
be as impressive as the headline revenue figure suggests.

I heard a rumour that Synopsys sales people are no longer remunerated
on DC sales as their job is to "sell-up" on the back of the headline
product. Could be just a rumour though...

Cheers
Stuart
An employee of Saros Technology:
Model Technology, Exemplar Logic, TransEDA, Renoir.
www.saros.co.uk
Article: 16446
Subject: Re: Assigning pad type in Xilinx Virtex FPGA
From: s_clubb@NOSPAMnetcomuk.co.uk (Stuart Clubb)
Date: Sat, 22 May 1999 11:55:08 GMT
Links: << >>  << T >>  << A >>
On Wed, 19 May 1999 16:51:04 -0500, Tom McLaughlin
<tomm@arl.wustl.edu> wrote:

>I've looked at all of the manuals and finally decided to ask the list.
>I want to assign the pads for my XCV1000 devices and assign I/O
>standards to those pads such as LVTTL, LVCMOS, drive strength and such.
>I am using Leonardo and cannot find an attribute to do this.  I can find
>attributes to assign slew rate and such, but not what kind of pad it
>is!!!

Tom,

The GUI is probably staring you in the face :-)

Load the library for Virtex and then load your design by either
reading, or analysing and elaborating.

When you read/elaborate your design you will find that the constraint
tab extracts your clocks, inputs, outputs, bidirectionals, signals,
modules, paths, etc.

The clock "power-tab" will enable explicit buffering with a BUFGP,
while the input and output tabs will allow you to assign, through the
BUFFER pull-down menu, all the various types of Virtex I/O. You can
assign to busses, or individual pins. The constraint file or script
command is quite simply "PAD", so to pad a single signal "enable" in the
"work" library of an entity called "counter" with an "rtl"
architecture, you would write:

PAD IBUF_SSTL2_II .work.counter.rtl.enable

For busses, you can use wild cards and bus selection in the form:

PAD OBUF_HSTL_III .work.counter.rtl.q(*)

or

PAD OBUF_HSTL_III .work.counter.rtl.q(7:4)
PAD OBUF_HSTL_I .work.counter.rtl.q(3:0)

etc.

Cheers
Stuart
For Email remove "NOSPAM" from the address
Article: 16447
Subject: IOB tristate register in Xilinx XLA devices
From: "Edward Moore" <edmoore@edmoore.demon.co.uk>
Date: Sat, 22 May 1999 13:43:48 +0100
Links: << >>  << T >>  << A >>
Has anyone successfully used the IOB tristate registers in a 4000-XLA
device?

I am trying to decrease the clock-to-tristate time of some outputs connected
to ZBT RAMs. I've read the relevant Xilinx app-note on how to use the IOB
.nmc hard-macro, and I can't see any sensible way of doing a front-end
simulation of the Xilinx device and the RAMs, since the IOB macro doesn't
have a PAD pin.

It seems I would need one design for simulation, using a model of
the IOB register connected to the RAMs, and another design for P&R which
doesn't have the I/O pins. Is this correct?

I was hoping for Unified Library support for the IOB tristate flip-flop, but
Xilinx seems to be moving away from using IOB library primitives (they don't
seem to be used for Virtex). So has anyone heard when the P&R software
will be able to utilize the tristate registers automatically?

Edward Moore

PS: you can use a -XL bitstream in a -XLA device, and surprisingly
vice-versa.


Article: 16448
Subject: Re: JTAG: Altera & Xilinx
From: Dave D'Aurelio <daurelio@capture.kodak.com>
Date: Sat, 22 May 1999 11:40:19 -0400
Links: << >>  << T >>  << A >>
Vasant,

Attached is a link to ByteBlaster MV datasheet on the Altera website.

http://www.altera.com/document/ds/dsbytemv.pdf

It contains the schematic for the ByteBlaster cable (I would suggest the MV
version, rather than the standard ByteBlaster, as it supports the 3.3V parts
as well as the 5V ones).

To answer your question on software-independent programming, check out
Altera's JAM initiative ...

http://www.altera.com/html/mktg/isp-jam.html

Hope this helps,

Dave

Vasant Ram wrote:

> Hello! I've got the Xilinx FPGA/JTAG parallel port cable that comes as
> part of the XC9500 demo board, and wanted to know if it was at all
> possible to use this with Altera devices. If not, is there any freeware
> schematic so one could make an Altera JTAG programmer? Is it possible to
> program a JTAG device independent of software (MAXPLUS II vs. Foundation,
> etc)?
>
> Also, can one actually use JTAG to apply stimulus to the EPLD/observe
> signals? I've always been a bit confused about that.
>
> Thanks for the help.
> Vasant
>
> remove nospam to e-mail.

--
"The Jerry Springer show is positive proof that there is insufficient
Chlorine in the gene pool"

- Dave D'Aurelio, 1/99


Article: 16449
Subject: Re: High Speed Reconfigurability
From: brian_n_miller@yahoo.com
Date: Sun, 23 May 1999 01:43:28 GMT
Links: << >>  << T >>  << A >>
sc@vcc.com wrote:
>
> When ... you have a tera gate system with giga bytes
> of memory diffused through out, ... it will be too hard
> to design a CPU make it testable and get it into production.
> It will take 20 yr just to do the test vectors.

Between symmetric multiprocessing and distributed computing,
we shouldn't need great processors to make great systems,
if the network is to be the computer.  Everything I hear about
scalability regards more CPUs, not fancier ones.  IIRC, even
Seymour Cray finally chose x86s.

> Everything will be reconfigurable and advances in computing
> will come at the reconfigurable cell/architecture level.

And the motivation for such a paradigm shift would be?  Speed?
It certainly can't be for testability.  If anything, a
reconfigurable machine is going to be the hardest to test.
You mentioned test vectors.  Doesn't a reconfigurable machine
require more test patterns than a fixed machine of identical
transistor count?


--== Sent via Deja.com http://www.deja.com/ ==--
---Share what you know. Learn what you don't.---

