Hello Michael,

All simulation library files necessary for an Altera FPGA functional/post-fitting simulation are installed automatically with the Quartus II software. The simulation library files, in Verilog and in VHDL, are located in the <Quartus II Installation directory>/eda/sim_lib directory.

For more information on Altera functional library files, please refer to
http://www.altera.com/support/software/nativelink/quartus2/eda_ref_presynth_lib.html

For more information on Altera post-fitting library files, please refer to
http://www.altera.com/support/software/nativelink/synthesis/dc/eda_ref_dc_postsynth_lib.html

When performing post-fitting simulations, the netlist is generated in the <Quartus II project directory>/simulation/<tool name> directory. You can create your simulation libraries in this directory.

I'm not sure which simulation tool you are using, but we do have more detailed information in our Quartus II handbook chapters:

Mentor Graphics ModelSim Support
http://www.altera.com/literature/hb/qts/qts_qii53001.pdf

Synopsys VCS Support
http://www.altera.com/literature/hb/qts/qts_qii53002.pdf

Cadence NC-Sim Support
http://www.altera.com/literature/hb/qts/qts_qii53003.pdf

I hope this helps,

Albert Chang
Altera Corporation
Senior Applications Engineer

Michael Laajanen wrote:
> HI,
>
> I am very new to Quartus (5.1 Solaris) and looking for simulation
> libraries for both Verilog and VHDL; VHDL seems to be found in the
> install tree, but Verilog?
>
> When I used Xilinx I always started by compiling the simulation libraries
> once for all into the installation tree for each simulator to use. How
> is this done in Quartus?
>
> cheers
>
> Michael

Article: 96801

"why_don't_you_listen?" <test@test.com> wrote in message news:ee97e37.21@webx.sUN8CHnE...
> Thomas gave the answer to the problem. David second it. Why not just
> adding two flops at sel and add? This will solve the problem.

Well, I think David gave the answer, and he didn't suggest 'just adding two flops on sel and add'. He correctly pointed out that this won't work, and you need a separate strobe synchronised by flops.

HTH, Syms.
Article: 96802

Guys,

This is a long story. The project doesn't only include the EPLD, but also many other expensive components. Unfortunately the change would affect a lot of boards already sold. To change the PCB is certainly possible and I believe that this is the best solution, but it is not acceptable for the marketing. Someone has advised me to look at the Lattice or Actel products. I will look there also. The solution proposed by Rene would be a true folly, also for a prototype.

Thanks
Article: 96803

fpga_toys@yahoo.com wrote:
> rickman wrote:
> > If your VOIP started dropping packets so that your phone calls were
> > garbled and the provider said, "of course, we had a hot day, do you
> > expect to see the same speeds all the time?", would you find that
> > acceptable?
>
> IT HAPPENS!!! Reality Check ... IT HAPPENS EVERY DAY.

Packets are dropped, but for congestion reasons, not because the air handling in the switch room set the temperature up a few degrees.

-a
Article: 96804

Isaac Bosompem wrote:
> Hi guys, I've been reading through the Spartan3 architecture embedded
> multipliers app note and I can't seem to find out how long (in terms of
> clock cycles) the sync multipliers in the Spartan3 will take. Can I
> safely assume that after I have asserted the inputs to the module, I
> will get the output back in the following clock cycle?

I'm assuming that you're reading XAPP467. Is this correct?
http://direct.xilinx.com/bvdocs/appnotes/xapp467.pdf

In Spartan-3, there is a single pipeline stage option between the multiplier inputs and outputs. Essentially, the stage is after calculating the partial sums and before presenting the product output.

In Spartan-3E, you have the additional option for a single- or double-stage pipeline. In Spartan-3E, the pipeline stage is either at the inputs or at the product outputs, or both.

---------------------------------
Steven K. Knapp
Applications Manager, Xilinx Inc.
General Products Division
Spartan-3/-3E FPGAs
http://www.xilinx.com/spartan3e
---------------------------------
The Spartan(tm)-3 Generation: The World's Lowest-Cost FPGAs.
Article: 96805

Jim Granville wrote:
> rickman wrote:
> > But the real issue is what do you do with the excess speed of the async
> > design at room temp, etc? Your design has to meet specific goals over
> > all variables of temp, voltage and process.
>
> You are too focused on the MHz - forget the MHz for a moment,
> and look at the pJ and uV/m.
> Many, many designers would be very happy to get those gains,
> and still be in the same MHz ballpark.

We are still not communicating. You are assuming that the numbers from the vendor are a valid comparison. They don't say diddly about what software was running and what was done to save power in the sync clocked version. I don't accept that the chip is unique in its power savings. I consider this to be just one way to save that level of power, and likely one of the more difficult ways.

> The High end CPU you mentioned, quotes 15,000 gated clock elements.
> At that count, it has to asymptote to async performance anyway,
> and it becomes a semantics exercise what you call a device
> with that many gated/local/granular clocks...

But it *is* a sync clocked design. My point is merely that an async design does nothing that I can't do in a good sync clocked design if power savings is my goal. Further, I expect that it is easier to do a sync design because of the experience we have with it and the way it easily maps to the system level requirements.

Consider testing. I have to test my design against system level requirements. In a sync design, this just means you test a range of code which may not even be the code the unit is shipped with. This testing can be done at any temperature, any voltage and any device. Verification that the devices work at the clock speed is done separately. System testing just has to verify that the clock speed is fast enough.

With an async processor you need to test your full code for all paths under worst case conditions (including process) to verify that you meet the system timing requirements. How the heck do you do that??? With FPGAs you use static timing analysis to verify that you meet timing. Do they have static timing analysis for software?
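For what it's worth, a minimal Verilog sketch (not from the thread; the module and signal names are made up) of the usual synchronous power-saving idiom being referred to here: a clock enable derived from a "work to do" qualifier, which synthesis will typically map onto the flip-flop CE pins so that idle registers simply stop toggling.

module ce_regbank (
    input  wire        clk,
    input  wire        data_valid,  // hypothetical "work to do" qualifier
    input  wire [31:0] data_in,
    output reg  [31:0] data_out
);
    // The register only captures new data when data_valid is high; when
    // it is low, the flip-flops hold their state and the logic they feed
    // sees no new transitions, so data-path dynamic power drops.
    always @(posedge clk)
        if (data_valid)
            data_out <= data_in;
endmodule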
Article: 96806

Sky wrote:
> Guys,
> This is a long story. The project doesn't only include the EPLD, but also
> many other expensive components.
> Unfortunately the change would affect a lot of boards already sold.
> To change the PCB is certainly possible and I believe that this is the best
> solution, but it is not acceptable for the marketing.
> Someone has advised me to look at the Lattice or Actel products. I will look
> there also.
> The solution proposed by Rene would be a true folly, also for a prototype.

Unfortunately some of the pins have a fixed assignment; they are usually the programming pins TCK, TMS, TDI, TDO and CLK, plus the power pins. At least Altera has no system for assigning these pins across the families. Meaning in most cases the footprint is not upgradeable.

And since the manufacturers do not talk to each other in the interest of the customers, you cannot expect a competitor's product to fit into the footprint as far as the fixed pins are concerned.

Rene
--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Article: 96807

Andy Peters wrote:
> fpga_toys@yahoo.com wrote:
> > rickman wrote:
> > > If your VOIP started dropping packets so that your phone calls were
> > > garbled and the provider said, "of course, we had a hot day, do you
> > > expect to see the same speeds all the time?", would you find that
> > > acceptable?
> >
> > IT HAPPENS!!! Reality Check ... IT HAPPENS EVERY DAY.
>
> Packets are dropped, but for congestion reasons, not because the air
> handling in the switch room set the temperature up a few degrees.

It happens when the router is unable to keep up with the traffic, aka congestion. Doesn't really matter why it's slower than the arrival rate. If async can deliver a faster packet processing rate when the environmentals are better, then the congestion point is moved higher for those periods, unlike being locked to worst case performance.
Article: 96808

Hi,

I'm trying to enable SMP for a Virtex-II Pro board. The PPC doesn't support cache coherency, but that won't be an issue for the shared memory design I'm going to do.

Anyhow, I keep getting error messages for undefined references such as __save_cpu_setup(), hash_table_lock, etc. I've tried this in kernel 2.4 and 2.5, which I downloaded from MontaVista (for ML300 support). I also tried 2.6.15.4 from kernel.org, but I couldn't find the SMP option. Also, it seems that both 2.5 and 2.6.15.4 removed Xilinx SysACE support.

Does anyone know of a kernel source that works for Virtex-II Pro with SMP enabled? Is there another way to get around this?

Thanks.

-Eric
Article: 96809

rickman wrote:
> We are still not communicating.

you got that right.

> My point is merely that an async
> design does nothing that I can't do in a good sync clocked design if
> power savings is my goal.

Wrong. In EVERY case, for each instance a chip is used, you need to design/test for worst case parameters, and hope they do not change over the chip's life due to migration and other effects, to run the chip against the wall with sync designs ... you have to leave margin, which varies chip by chip. With async, that problem simply is not a problem.

Again, just like the point about designing to be glitch free for power, you again have this wrong because you lack the understanding of current async design methodologies, and continue to argue based on that lack of understanding.

> With an async processor you need to test your full code for all paths
> under worst case conditions (including process) to verify that you meet
> the system timing requirements.

Wrong, this is not necessary. The problem is addressed at design time by using verifiably correct logic constructions to build the logic, which are safe and hazard free by design. After that, it doesn't matter what process or environmental variations may impact the device.

Again, you completely lack the understanding of async design, and do not even understand what you are claiming is false.

Learn about async design before you just assume more false positions, and continue to argue from baseless positions.
Article: 96810

On 9 Feb 2006 21:24:28 -0800, "PeterC" <peter@geckoaudio.com> wrote:

>As far as Peter's comments - I simply don't know exactly what the
>jitter spec and freq resolution should be - it all depends on other
>parts of the system which are being simultaneously designed. It comes
>down to a certain amount of experimentation to see how the audio DAC
>output spectrum will behave with jittery clocks.

1.2ns of jitter would be good for about 14 bits accuracy on a 20kHz signal, assuming a traditional multi-bit DAC. Oversampling, noise-shaping DACs ("1-bit" outputs) tend to be less tolerant of jitter as it affects the entire audio spectrum instead of mainly high frequency signals.

- Brian.
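For anyone who wants to reproduce that estimate, the usual aperture-jitter limit for a full-scale sine wave is (my working, and the rms assumption below is mine, not Brian's):

    SNR_jitter = -20 * log10(2 * pi * f_sig * t_jitter_rms)   [dB]

Taking f_sig = 20 kHz and treating the 1.2 ns as an rms figure gives roughly 76 dB, about 12-13 effective bits; if the 1.2 ns is instead a peak or peak-to-peak specification, the equivalent rms jitter is several times smaller and the result moves up toward 14-15 bits, which brackets the figure quoted above.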
Article: 96811

rickman wrote:
> My point is merely that an async
> design does nothing that I can't do in a good sync clocked design if
> power savings is my goal.

Now ... the point is that the clock net will consume considerable power and is pure overhead. On FPGAs the clock net is typically responsible for 20-40% of total device power. With fast processors, 30% of the power is in the clock distribution alone:

http://www.tkt.cs.tut.fi/kurssit/8404941/S04/chapter5.pdf

This causes huge die heating, cooling, and thermal gradient problems, and large clocked designs start out with this huge overhead in power ... that frankly pretty much just goes away with async. This is not a new problem, it's been the core problem for about five years now. And it is getting very well understood. Consider:

http://www.mwrf.com/Articles/Print.cfm?ArticleID=7278

where the author makes this point:

"Circuits with very high power consumption exhibit significant thermal gradients across the chip. This is problematic because conventional timing analysis assumes a single temperature for the entire device, even though it is well known that timing is temperature dependent. The usual response is simply to minimize overall chip temperature through the use of more sophisticated, and expensive, packaging."

The application logic consumes power independent of the clock nets, and is where the real work is done. Async adds additional logic but few transitions, which may consume a very small static current (compared to the total clock net power), to completely remove the clock net and associated power. The power win for async designs is when the extra power for the ack transitions is less than the clock net it replaces. Since the ack routing is generally very short, it's frequently considerably lower power than the globally routed clocks. For async ASIC designs, that frees up the clock net metalization and buffers to use for the application logic, which offsets the additional logic and routing to help balance costs, or improve them.

So the point is that the application logic power costs are fixed, and driven by the design. The clock net overhead is just that, and using a careful async design the clock net power can be removed and replaced with async logic that has lower power costs. The benefits are that the clock skew in the designs no longer limits performance, nor does worst case environmentals, which are growing worse due to unbalanced heating of the die. Async avoids these problems by design, at modest costs which are offset by removing the clock net resources.
Article: 96812

fpga_toys@yahoo.com wrote:
> rickman wrote:
> > We are still not communicating.
>
> you got that right.
>
> > My point is merely that an async
> > design does nothing that I can't do in a good sync clocked design if
> > power savings is my goal.
>
> Wrong. In EVERY case, for each instance a chip is used, you need to
> design/test for worst case parameters, and hope they do not change
> over the chip's life due to migration and other effects, to run the
> chip against the wall with sync designs ... you have to leave margin,
> which varies chip by chip. With async, that problem simply is not a problem.
>
> Again, just like the point about designing to be glitch free for power,
> you again have this wrong because you lack the understanding of current
> async design methodologies, and continue to argue based on that lack
> of understanding.
>
> > With an async processor you need to test your full code for all paths
> > under worst case conditions (including process) to verify that you meet
> > the system timing requirements.
>
> Wrong, this is not necessary. The problem is addressed at design time
> by using verifiably correct logic constructions to build the logic,
> which are safe and hazard free by design. After that, it doesn't matter
> what process or environmental variations may impact the device.
>
> Again, you completely lack the understanding of async design, and do
> not even understand what you are claiming is false.
>
> Learn about async design before you just assume more false positions,
> and continue to argue from baseless positions.

I was going to ignore your posts because you seem to insist on being rude. But I will say this before dropping this discussion with you.

I am not talking about testing the chip as you indicate above. I am talking about the system level design issues. When I write code on a sync clocked CPU, the execution time is deterministic, even if it is too complex for me to analyze 100%. I will see the same results all the time. So the chip must be tested over temperature, etc., but I only need to test my software at room temperature.

In the async clocked CPU, the system level timing varies with all the things I can't control. So I have no way to test my system to be sure it will work, except to test at room temp and then derate for the three big factors: temp, voltage and process. Then we are back to where we started, but now we have to leave slack in the hardware between the clock and data path and we also have to leave slack again at the system level.

This has nothing to do with how you design or build the CPU chip. This is an artifact of a non-deterministically timed CPU. In the end there are very few apps where an async clocked processor has any real benefit.

So please don't be rude and tell me I don't understand the design when you don't understand my statements.
Article: 96813

Steve Knapp (Xilinx Spartan-3 Generation FPGAs) wrote:
> Isaac Bosompem wrote:
> > Hi guys, I've been reading through the Spartan3 architecture embedded
> > multipliers app note and I can't seem to find out how long (in terms of
> > clock cycles) the sync multipliers in the Spartan3 will take. Can I
> > safely assume that after I have asserted the inputs to the module, I
> > will get the output back in the following clock cycle?
>
> I'm assuming that you're reading XAPP467. Is this correct?
> http://direct.xilinx.com/bvdocs/appnotes/xapp467.pdf
>
> In Spartan-3, there is a single pipeline stage option between the
> multiplier inputs and outputs. Essentially, the stage is after
> calculating the partial sums and before presenting the product output.
>
> In Spartan-3E, you have the additional option for a single- or
> double-stage pipeline. In Spartan-3E, the pipeline stage is either at
> the inputs or at the product outputs, or both.
>
> ---------------------------------
> Steven K. Knapp
> Applications Manager, Xilinx Inc.
> General Products Division
> Spartan-3/-3E FPGAs
> http://www.xilinx.com/spartan3e
> ---------------------------------
> The Spartan(tm)-3 Generation: The World's Lowest-Cost FPGAs.

Yes, that is the app note I have read. So the unit is pipelined. Alright, so I should account for it by allowing 2 cycles? (1 for the data to propagate through the partial sum muxes and another for the adders to obtain the product and present the output?)
Article: 96814

rickman wrote:
> So please don't be rude and tell me I don't understand the design when
> you don't understand my statements.

I rather feel the same way, when you strongly dismiss points with the assertion that you know best, and everyone else has to be wrong.
Article: 96815

Note the word "option". You can run pipelined or not pipelined. Obviously the clock rate must be lower in the non-pipelined mode. But there you get the result out with just a combinatorial delay.

Peter Alfke
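To put that in RTL terms, here is a minimal Verilog sketch (an illustration, assuming the multiplier is inferred by the synthesis tool rather than instantiated as a primitive) of the single-pipeline-stage case Isaac asked about: operands applied at one clock edge, product registered and valid at the next.

module pipelined_mult (
    input  wire               clk,
    input  wire signed [17:0] a,
    input  wire signed [17:0] b,
    output reg  signed [35:0] p
);
    // One register on the product: p is valid one clock after a and b are
    // presented.  Synthesis tools will typically absorb this register into
    // the block multiplier's optional pipeline stage (the MULT18X18S
    // primitive on Spartan-3).  Remove the register and you get the
    // non-pipelined, purely combinatorial-delay case described above.
    always @(posedge clk)
        p <= a * b;
endmodule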
Article: 96816

We are considering a change to the IO standard used for the QDR-II interface (1.5V HSTL Class I instead of 1.8V HSTL Class I). Xilinx has not created any demo boards that use the 1.5V interfaces, but they claim that it should work fine. Have any of you completed a Xilinx design that uses the 1.5V interfaces (for QDR-II) or know of a successful development?
Article: 96817

On Fri, 10 Feb 2006 14:23:39 +0100, "Sky" <dev2-renato_noSpam@usa.net> wrote:

>In a project I use the Altera EPM3256ATC144-10.
>Now I have the necessity to make some changes to the project, but I don't
>have enough macrocells in the actual devices.
>Altera doesn't have a pin-to-pin compatible EPLD with the EPM3256ATC144-10
>but with more macrocells (about +40%).
>What of you knows a devices that could resolve my problem? Unfortunately I
>cannot modify the PCB, but I could replace the Altera EPLD with any other
>CPLD.
>Thanks

There are companies that make transition boards (including custom ones) that have a footprint on the bottom (and pins) matching an existing board layout, and on the top a new footprint or even room for multiple chips. This would allow you to keep existing boards, and change over to pretty much anything on the top side of the board (CPLD, FPGA, other vendors, ...)

Of course, you need the vertical clearance for this type of desperate alternative, and maybe horizontal clearance too.

http://www.arieselec.com/products/correct.htm

Plus, they will do custom designs.

Philip

Philip Freidin
Fliptronics
Article: 96818

rickman wrote:
> Jim Granville wrote:
>
>> rickman wrote:
>>
>>> But the real issue is what do you do with the excess speed of the async
>>> design at room temp, etc? Your design has to meet specific goals over
>>> all variables of temp, voltage and process.
>>
>> You are too focused on the MHz - forget the MHz for a moment,
>> and look at the pJ and uV/m.
>> Many, many designers would be very happy to get those gains,
>> and still be in the same MHz ballpark.
>
> We are still not communicating. You are assuming that the numbers from
> the vendor are a valid comparison.

Shouldn't the engineer in you say 'show me the silicon?', rather than sweeping dismissal of all things async, including published info.

Of course, it is their own comparison, but normally such comparisons try to talk up what are actually small differences [just look at Xilinx vs Altera marketing noise ...]. The data they show, for EMC and pJ/Opcode, is orders of magnitude stuff.

> They don't say diddly about what
> software was running and what was done to save power in the sync
> clocked version. I don't accept that the chip is unique in its power
> savings.

or its EMC improvement?

> I consider this to be just one way to save that level of
> power, and likely one of the more difficult ways.

Strange, then, that you believed the other device's specs (also pre-silicon) with the 15,000 gated clocks, straight off?

Who claimed this was easy? It's what they must have done in the tools area that impresses me as much as the (claimed) silicon results.

-jg
Article: 96819

I see a link to my web site referenced here concerning logic glitches:
http://www.interfacebus.com/Design_Logic_Timing_Hazards.html

However, because the post is so large, I need to look at it tomorrow for a proper reply. I do see many references to power dissipation. Power dissipation is not an issue, as one gate with a transient dissipates very little power. An FPGA design always has a 'near-by' flip flop. I need to read the full post.

The main site is: http://www.interfacebus.com/
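To make the "near-by flip flop" point concrete, here is a minimal Verilog sketch (an illustration only, not code from the linked page): the combinational decoder below can produce brief hazards while its select input settles, but because its output is registered, downstream logic only ever sees the value that was stable at the clock edge, and the transient costs only a little local switching power.

module glitch_contained_decode (
    input  wire       clk,
    input  wire [2:0] sel,
    output reg  [7:0] strobe
);
    // Combinational one-hot decode: may glitch briefly while sel changes.
    wire [7:0] decode = 8'b1 << sel;

    // Output register: any decode hazard is contained here and never
    // propagates to the logic that consumes 'strobe'.
    always @(posedge clk)
        strobe <= decode;
endmodule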
Article: 96820

Jim Granville wrote:
> Shouldn't the engineer in you say 'show me the silicon?', rather than
> sweeping dismissal of all things async, including published info.
>
> Of course, it is their own comparison, but normally such comparisons
> try to talk up what are actually small differences [just look at Xilinx
> vs Altera marketing noise ...].
> The data they show, for EMC and pJ/Opcode, is orders of magnitude stuff.

I did not see ANY data that was "orders of magnitude". I saw that they were about three times less power at an equivalent speed. I'm not saying that the technology can't save power. I am saying that you don't have to toss out the baby with the bath water. They are comparing a power optimized design to a non-power optimized one. We also know nothing about the program they used, which may favor the async processor because it does not try to save power in the sync processor. Hey, if there is real data out there showing me how this works and that it is clearly better, fine. I'm just saying this is not that sort of data.

> or its EMC improvement?

The EMC is significant, but again, is it being compared to an EMC optimized sync processor... no. I have seen standard clocked designs that were optimized for EMC.

> Strange, then, that you believed the other device's specs
> (also pre-silicon) with the 15,000 gated clocks, straight off?

I'm not doubting the data, I'm doubting the comparison. Do you see the difference?

> Who claimed this was easy? It's what they must have done
> in the tools area that impresses me as much as the
> (claimed) silicon results.

Yes, I am sure it was a lot of work and that is part of my concern with it. But as long as it is *their* work, if they start making chips that solve my system problems better than other chips, then I'll use them. But this chip actually runs at a slower max speed in the same process. Did you notice that? The clocked processor runs up to 100 MHz, IIRC, while the async processor was only 77 MHz at room temp!
Article: 96821

Interfacebus.Engineer@gmail.com wrote:
> I see a link to my web site referenced here concerning logic glitches:
> http://www.interfacebus.com/Design_Logic_Timing_Hazards.html

Nice job presenting the material on that page :)
Article: 96823

Add an Ethernet MAC to your FPGA to enable control from a PC. Simple & easy. No driver programming. Direct control of FPGA state machines, registers & memories. No TCP/IP/UDP needed. Support multiple audio & video streams. Low gate count & low cost, no royalties.

www.chipenet.com
Article: 96824

In article <yS5Hf.4488$9G6.474@tornado.fastwebnet.it>, dev2-renato_noSpam@usa.net says...
> The solution proposed by Rene would be a true folly, also for a prototype.
> Thanks

How about an adapter PCB? Put a "bigger" part on the board and mount the board in place of the old part. Kind of like those adapter boards that let you adapt an SMT part to a DIP socket or other footprints.
Sky wrote:
> Guys,
> This is a long story. The project doesn't only include the EPLD, but also
> many other expensive components.
> Unfortunately the change would affect a lot of boards already sold.
> To change the PCB is certainly possible and I believe that this is the best
> solution, but it is not acceptable for the marketing.

That's fine, they always ask for that :)

What you do then is what Philip suggests: create a carrier PCB that underneath/on the edges looks like a TQFP144, and on the top has whatever package/device/PSU fits. Maybe a BGA MachXO or MAX II, if the IO voltages will allow.

Then, give them the price for that option. Nothing like some $$ to sharpen their focus :)

-jg