
Messages from 161675

Article: 161675
Subject: Re: PipelineC - C-like almost hardware description language - AWS F1 Example
From: Julian Kemmerer <absurdfatalism@gmail.com>
Date: Mon, 23 Mar 2020 13:15:29 -0700 (PDT)
On Sunday, March 22, 2020 at 6:43:31 AM UTC-4, Tom Gardner wrote:
> On 22/03/20 01:15, Julian Kemmerer wrote:
> > Hi folks,
> > Here to talk about PipelineC.
> 
> With anything like this you have 30s to convince me
> to spend some of my remaining life looking at it rather
> than something else. Hence I want to see:
>   - what benefit would it give me, and how
>   - what won't it do for me (it isn't a panacea)
>   - what do I have to do to use it (scope of work)
>   - what don't I have to do if I use it (I'm lazy)
>   - how it fits into the well-documented toolchains
>     that many people use (since it doesn't do everything)
> 
> If I see the negatives, I'm more likely to believe
> the claimed positives.

Giving it a quick go:

what benefit would it give me, and how:
Feels like RTL when doing clock-by-clock logic, and can auto-pipeline logic otherwise.

what won't it do for me (it isn't a panacea):
Not a full RTL replacement yet. Would love help to get it there.

what do I have to do to use it (scope of work):
Write C-looking code; the tool generates VHDL that can be dropped into any existing project. Mostly a matter of time to run the tool in addition to already long builds.

what don't I have to do if I use it (I'm lazy):
Don't have to manually pipeline all your logic for specific devices / operating frequencies. Can share 'cross-platform' code.

how it fits into the well-documented toolchains:
Outputs VHDL. And C-looking code can be used with gcc for debug/modeling.

Thanks eh!

Article: 161676
Subject: Use example of Intel University program in Intel Quartus - problem with Board support package?
From: Bliad Bors <nikkischulz6@gmail.com>
Date: Tue, 24 Mar 2020 09:19:08 -0700 (PDT)


I want to use an example from the Intel FPGA Monitor Program 18.1 and use it in Quartus 18.1. It is the video example, which creates a blue box on the HDMI output and writes a little string with white letters on top of it.

I want to use it in the Intel Quartus environment, do some test outputs on my screen, and finally add some more hardware to the Avalon system. Unfortunately it doesn't work for me as I thought xD:

short file overview:
ftp://ftp.intel.com/Pub/fpgaup/pub/Intel_Material/18.1/Computer_Systems/DE10-Nano/DE10-Nano_Computer_NiosII.pdf

Project File: DE10_Nano_Computer.qpf

QSYS Konfiguration File: Computer_System.sopcinfo

SRAM File : DE10_Nano_Computer.sof

NIOSII Main: video.c

NIOSII library: address_map_nios2.h

Project includes:

DE10-Nano_Computer_NiosII.pdf

I/O Peripheral              | Qsys Core
On-chip memory              | On-Chip Memory
Character buffer            | Character Buffer for Video Display
SD Card                     | SD Card Interface
Red LED parallel port       | Parallel Port
Expansion parallel ports    | Parallel Port
Slider switch parallel port | Parallel Port
Pushbutton parallel port    | Parallel Port
JTAG port                   | JTAG UART
Interval timer              | Interval Timer
System ID                   | System ID Peripheral
Audio port                  | Audio
Video port                  | Pixel Buffer DMA Controller

Test 1: Open FPGA Monitor Program 18.1 - create new project - select video example - sof is downloaded to FPGA - compile & load video.c. Result: works, HDMI shows test string

Test 2: download .sof to FPGA - Eclipse for Nios - new project simple hello world with bsp - work with .sof - put video.c and address_map_nios2.h into project - use video.c as main. Result: works, HDMI shows test string

Test 3: do the same as Test 2. Result: random pixels in the first ~20 lines

Test 4: reinstall FPGA Monitor Program 18.1, do the same as Test 2. Result: works, HDMI shows test string

Test 5: do the same as Test 2, doesn't work; do the same as Test 4. Result: random pixels in the first ~20 lines

Test 6: copy .elf from my FPGA Monitor Program 18.1 software directory into the project folder, run this elf. Result: works, HDMI shows test string

Test 7: change something in the video.c of Test 6. Result: works, HDMI shows test string but without the blue box!

Test 8: do the same as Test 2. Result: random pixels in the first ~20 lines

Test 9: check run configurations: select all combinations of processor and byte stream devices. Result: random pixels in the first ~20 lines

Test 10: switch to FPGA Monitor Program 18.1, compile & load video.c. Result: works, HDMI shows test string

Check: description in https://home.isr.uc.pt/~jfilipe/files/Final_Project_Simplified_Tutorial.pdf (they do nearly the same...)

Check: book EMBEDDED SOPC DESIGN WITH NIOS II PROCESSOR AND VERILOG EXAMPLES. They say: the BSP Editor will get the sopcinfo file and support you with your access to the hardware, without configuring much.

Check: Intel BSP documents. They say: the BSP Editor will get the sopcinfo file and support you with your access to the hardware, without configuring much.

Check: Intel "The Nios® II Processor: Hardware Abstraction Layer" on YouTube: https://www.youtube.com/watch?v=HF7Low_sUig

I suppose that something is wrong either with my selected sopcinfo or with the BSP. Maybe you can give me some advice; tell me if you need more information! :) Thank you :D

Here are some screenshots of my development environment:

https://de.scribd.com/document/452954331/Altera-Nios-II-BSP-Summary

https://de.scribd.com/document/452954367/Question-1

Article: 161677
Subject: Re: Use example of Intel University program in Intel Quartus -
From: Julio Di Egidio <julio@diegidio.name>
Date: Tue, 24 Mar 2020 10:49:35 -0700 (PDT)
On Tuesday, 24 March 2020 17:19:14 UTC+1, Bliad Bors  wrote:
> I want to use a example from the Intel FPGA Monitor Program 18.1 and use it
> in Quartus 18.1. It is the video example, which creates a blue box on the
> HDMI output and writes a littel String with white letters on top of it.
> 
> I want to use it in Intel Quartus environment , do some test-outputs on my
> screen and finally add some more Hardware to the Avalon system. Unfortunately
> it doesnt work for me as i thought xD:
> 
> short file overview:
<ftp://ftp.intel.com/Pub/fpgaup/pub/Intel_Material/18.1/Computer_Systems/DE10-Nano/DE10-Nano_Computer_NiosII.pdf>
<snip>

Have you noticed that the document says "For Quartus Prime 17.1"?  I.e. I am
guessing a version mismatch: in my (admittedly little) experience with Quartus
and related, different versions are neither backwards nor forwards compatible.

Julio

Article: 161678
Subject: Re: Use example of Intel University program in Intel Quartus -
From: Bliad Bors <nikkischulz6@gmail.com>
Date: Tue, 24 Mar 2020 12:56:24 -0700 (PDT)
On Tuesday, 24 March 2020 at 18:49:42 UTC+1, Julio Di Egidio wrote:
> On Tuesday, 24 March 2020 17:19:14 UTC+1, Bliad Bors  wrote:
> > I want to use a example from the Intel FPGA Monitor Program 18.1 and use it
> > in Quartus 18.1. It is the video example, which creates a blue box on the
> > HDMI output and writes a littel String with white letters on top of it.
> > 
> > I want to use it in Intel Quartus environment , do some test-outputs on my
> > screen and finally add some more Hardware to the Avalon system. Unfortunately
> > it doesnt work for me as i thought xD:
> > 
> > short file overview:
> <ftp://ftp.intel.com/Pub/fpgaup/pub/Intel_Material/18.1/Computer_Systems/DE10-Nano/DE10-Nano_Computer_NiosII.pdf>
> <snip>
> 
> Have you noticed that the document says "For Quartus Prime 17.1"?  I.e. I am
> guessing a version mismatch: in my (admittedly little) experience with Quartus
> and related, different versions are neither backwards nor forwards compatible.
> 
> Julio

Hi, yes, I only found this... maybe it is no big difference for this description...
even the link says 18.1:

ftp://ftp.intel.com/Pub/fpgaup/pub/Intel_Material/18.1/Computer_Systems/DE10-Nano/DE10-Nano_Computer_NiosII.pdf

:D


:D

Article: 161679
Subject: Re: Use example of Intel University program in Intel Quartus - problem with Board support package?
From: Theo <theom+news@chiark.greenend.org.uk>
Date: 25 Mar 2020 17:02:06 +0000 (GMT)
Julio Di Egidio <julio@diegidio.name> wrote:
> On Tuesday, 24 March 2020 17:19:14 UTC+1, Bliad Bors  wrote:
> > I want to use a example from the Intel FPGA Monitor Program 18.1 and use it
> > in Quartus 18.1. It is the video example, which creates a blue box on the
> > HDMI output and writes a littel String with white letters on top of it.
> > 
> > I want to use it in Intel Quartus environment , do some test-outputs on my
> > screen and finally add some more Hardware to the Avalon system. Unfortunately
> > it doesnt work for me as i thought xD:
> > 
> > short file overview:
> <ftp://ftp.intel.com/Pub/fpgaup/pub/Intel_Material/18.1/Computer_Systems/DE10-Nano/DE10-Nano_Computer_NiosII.pdf>
> <snip>
> 
> Have you noticed that the document says "For Quartus Prime 17.1"?  I.e. I am
> guessing a version mismatch: in my (admittedly little) experience with Quartus
> and related, different versions are neither backwards nor forwards compatible.

That's right, and it's safer to use the specific version mentioned in a
tutorial.  However, I don't think it's a big deal in this case - the IP
cores haven't changed a lot between versions (Quartus Standard feels like
it's on maintenance releases, with all the effort going into Quartus Pro).

To the OP, a blind guess: it's either

1) something wrong with the software (.c file) which is writing the wrong
data into the framebuffer - for instance it's getting the video format
wrong, or it's allocating program data like the stack in memory that happens
to be the framebuffer.  For example, if the memory is small and you put
extra code into the program, the executable would grow in size and it might
start spilling into the framebuffer.

2) something wrong with the hardware that's corrupting the pixels being read
out.  This seems unlikely given the bitfile works for one of the examples.


I'd start by investigating 1) as I think that's more likely.

Theo

Article: 161680
Subject: Re: Use example of Intel University program in Intel Quartus -
From: Bliad Bors <nikkischulz6@gmail.com>
Date: Wed, 25 Mar 2020 12:23:55 -0700 (PDT)
On Wednesday, 25 March 2020 at 18:02:13 UTC+1, Theo wrote:
> Julio Di Egidio <julio@diegidio.name> wrote:
> > On Tuesday, 24 March 2020 17:19:14 UTC+1, Bliad Bors  wrote:
> > > I want to use a example from the Intel FPGA Monitor Program 18.1 and use it
> > > in Quartus 18.1. It is the video example, which creates a blue box on the
> > > HDMI output and writes a littel String with white letters on top of it.
> > > 
> > > I want to use it in Intel Quartus environment , do some test-outputs on my
> > > screen and finally add some more Hardware to the Avalon system. Unfortunately
> > > it doesnt work for me as i thought xD:
> > > 
> > > short file overview:
> > <ftp://ftp.intel.com/Pub/fpgaup/pub/Intel_Material/18.1/Computer_Systems/DE10-Nano/DE10-Nano_Computer_NiosII.pdf>
> > <snip>
> > 
> > Have you noticed that the document says "For Quartus Prime 17.1"?  I.e. I am
> > guessing a version mismatch: in my (admittedly little) experience with Quartus
> > and related, different versions are neither backwards nor forwards compatible.
> 
> That's right, and it's safer to use the specific version mentioned in a
> tutorial.  However I don't think it makes a big deal in this case - the IP
> cores haven't changed a lot between versions (Quartus Standard feels like
> it's on maintenance releases, with all the effort going into Quartus Pro).
> 
> To the OP, a blind guess it's either:
> 
> 1) something wrong with the software (.c file) which is writing the wrong
> data into the framebuffer - for instance it's getting the video format
> wrong, or it's allocating program data like the stack in memory that happens
> to be the framebuffer.  For example, if the memory is small and you put in
> extra code into the program the executable would grow in size and it might
> start spilling into the framebuffer.
> 
> 2) something wrong with the hardware that's corrupting the pixels being read
> out.  This seems unlikely given the bitfile works for one of the examples.
> 
> 
> I'd start by investigating 1) as I think that's more likely.
> 
> Theo

Thank you both for your answers :)
------
I found by accident a page on which more current documents are published:
https://software.intel.com/en-us/fpga-academic/learn/tutorials
They are all for 18.1
:D

Yes Theo, I need to check the software part.

Check: I read a bit of ftp://ftp.intel.com/Pub/fpgaup/pub/Teaching_Materials/current/Tutorials/HAL_tutorial.pdf
because I suspect the Hardware Abstraction Layer is inconsistent.
It says: the HAL has a dir called drivers with src and inc directories.

Test 11: reinstall Intel FPGA Monitor Program - because I had a lot of projects. Wanted to see if I can find the src and inc. Couldn't find them.

Test 12: started a new project in "Intel FPGA Monitor Program": type "Program with Device Driver Support", include sample program with the project: I chose "Video". (A part? of the) project was created in C:\intelFPGA_lite\18.1\.
I saw a folder called BSP.
It included the mentioned drivers directory with src & inc.

Test 13: compared (just a little bit) the found BSP directory with the BSP created by the "Nios II Software Build Tools for Eclipse (Quartus Prime 18.1)". This tool helped me:
https://www.diffchecker.com/diff
There are little differences!

Test 14: download .sof to FPGA. Create "Nios II Software Build Tools for Eclipse (Quartus Prime 18.1)" - new project with BSP.
Result: random pixels...

Test 15: copied the content of the BSP directory into the "Nios II Software Build Tools for Eclipse (Quartus Prime 18.1)" BSP folder.
Result: it worked. Chose NIOS instance 0, byte stream device instance id 0.

Test 16: Tests 14 & 15 again, to see if it was just luck.
Result: it worked again. The HAL maybe has some problems...

Test 17: took a manipulated video.c (sample file) to test some HDMI output.
Result: worked fine, like I would do it with "Intel FPGA Monitor Program".

Hmm, now this is nice. But in some days I want to add an input device to my hardware system. So the HAL also needs to change... but I am really unsure at the moment, because it only works with the example HAL xD


Article: 161681
Subject: No more gate-level simulation. for Cyclone V !!!
From: Luis Cupido <cupido@ua.pt>
Date: Thu, 2 Apr 2020 12:50:51 +0100
Hello,

I'm dealing with some fast state machines, and gate-level timing 
simulation of some components has been very helpful.

Now, using a Cyclone V, I found that I can't do gate-level timing 
simulation: Quartus does not generate the SDF file, the *.sdo file.

Altera/Intel says this on the documentation regarding simulation:

"Gate-level timing simulation is supported only for the Arria II 
GX/GZ,Cyclone IV, MAXII, MAX V, and Stratix IV device families.
Use Timing Analyzer static timing analysis rather than gate-level timing 
simulation."

I don't even know how to interpret the last sentence, how
does the Timing analyzer give me the same info / graphical view of the
timings of the critical signals and buses that I'm trying to view ?

Any help ?
Thanks.

Luis C.

Article: 161682
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: KJ <kkjennings@sbcglobal.net>
Date: Fri, 3 Apr 2020 09:34:03 -0700 (PDT)
On Thursday, April 2, 2020 at 7:50:55 AM UTC-4, Luis Cupido wrote:
> Use Timing Analyzer static timing analysis rather than gate-level timing 
> simulation."
> 
> I don't even know how to interpret the last sentence, how
> does the Timing analyzer give me the same info / graphical view of the
> timings of the critical signals and buses that I'm trying to view ?
> 

Static timing analysis does not give you the same info as a gate level simulation.  Static timing analysis is a far better way to verify your design for correctness compared with gate level sim.

Kevin Jennings

Article: 161683
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Fri, 3 Apr 2020 10:34:02 -0700 (PDT)
On Friday, April 3, 2020 at 12:34:09 PM UTC-4, KJ wrote:
> On Thursday, April 2, 2020 at 7:50:55 AM UTC-4, Luis Cupido wrote:
> > Use Timing Analyzer static timing analysis rather than gate-level timing 
> > simulation."
> > 
> > I don't even know how to interpret the last sentence, how
> > does the Timing analyzer give me the same info / graphical view of the
> > timings of the critical signals and buses that I'm trying to view ?
> > 
> 
> Static timing analysis does not give you the same info as a gate level simulation.  Static timing analysis is a far better way to verify your design for correctness compared with gate level sim.

One big problem with static timing analysis is that it doesn't have a means of verifying the timing constraints.  You analyze your design, construct the timing constraints, but there is no way to even check for typing errors.  

So if any of the constraints are too lax, your design will pass analysis, but can fail on the bench and there is no way to find the bug.  Then a timing simulation can save the day.  

But in general, static timing analysis is a much more comprehensive way to verify timing.  It is not so easy to get information on how to fix the problem, but at least it points you to the problem. 
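As a concrete example of that failure mode, a typo in an SDC constraint can silently match nothing, and static analysis still passes (hypothetical port and clock names; in practice the only hint is a warning about an empty collection buried in the timing report):

```tcl
# Intended: constrain the external SRAM data bus inputs
set_input_delay -clock clk50 -max 4.0 [get_ports {sram_dq[*]}]

# Typo'd version: "sram_db" matches no ports, so these paths get no
# input delay at all -- and static timing analysis happily passes.
set_input_delay -clock clk50 -max 4.0 [get_ports {sram_db[*]}]
```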

-- 

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161684
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: HT-Lab <hans64@htminuslab.com>
Date: Fri, 3 Apr 2020 20:49:30 +0100
On 03/04/2020 17:34, KJ wrote:
> On Thursday, April 2, 2020 at 7:50:55 AM UTC-4, Luis Cupido wrote:
>> Use Timing Analyzer static timing analysis rather than gate-level timing
>> simulation."
>>
>> I don't even know how to interpret the last sentence, how
>> does the Timing analyzer give me the same info / graphical view of the
>> timings of the critical signals and buses that I'm trying to view ?
>>
> 
> Static timing analysis does not give you the same info as a gate level simulation.  Static timing analysis is a far better way to verify your design for correctness compared with gate level sim.
> 
> Kevin Jennings

That is not the issue; the issue is that Intel has decided to remove one 
of the tools we use to check our designs.

Obviously if static timing fails nobody will run gate-level simulation; 
however, if static timing passes and your design does not work (has happened 
to me, and I am sure to others), then gate-level simulation provides a valuable 
and easy method to see where signals are "going red".  Gate-level is 
also useful to see what happens before reset is asserted.

Given that all other vendors provide gate-level timing, I fail to see 
why Intel in their great wisdom has decided to remove this feature for 
one(?) of its families.

Hans
www.ht-lab.com

Article: 161685
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Fri, 3 Apr 2020 13:01:40 -0700 (PDT)
On Friday, April 3, 2020 at 3:49:34 PM UTC-4, HT-Lab wrote:
> On 03/04/2020 17:34, KJ wrote:
> > On Thursday, April 2, 2020 at 7:50:55 AM UTC-4, Luis Cupido wrote:
> >> Use Timing Analyzer static timing analysis rather than gate-level timing
> >> simulation."
> >>
> >> I don't even know how to interpret the last sentence, how
> >> does the Timing analyzer give me the same info / graphical view of the
> >> timings of the critical signals and buses that I'm trying to view ?
> >>
> > 
> > Static timing analysis does not give you the same info as a gate level simulation.  Static timing analysis is a far better way to verify your design for correctness compared with gate level sim.
> > 
> > Kevin Jennings
> 
> That is not the issue, the issue is that Intel has decided to remove one 
> of the tools we use to check our design.
> 
> Obviously if static timing fails nobody will run gatelevel simulation, 
> however, if static passes and your design does not work (has happen to 
> me and I am sure to others) then gatelevel simulation provide a valuable 
> and easy to method to see where signals are "going red".  Gatelevel is 
> also useful to see what happens before reset is asserted.
> 
> Given that all other vendors provide gatelevel timing I failed to see 
> why Intel in their great wisdom  has decided to remove this feature for 
> one(?) of its family.

Maybe it's just me, but I hate saying "Intel" rather than Altera.  :( 

-- 

  Rick C.

  + Get 2,000 miles of free Supercharging
  + Tesla referral code - https://ts.la/richard11209

Article: 161686
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: HT-Lab <hans64@htminuslab.com>
Date: Sat, 4 Apr 2020 09:41:39 +0100
On 03/04/2020 21:01, Rick C wrote:
> On Friday, April 3, 2020 at 3:49:34 PM UTC-4, HT-Lab wrote:
>> On 03/04/2020 17:34, KJ wrote:
>>> On Thursday, April 2, 2020 at 7:50:55 AM UTC-4, Luis Cupido wrote:
>>>> Use Timing Analyzer static timing analysis rather than gate-level timing
>>>> simulation."
>>>>
>>>> I don't even know how to interpret the last sentence, how
>>>> does the Timing analyzer give me the same info / graphical view of the
>>>> timings of the critical signals and buses that I'm trying to view ?
>>>>
>>>
>>> Static timing analysis does not give you the same info as a gate level simulation.  Static timing analysis is a far better way to verify your design for correctness compared with gate level sim.
>>>
>>> Kevin Jennings
>>
>> That is not the issue, the issue is that Intel has decided to remove one
>> of the tools we use to check our design.
>>
>> Obviously if static timing fails nobody will run gatelevel simulation,
>> however, if static passes and your design does not work (has happen to
>> me and I am sure to others) then gatelevel simulation provide a valuable
>> and easy to method to see where signals are "going red".  Gatelevel is
>> also useful to see what happens before reset is asserted.
>>
>> Given that all other vendors provide gatelevel timing I failed to see
>> why Intel in their great wisdom  has decided to remove this feature for
>> one(?) of its family.
> 
> Maybe it's just me, but I hate saying "Intel" rather than Altera.  :(
> 
I have the same; occasionally I still say Actel instead of 
Microsemi...eh...Microchip. I don't think we have to take this too 
seriously: long after Microsemi bought Actel you could still find 
"Actel" all over their documentation.

Hans
www.ht-lab.com


Article: 161687
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: pault.eg@googlemail.com
Date: Wed, 15 Apr 2020 02:58:25 -0700 (PDT)
On Friday, April 3, 2020 at 8:49:34 PM UTC+1, HT-Lab wrote:

> 
> Given that all other vendors provide gatelevel timing I failed to see 
> why Intel in their great wisdom  has decided to remove this feature for 
> one(?) of its family.
> 

I came across this a few years ago, and I seem to remember that it was all families from Cyclone V onwards, so not just one family. Don't know if this has changed since then.

Also, from reading docs at that time, the Altera docs indicated the reason for not supporting timing sim was that designs were getting so complex timing sim takes too long to be viable, and so it would be dropped. Interestingly, when I checked the Xilinx docs at that time, timing simulation was still supported, and their docs indicated that because designs were becoming more complex, timing sim was recommended!! The polar opposite of Altera.

As I said, that was a few years ago. I was working on a project where we had to do timing sim because it was part of the development process. That project switched to Xilinx devices just because of this issue.

I haven't needed to do timing sim since, so I don't know if much has changed since then.

Article: 161688
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: gtwrek@sonic.net (gtwrek)
Date: Wed, 15 Apr 2020 17:06:48 -0000 (UTC)
In article <3a4c3990-b99a-436f-b083-c37b9431651c@googlegroups.com>,
 <pault.eg@googlemail.com> wrote:
>On Friday, April 3, 2020 at 8:49:34 PM UTC+1, HT-Lab wrote:
>
>> 
>> Given that all other vendors provide gatelevel timing I failed to see 
>> why Intel in their great wisdom  has decided to remove this feature for 
>> one(?) of its family.
>> 
>
>I came across this a few years ago and I seem to remember that it was all 
>families from Cyclone V onwards so not just one family. Don't know if 
>this has changed since then. 
>
>Also from reading docs at that time Altera docs indicated the reason for
>not supporting timing sim was because designs were getting so complex 
>timing sim takes too long to be viable and so would be dropped.
>Interestingly when I checked the Xilinx docs at that time timing 
>simulation was still supported and their docs indicated because designs
>where becoming more complex that timing sim was recommended!! The polar
>opposite to Altera.
>
>As I said that was a few years ago. I was working on a project where we 
>had to do timing sim because it was part of the development process. 
>That project switched to Xilinx devices just because of this issue.
>
>I haven't needed to do timing sim since so I don't know if much has 
>changed since then. 

That's interesting, that Altera took that tack.  I wonder how much effort
it really is to maintain the SDF generation - surely it's a solved
problem?

On the other hand, I do find the need for SDF-annotated, gate-level
simulations a waste of time, with better ways of accomplishing
the same goals.  I've not run a full timing simulation in probably
over 20 years...  In fact, even the need for gate-level simulations
at all is remote - like once a year or so - and these are almost
universally to confirm/debug a vendor RTL/netlist mismatch...

So, I sort of understand Altera's thought process.  But I do know
there's certainly a vocal group of folks who insist gate-level sims
(with or without timing) are a sign-off requirement.  And Altera 
brushing off those folks doesn't seem like a wise move..

Regards,
Mark

Article: 161689
Subject: CPU Softcore Compendium
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Thu, 16 Apr 2020 09:14:16 -0700 (PDT)
Some time ago a link was posted here to a very comprehensive list of soft CPU designs which included LUT counts, clock rates, instructions per clock and a performance metric incorporating all three.  I don't recall the author's name, but it was amazingly complete.

Anyone remember that?  Still got the link?

-- 

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161690
Subject: Re: No more gate-level simulation. for Cyclone V !!!
From: KJ <kkjennings@sbcglobal.net>
Date: Thu, 16 Apr 2020 11:12:11 -0700 (PDT)
On Wednesday, April 15, 2020 at 1:06:52 PM UTC-4, gtwrek wrote:
> In article <3a4c3990-b99a-436f-b083-c37b9431651c@googlegroups.com>,
>  <pault.eg@googlemail.com> wrote:
> >On Friday, April 3, 2020 at 8:49:34 PM UTC+1, HT-Lab wrote:
> 
> On the other hand, I do find the need for SDF annotating, gate-level
> simulations as a waste of time, with better ways of accomplishing
> the same goals.  I've not run a full timing simulation in probably
> over 20 years...  In fact, even the need for gate-level simulations
> at all, is remote - like once a year or so - and these are almost
> universally to confirm/debug a vendor RTL/netlist mismatch...
> 
Agreed.  In fact, the last couple of times I remember running a gate-level sim was to demonstrate an incorrect synthesis implementation...to Altera.  But to do that I only had to run the sim for a couple of clock cycles or so.  In at least one of the cases, I modified the design to bring an internal signal out to a pin so that it could be directly observed to be incorrect.  The testbench was an instantiation of the source design in parallel with the gate-level sim.

Altera is correct from the viewpoint that running a gate-level sim as a substitute for timing analysis is a waste of time and resources.  However, from the viewpoint of having a way to validate correct synthesis (whether this is done routinely as part of a company's design process or is an exception case) the gate-level sim has its uses.

Kevin Jennings

Article: 161691
Subject: Re: CPU Softcore Compendium
From: jim.brakefield@ieee.org
Date: Thu, 16 Apr 2020 12:30:35 -0700 (PDT)
On Thursday, April 16, 2020 at 11:14:19 AM UTC-5, Rick C wrote:
> Some time ago a link was posted here to a very comprehensive list of soft CPU designs which included LUT counts, clock rates, instructions per clock and a performance metric incorporating all three.  I don't recall the author's name, but it was amazingly complete.
> 
> Anyone remember that?  Still got the link?
> 
> -- 
> 
>   Rick C.
> 
>   - Get 1,000 miles of free Supercharging
>   - Tesla referral code - https://ts.la/richard11209

https://opencores.org/projects/up_core_list/summary

Several legacy processors are listed:
https://opencores.org/projects/up_core_list/downloads
uP_core_list_by_style-clone190221.pdf

Also look into MISTer, as it supports several legacy systems.
None are competitive speed-wise with high-performance uPs.

With LUTs costing less than $0.001 each, some soft-core uPs are inexpensive - free if you have unused LUTs and block RAMs.
For debug, changing block RAM contents is much faster than rerunning the FPGA design.

Article: 161692
Subject: Re: Custom CPU Designs
From: Theo <theom+news@chiark.greenend.org.uk>
Date: 17 Apr 2020 17:18:29 +0100 (BST)
Grant Edwards <invalid@invalid.invalid> wrote:
> Once I got a UART working so I could print messages, I just gave up on
> the JTAG BS.  Another interesting quirk was that the Altera USB JTAG
> interface only worked right with a few specific models of powered USB
> hubs.

I've spent months working around such problems :(
We have an application that pushes gigabytes through JTAG UARTs and have
learnt all about it...

There's a pile of specific issues:

- the USB 1.1 JTAG is an FT245 chip which basically bitbangs JTAG; it sends
a byte containing 4 bits for the 4 JTAG wires.  The software is literally
saying "clock high, clock low, clock high, clock low" etc.  Timing of that
is not reliable.  Newer development boards have a USB 2.0 programmer
where things are a bit better here, but it's still bitbanging.

- being USB 1.1, if you have a cheap USB 2.0 hub it may only have a single
transaction translator (single-TT), which means all USB 1.1 peripherals share
12Mbps of bandwidth.  In our case we have 16 FPGAs all trying to chat using
that shared 12Mbps bandwidth.  Starvation occurs and nobody makes any progress.
A better hub with multi-TT support will allow multiple 12Mbps streams to share
the 480Mbps USB 2.0 bandwidth.  Unfortunately when you buy a hub this is never
advertised or explained. 

- The software daemon that generates the bitbanging data is called jtagd and
it's single threaded.  It can max out a CPU core bitbanging, and that can
lead to unreliability.  I had an Atom where it was unusable.  I now install
i7s in servers with FPGAs, purely to push bits down the JTAG wire.

- To parallelise downloads to multiple FPGAs, I've written some horrible
containerisation scripts that lie to each jtagd that there's only one FPGA in
the system.  Then I can launch 16 jtagds and use all 16 cores in my system
to push traffic through the JTAG UARTs.

- Did I mention that programming an FPGA takes about 700MB of RAM?  So I need to
fit at least 8GB of RAM to avoid memory starvation when doing parallel
programming (if the system swaps, the bitbanging stalls and the FPGA
programming fails).

- there are some problems with jtagd and libudev.so.0 - if you don't have it
things seem to work but get unreliable.  I just symlink libudev.so.1 on
Ubuntu and it seems to fix it.

- the register-level interface of the JTAG UART isn't able to read the state
of the input FIFO without also dequeuing the data on it.  Writing
reliable device drivers is almost impossible.  I have a version that wraps
the UART in a 16550 register interface to avoid this problem.

- if the FPGA is failing timing, the producer/consumer of the UART can break
in interesting ways, which look a lot like there's some problem with the USB
hub or similar.


It's a very precarious pile of hardware and software that falls over in
numerous ways if pushed at all hard :(

Theo
[adding comp.arch.fpga since this is relevant to those folks]
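As an illustration of Theo's first point, the bitbang encoding works roughly like this (the bit positions and helper below are invented for illustration; the real FTDI pin mapping differs per board):

```python
# Rough sketch of why FT245-style bitbanged JTAG is slow: each USB byte
# carries one snapshot of the four JTAG pins, so shifting a single data
# bit costs two bytes (TCK low, then TCK high).  The bit layout here is
# assumed, not the real FTDI mapping.

TCK, TMS, TDI, TDO = 1 << 0, 1 << 1, 1 << 2, 1 << 3  # assumed bit layout

def shift_bits(tdi_bits, tms=0):
    """Return the bitbang byte stream that clocks out the given TDI bits."""
    stream = []
    for bit in tdi_bits:
        pins = (TDI if bit else 0) | (TMS if tms else 0)
        stream.append(pins)        # TCK low: set up TDI/TMS
        stream.append(pins | TCK)  # TCK high: target samples the bit
    return bytes(stream)

# Two bytes per bit means a 10 Mbit configuration bitstream becomes
# roughly 20 MB of USB traffic, painful through a shared 12 Mbps
# USB 1.1 link, and every one of those bytes is generated by jtagd
# on the host CPU.
print(len(shift_bits([1, 0, 1, 1])))  # -> 8
```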

Article: 161693
Subject: Re: Custom CPU Designs
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Fri, 17 Apr 2020 10:35:01 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Friday, April 17, 2020 at 12:18:35 PM UTC-4, Theo wrote:
> Grant Edwards <invalid@invalid.invalid> wrote:
> > Once I got a UART working so I could print messages, I just gave up on
> > the JTAG BS.  Another interesting quirk was that the Altera USB JTAG
> > interface only worked right with a few specific models of powered USB
> > hubs.
> 
> I've spent months working around such problems :(
> We have an application that pushes gigabytes through JTAG UARTs and have
> learnt all about it...
> 
> There's a pile of specific issues:
> 
> - the USB 1.1 JTAG is an FT245 chip which basically bitbangs JTAG; it sends
> a byte containing 4 bits for the 4 JTAG wires.  The software is literally
> saying "clock high, clock low, clock high, clock low" etc.  Timing of that
> is not reliable.  Newer development boards have a USB 2.0 programmer
> where things are a bit better here, but it's still bitbanging.
> 
> - being USB 1.1, if you have a cheap USB 2.0 hub it may only have a single
> transaction translator (single-TT), which means all USB 1.1 peripherals share
> 12Mbps of bandwidth.  In our case we have 16 FPGAs all trying to chat using
> that shared 12Mbps bandwidth.  Starvation occurs and nobody makes any progress.
> A better hub with multi-TT support will allow multiple 12Mbps streams to share
> the 480Mbps USB 2.0 bandwidth.  Unfortunately when you buy a hub this is never
> advertised or explained. 
> 
> - The software daemon that generates the bitbanging data is called jtagd and
> it's single threaded.  It can max out a CPU core bitbanging, and that can
> lead to unreliability.  I had an Atom where it was unusable.  I now install
> i7s in servers with FPGAs, purely to push bits down the JTAG wire.
> 
> - To parallelise downloads to multiple FPGAs, I've written some horrible
> containerisation scripts that lie to each jtagd that there's only one FPGA in
> the system.  Then I can launch 16 jtagds and use all 16 cores in my system
> to push traffic through the JTAG UARTs.
> 
> - Did I mention that programming an FPGA takes about 700MB of RAM?  So I need to
> fit at least 8GB of RAM to avoid memory starvation when doing parallel
> programming (if the system swaps, the bitbanging stalls and the FPGA
> programming fails).
> 
> - there are some problems with jtagd and libudev.so.0 - if you don't have it
> things seem to work but get unreliable.  I just symlink libudev.so.1 on
> Ubuntu and it seems to fix it.
> 
> - the register-level interface of the JTAG UART isn't able to read the state
> of the input FIFO without also dequeuing the data on it.  Writing
> reliable device drivers is almost impossible.  I have a version that wraps
> the UART in a 16550 register interface to avoid this problem.
> 
> - if the FPGA is failing timing, the producer/consumer of the UART can break
> in interesting ways, which look a lot like there's some problem with the USB
> hub or similar.
> 
> 
> It's a very precarious pile of hardware and software that falls over in
> numerous ways if pushed at all hard :(
> 
> Theo
> [adding comp.arch.fpga since this is relevant to those folks]

I guess once your design becomes complex enough it isn't so practical to debug it in the HDL simulator.  Eh? 

-- 

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161694
Subject: Re: Custom CPU Designs
From: Theo <theom+news@chiark.greenend.org.uk>
Date: 17 Apr 2020 21:19:44 +0100 (BST)
Links: << >>  << T >>  << A >>
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> I guess once your design becomes complex enough it isn't so practical to debug it in the HDL simulator.  Eh? 

We have boxes of 16 and a rack of 80 FPGAs, and this is used for data
onload/offload not debugging.  So the simulator won't do ;-P

Theo

Article: 161695
Subject: Re: Custom CPU Designs
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Fri, 17 Apr 2020 14:00:11 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Friday, April 17, 2020 at 4:19:49 PM UTC-4, Theo wrote:
> Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> > I guess once your design becomes complex enough it isn't so practical to debug it in the HDL simulator.  Eh? 
> 
> We have boxes of 16 and a rack of 80 FPGAs, and this is used for data
> onload/offload not debugging.  So the simulator won't do ;-P
> 
> Theo

Have you thought of putting it all into one really big FPGA?  8-o

-- 

  Rick C.

  + Get 1,000 miles of free Supercharging
  + Tesla referral code - https://ts.la/richard11209

Article: 161696
Subject: Passing digitized data to design
From: Mohammed Billoo <mohammed.billoo@gmail.com>
Date: Tue, 5 May 2020 20:36:19 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hello,

Is there a resource that can help me understand how to pass digitized data (from a waveform) to a design that I have for verification? I'm getting into FPGA development and have created a simple filter. I wanted to test it out on audio data that I can generate and see that the filter actually works, but I haven't found a way to actually "pass" data to a design.

Thanks
Mohammed

Article: 161697
Subject: Re: Passing digitized data to design
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Wed, 6 May 2020 00:07:28 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, May 5, 2020 at 11:36:23 PM UTC-4, Mohammed Billoo wrote:
> Hello,
> 
> Is there a resource that can help me understand how to pass digitized data (from a waveform) to a design that I have for verification? I'm getting into FPGA development and have created a simple filter. I wanted to test it out on audio data that I can generate and see that the filter actually works, but I haven't found a way to actually "pass" data to a design.
> 
> Thanks
> Mohammed

Do you mean a "design" in a simulation or in an FPGA?  In an FPGA I would expect your system to already be capable of sending data to it.  If not, how do you plan to use the design?

In a simulation you need to have the data in a file which can be read out by a test bench and provided to the simulated FPGA by simulating the interfaces to the FPGA.

I typically spend as much effort on the test benches for my designs as I do the designs themselves.
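As a sketch of that file-based approach (the filename, sample rate, and 16-bit format here are arbitrary choices, so match them to whatever your testbench's file reader expects), generating an audio test vector can be as simple as:

```python
# Generate a sine-wave stimulus file for an HDL testbench to read.
# Filename, sample rate, tone frequency, and signed 16-bit decimal
# format are all arbitrary choices for this sketch.
import math

FS = 48000      # sample rate, Hz
FREQ = 1000.0   # test tone, Hz
N = 480         # 10 ms worth of samples

with open("stimulus.txt", "w") as f:
    for n in range(N):
        sample = int(32767 * math.sin(2 * math.pi * FREQ * n / FS))
        f.write(f"{sample}\n")  # one signed decimal sample per line
```

The test bench then reads one sample per clock (e.g. with VHDL's std.textio or Verilog's $fscanf), drives it into the filter's input port, and writes the filter output to a second file for comparison against a reference computed offline.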

-- 

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161698
Subject: Re: Passing digitized data to design
From: Mohammed Billoo <mohammed.billoo@gmail.com>
Date: Wed, 6 May 2020 06:54:42 -0700 (PDT)
Links: << >>  << T >>  << A >>
Sorry, yes I meant in simulation. I imagine there are many good resources online that show how to set up a testbench for this purpose in Vivado.

Thanks

On Wednesday, May 6, 2020 at 3:07:32 AM UTC-4, Rick C wrote:
> On Tuesday, May 5, 2020 at 11:36:23 PM UTC-4, Mohammed Billoo wrote:
> > Hello,
> > 
> > Is there a resource that can help me understand how to pass digitized data (from a waveform) to a design that I have for verification? I'm getting into FPGA development and have created a simple filter. I wanted to test it out on audio data that I can generate and see that the filter actually works, but I haven't found a way to actually "pass" data to a design.
> > 
> > Thanks
> > Mohammed
> 
> Do you mean a "design" in a simulation or in an FPGA?  In an FPGA I would expect your system to already be capable of sending data to it.  If not, how do you plan to use the design?
> 
> In a simulation you need to have the data in a file which can be read out by a test bench and provided to the simulated FPGA by simulating the interfaces to the FPGA.
> 
> I typically spend as much effort on the test benches for my designs as I do the designs themselves.
> 
> -- 
> 
>   Rick C.
> 
>   - Get 1,000 miles of free Supercharging
>   - Tesla referral code - https://ts.la/richard11209


Article: 161699
Subject: Re: Passing digitized data to design
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Wed, 6 May 2020 08:31:12 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, May 6, 2020 at 9:54:45 AM UTC-4, Mohammed Billoo wrote:
> Sorry, yes I meant in simulation. I imagine there are many good resources online that show how to set up a testbench for this purpose in Vivado.

I assume Vivado is a simulation tool?  That is agnostic to the issue.  You simply need to learn how to use the HDL you are using.  Once you know that you can write the test bench to operate the other side of the interface from the FPGA.  

What sort of interfaces do you have?  

-- 

  Rick C.

  + Get 1,000 miles of free Supercharging
  + Tesla referral code - https://ts.la/richard11209


