Messages from 160225

Article: 160225
Subject: Re: sram
From: rickman <gnuarm@gmail.com>
Date: Wed, 9 Aug 2017 22:33:40 -0400
Links: << >>  << T >>  << A >>
brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
> KJ wrote:
>>
>> It's even easier than that to synchronously control a standard async SRAM.
>> Simply connect WE to the clock and hold OE active all the time except
>> for cycles where you want to write something new into the SRAM.
>>
> As has been explained to you in detail by several other posters, your  method is not 'easier' with modern FPGA's and SRAMs.
>
> The simplest way to get a high speed clock {gated or not} off the chip, coincident with other registered I/O signals, is to use the dual-edge IOB flip-flops as I suggested.
>
> The DDR technique I mentioned would run synchronous single-cycle read or write cycles at 50 MHz on a Spartan-3 Starter kit with an (IIRC) 10 ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE pulse width requirements.
>
>  Another advantage of the 'forwarding' method is that one can use the internal FPGA clock resources for clock multiply/divides etc. without needing to also manage the board-level low-skew clock distribution needed by your method.
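The 66 MHz duty-cycle-skew figure quoted above can be sanity-checked with simple arithmetic. A sketch in Python; the 8 ns minimum WE pulse width is an assumed figure typical of a 10 ns asynchronous SRAM, not a number taken from this thread — check the actual part's datasheet:

```python
# Check whether the WE-low window meets an assumed minimum write pulse
# width (t_WP) for a 10 ns asynchronous SRAM at two clock rates.
T_WP_MIN_NS = 8.0  # assumed t_WP for a typical 10 ns part (hypothetical)

def we_low_ns(f_hz, duty_low):
    """WE-low time when WE is held low for duty_low of each clock period."""
    return 1e9 / f_hz * duty_low

# At 50 MHz a plain 50/50 clock is comfortable:
print(round(we_low_ns(50e6, 0.5), 2))   # 10.0 ns
# At 66 MHz a 50/50 clock falls short, but a 60/40 duty-cycle-skewed
# clock recovers the margin:
print(round(we_low_ns(66e6, 0.5), 2))   # 7.58 ns < 8 ns
print(round(we_low_ns(66e6, 0.6), 2))   # 9.09 ns > 8 ns
```

This is consistent with the quoted claim that 66 MHz operation needed a duty-cycle-skewed clock to meet the WE pulse width requirement.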

I can't say I follow what you are proposing.  How do you get the clock out 
of the FPGA with a defined time relationship to the signals clocked through 
the IOB?  Is this done with feedback from the output clock using the 
internal clocking circuits?

-- 

Rick C

Article: 160226
Subject: Re: sram
From: Allan Herriman <allanherriman@hotmail.com>
Date: 10 Aug 2017 06:02:52 GMT
Links: << >>  << T >>  << A >>
On Wed, 09 Aug 2017 22:33:40 -0400, rickman wrote:

> brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
>> KJ wrote:
>>>
>>> It's even easier than that to synchronously control a standard async
>>> SRAM.
>>> Simply connect WE to the clock and hold OE active all the time except
>>> for cycles where you want to write something new into the SRAM.
>>>
>> As has been explained to you in detail by several other posters, your 
>> method is not 'easier' with modern FPGA's and SRAMs.
>>
>> The simplest way to get a high speed clock {gated or not} off the chip,
>> coincident with other registered I/O signals, is to use the dual-edge
>> IOB flip-flops as I suggested.
>>
>> The DDR technique I mentioned would run synchronous single-cycle read
>> or write cycles at 50 MHz on a Spartan-3 Starter kit with an (IIRC) 10
>> ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE pulse
>> width requirements.
>>
>>  Another advantage of the 'forwarding' method is that one can use the
>>  internal FPGA clock resources for clock multiply/divides etc. without
>>  needing to also manage the board-level low-skew clock distribution
>>  needed by your method.
> 
> I can't say I follow what you are proposing.  How do you get the clock
> out of the FPGA with a defined time relationship to the signals clocked
> through the IOB?  Is this done with feedback from the output clock using
> the internal clocking circuits?


About a decade back, mainstream FPGAs gained greatly expanded IOB 
clocking abilities to support DDR RAM (and other interfaces such as 
RGMII).
In particular, one can forward a clock out of an FPGA pin phase aligned 
with data on other pins.  You can also use one of the internal PLLs to 
generate phase shifted clocks, and thus have a phase shift on the pins 
between two data signals or between the clock and the data signals.

This can be done without needing feedback from the pins.


You should try reading a datasheet occasionally - they can be very 
informative.
Just in case someone has blocked Google where you are: here's an example:
https://www.xilinx.com/support/documentation/user_guides/ug571-ultrascale-selectio.pdf

Allan

Article: 160227
Subject: Re: sram
From: brimdavis@gmail.com
Date: Thu, 10 Aug 2017 16:46:13 -0700 (PDT)
Links: << >>  << T >>  << A >>
rickman wrote:
>
> I can't say I follow what you are proposing.  How do you get
> the clock out of the FPGA with a defined time relationship 
> to the signals clocked through the IOB?
>

The links I gave in my original post explain the technique:
>>
>> I posted some notes on this technique (for a Spartan-3) to the fpga-cpu group many years ago:
>>   https://groups.yahoo.com/neo/groups/fpga-cpu/conversations/messages/2076
>>   https://groups.yahoo.com/neo/groups/fpga-cpu/conversations/messages/2177
>>

Allan Herriman wrote:
> 
> About a decade back, mainstream FPGAs gained greatly expanded IOB 
> clocking abilities to support DDR RAM (and other interfaces such as 
> RGMII).
>

Nearly twenty years now!

Xilinx parts had ODDR equivalents in Virtex-E using hard macros; then the actual ODDR primitive stuff appeared in Virtex-2.

-Brian

Article: 160228
Subject: Re: sram
From: rickman <gnuarm@gmail.com>
Date: Thu, 10 Aug 2017 22:39:39 -0400
Links: << >>  << T >>  << A >>
Allan Herriman wrote on 8/10/2017 2:02 AM:
> On Wed, 09 Aug 2017 22:33:40 -0400, rickman wrote:
>
>> brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
>>> KJ wrote:
>>>>
>>>> It's even easier than that to synchronously control a standard async
>>>> SRAM.
>>>> Simply connect WE to the clock and hold OE active all the time except
>>>> for cycles where you want to write something new into the SRAM.
>>>>
>>> As has been explained to you in detail by several other posters, your
>>> method is not 'easier' with modern FPGA's and SRAMs.
>>>
>>> The simplest way to get a high speed clock {gated or not} off the chip,
>>> coincident with other registered I/O signals, is to use the dual-edge
>>> IOB flip-flops as I suggested.
>>>
>>> The DDR technique I mentioned would run synchronous single-cycle read
>>> or write cycles at 50 MHz on a Spartan-3 Starter kit with an (IIRC) 10
>>> ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE pulse
>>> width requirements.
>>>
>>>  Another advantage of the 'forwarding' method is that one can use the
>>>  internal FPGA clock resources for clock multiply/divides etc. without
>>>  needing to also manage the board-level low-skew clock distribution
>>>  needed by your method.
>>
>> I can't say I follow what you are proposing.  How do you get the clock
>> out of the FPGA with a defined time relationship to the signals clocked
>> through the IOB?  Is this done with feedback from the output clock using
>> the internal clocking circuits?
>
>
> About a decade back, mainstream FPGAs gained greatly expanded IOB
> clocking abilities to support DDR RAM (and other interfaces such as
> RGMII).
> In particular, one can forward a clock out of an FPGA pin phase aligned
> with data on other pins.  You can also use one of the internal PLLs to
> generate phase shifted clocks, and thus have a phase shift on the pins
> between two data signals or between the clock and the data signals.
>
> This can be done without needing feedback from the pins.
>
>
> You should try reading a datasheet occasionally - they can be very
> informative.
> Just in case someone has blocked Google where you are: here's an example:
> https://www.xilinx.com/support/documentation/user_guides/ug571-ultrascale-selectio.pdf

Thank you for the link to the 356 page document.  No, I have not researched 
how every brand of FPGA implements DDR interfaces mostly because I have not 
designed a DDR memory interface in an FPGA.  I did look at the document and 
didn't find info on how the timing delays through the IOB might be 
synchronized with the output clock.

So how exactly does the tight alignment of a clock exiting a Xilinx FPGA 
maintain alignment with data exiting the FPGA over time and differential 
temperature?  What will the timing relationship be and how tightly can it be 
maintained?

Just waving your hands and saying things can be aligned doesn't explain how 
it works.  This is a discussion.  If you aren't interested in discussing, 
then please don't bother to reply.

-- 

Rick C

Article: 160229
Subject: Re: sram
From: rickman <gnuarm@gmail.com>
Date: Thu, 10 Aug 2017 22:58:15 -0400
Links: << >>  << T >>  << A >>
brimdavis@gmail.com wrote on 8/10/2017 7:46 PM:
> rickman wrote:
>>
>> I can't say I follow what you are proposing.  How do you get
>> the clock out of the FPGA with a defined time relationship
>> to the signals clocked through the IOB?
>>
>
> The links I gave in my original post explain the technique:
>>>
>>> I posted some notes on this technique (for a Spartan-3) to the fpga-cpu group many years ago:
>>>   https://groups.yahoo.com/neo/groups/fpga-cpu/conversations/messages/2076
>>>   https://groups.yahoo.com/neo/groups/fpga-cpu/conversations/messages/2177
>>>

I haven't used a Xilinx part in something like 15 years, so I don't recall 
all the details.  I don't follow how you achieve the timing margin needed 
between the address, control and data signals, which pass through the IOB, 
and the WE pulse, which is generated in the IOB DDR registers.  Even with a 
hold time requirement of 0 ns, something has to be done to prevent a race 
condition.  Your posts seem to say you used different drive strengths so 
that trace capacitance creates different delays in signal timing.  If you 
can't use a data sheet to produce a timing analysis, it would seem to be a 
fairly sketchy method that you can't count on to work under all conditions. 
I suppose you could qualify the circuit over temperature and voltage and 
then make some assumptions about process variability, but as I say, sketchy.


-- 

Rick C

Article: 160230
Subject: Re: sram
From: Richard Damon <Richard@Damon-Family.org>
Date: Fri, 11 Aug 2017 00:09:57 -0400
Links: << >>  << T >>  << A >>
On 8/10/17 10:39 PM, rickman wrote:
> Allan Herriman wrote on 8/10/2017 2:02 AM:
>> On Wed, 09 Aug 2017 22:33:40 -0400, rickman wrote:
>>
>>> brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
>>>> KJ wrote:
>>>>>
>>>>> It's even easier than that to synchronously control a standard async
>>>>> SRAM.
>>>>> Simply connect WE to the clock and hold OE active all the time except
>>>>> for cycles where you want to write something new into the SRAM.
>>>>>
>>>> As has been explained to you in detail by several other posters, your
>>>> method is not 'easier' with modern FPGA's and SRAMs.
>>>>
>>>> The simplest way to get a high speed clock {gated or not} off the chip,
>>>> coincident with other registered I/O signals, is to use the dual-edge
>>>> IOB flip-flops as I suggested.
>>>>
>>>> The DDR technique I mentioned would run synchronous single-cycle read
>>>> or write cycles at 50 MHz on a Spartan-3 Starter kit with an (IIRC) 10
>>>> ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE pulse
>>>> width requirements.
>>>>
>>>>  Another advantage of the 'forwarding' method is that one can use the
>>>>  internal FPGA clock resources for clock multiply/divides etc. without
>>>>  needing to also manage the board-level low-skew clock distribution
>>>>  needed by your method.
>>>
>>> I can't say I follow what you are proposing.  How do you get the clock
>>> out of the FPGA with a defined time relationship to the signals clocked
>>> through the IOB?  Is this done with feedback from the output clock using
>>> the internal clocking circuits?
>>
>>
>> About a decade back, mainstream FPGAs gained greatly expanded IOB
>> clocking abilities to support DDR RAM (and other interfaces such as
>> RGMII).
>> In particular, one can forward a clock out of an FPGA pin phase aligned
>> with data on other pins.  You can also use one of the internal PLLs to
>> generate phase shifted clocks, and thus have a phase shift on the pins
>> between two data signals or between the clock and the data signals.
>>
>> This can be done without needing feedback from the pins.
>>
>>
>> You should try reading a datasheet occasionally - they can be very
>> informative.
>> Just in case someone has blocked Google where you are: here's an example:
>> https://www.xilinx.com/support/documentation/user_guides/ug571-ultrascale-selectio.pdf
> 
> Thank you for the link to the 356 page document.  No, I have not 
> researched how every brand of FPGA implements DDR interfaces mostly 
> because I have not designed a DDR memory interface in an FPGA.  I did 
> look at the document and didn't find info on how the timing delays 
> through the IOB might be synchronized with the output clock.
> 
> So how exactly does the tight alignment of a clock exiting a Xilinx FPGA 
> maintain alignment with data exiting the FPGA over time and differential 
> temperature?  What will the timing relationship be and how tightly can 
> it be maintained?
> 
> Just waving your hands and saying things can be aligned doesn't explain 
> how it works.  This is a discussion.  If you aren't interested in 
> discussing, then please don't bother to reply.
> 

Thinking about it, YES, FPGAs normally have a few pins that can be 
configured as dedicated clock drivers, and it will generally be 
guaranteed that if those pins are driving out a global clock, then any 
other pin with output clocked by that clock will change so as to have a 
known hold time (over specified operating conditions). This is the way 
to run a typical synchronous interface.

Since this method requires the WE signal to be the clock, you need to 
find a part that has either a write mask signal, or perhaps is 
multi-ported so this port could be dedicated to writes and another port 
could be used to read what is needed (the original part for this thread 
wouldn't be usable with this method).

Article: 160231
Subject: Re: sram
From: rickman <gnuarm@gmail.com>
Date: Fri, 11 Aug 2017 03:04:47 -0400
Links: << >>  << T >>  << A >>
Richard Damon wrote on 8/11/2017 12:09 AM:
> On 8/10/17 10:39 PM, rickman wrote:
>> Allan Herriman wrote on 8/10/2017 2:02 AM:
>>> On Wed, 09 Aug 2017 22:33:40 -0400, rickman wrote:
>>>
>>>> brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
>>>>> KJ wrote:
>>>>>>
>>>>>> It's even easier than that to synchronously control a standard async
>>>>>> SRAM.
>>>>>> Simply connect WE to the clock and hold OE active all the time except
>>>>>> for cycles where you want to write something new into the SRAM.
>>>>>>
>>>>> As has been explained to you in detail by several other posters, your
>>>>> method is not 'easier' with modern FPGA's and SRAMs.
>>>>>
>>>>> The simplest way to get a high speed clock {gated or not} off the chip,
>>>>> coincident with other registered I/O signals, is to use the dual-edge
>>>>> IOB flip-flops as I suggested.
>>>>>
>>>>> The DDR technique I mentioned would run synchronous single-cycle read
>>>>> or write cycles at 50 MHz on a Spartan-3 Starter kit with an (IIRC) 10
>>>>> ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE pulse
>>>>> width requirements.
>>>>>
>>>>>  Another advantage of the 'forwarding' method is that one can use the
>>>>>  internal FPGA clock resources for clock multiply/divides etc. without
>>>>>  needing to also manage the board-level low-skew clock distribution
>>>>>  needed by your method.
>>>>
>>>> I can't say I follow what you are proposing.  How do you get the clock
>>>> out of the FPGA with a defined time relationship to the signals clocked
>>>> through the IOB?  Is this done with feedback from the output clock using
>>>> the internal clocking circuits?
>>>
>>>
>>> About a decade back, mainstream FPGAs gained greatly expanded IOB
>>> clocking abilities to support DDR RAM (and other interfaces such as
>>> RGMII).
>>> In particular, one can forward a clock out of an FPGA pin phase aligned
>>> with data on other pins.  You can also use one of the internal PLLs to
>>> generate phase shifted clocks, and thus have a phase shift on the pins
>>> between two data signals or between the clock and the data signals.
>>>
>>> This can be done without needing feedback from the pins.
>>>
>>>
>>> You should try reading a datasheet occasionally - they can be very
>>> informative.
>>> Just in case someone has blocked Google where you are: here's an example:
>>> https://www.xilinx.com/support/documentation/user_guides/ug571-ultrascale-selectio.pdf
>>
>> Thank you for the link to the 356 page document.  No, I have not
>> researched how every brand of FPGA implements DDR interfaces mostly
>> because I have not designed a DDR memory interface in an FPGA.  I did look
>> at the document and didn't find info on how the timing delays through the
>> IOB might be synchronized with the output clock.
>>
>> So how exactly does the tight alignment of a clock exiting a Xilinx FPGA
>> maintain alignment with data exiting the FPGA over time and differential
>> temperature?  What will the timing relationship be and how tightly can it
>> be maintained?
>>
>> Just waving your hands and saying things can be aligned doesn't explain
>> how it works.  This is a discussion.  If you aren't interested in
>> discussing, then please don't bother to reply.
>>
>
> Thinking about it, YES, FPGAs normally have a few pins that can be
> configured as dedicated clock drivers, and it will generally be guaranteed
> that if those pins are driving out a global clock, then any other pin with
> output clocked by that clock will change so as to have a known hold time
> (over specified operating conditions). This being the way to run a typical
> synchronous interface.
>
> Since this method requires the WE signal to be the clock, you need to find a
> part that has either a write mask signal, or perhaps is multi-ported so this
> port could be dedicated to writes and another port could be used to read
> what is needed (the original part for this thread wouldn't be usable with
> this method).

I'm not sure you read the full thread.  The method for generating the WE 
signal is to use the two DDR FFs to drive a logic one during one half of the 
clock and the write signal during the other half.  I misspoke above when I 
called it a "clock".  The *other* method involved using the actual clock as 
WE and gating it with the OE signal, which won't work on all async RAMs.

So with the DDR method *all* of the signals will exit the chip with a 
nominal zero timing delay relative to each other.  This is literally the 
edge of the async RAM spec.  So you need to have some delays on the other 
signals relative to the WE to allow for variation in timing of individual 
outputs.  It seems the method suggested is to drive the CS and WE signals 
hard and lighten the drive on the other outputs.

This is a method that is not relying on any guaranteed spec from the FPGA 
maker.  This method uses trace capacitance to create delta t = delta v * c / 
i to speed or slow the rising edge of the various outputs.  This relies on 
overcompensating the FPGA spec by means that depend on details of the board 
layout.  It reminds me of the early days of generating timing signals for 
DRAM with logic delays.

Yeah, you might get it to work, but the layout will need to be treated with 
care and respect even more so than an impedance controlled trace.  It will 
need to be characterized over temperature and voltage and you will have to 
design in enough margin to allow for process variations.
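The delta-t relation above can be put into numbers. A minimal sketch in Python, using hypothetical drive currents and load capacitance (none of these figures come from the thread or any datasheet):

```python
def edge_delay_ns(c_load_pf, dv_volts, i_drive_ma):
    """delta_t = C * delta_V / I; pF * V / mA conveniently yields ns."""
    return c_load_pf * dv_volts / i_drive_ma

# Hypothetical numbers: 10 pF of trace load, 1.5 V swing to the input
# threshold, strong drive on WE/CS and weak drive on address/data.
strong = edge_delay_ns(10.0, 1.5, 24.0)  # WE/CS at 24 mA drive
weak   = edge_delay_ns(10.0, 1.5, 6.0)   # address/data at 6 mA drive
print(strong, weak, weak - strong)       # 0.625 2.5 1.875 (all ns)
```

With these made-up values the weakly driven outputs lag the strongly driven ones by just under 2 ns, which is the kind of deliberate skew the drive-strength trick relies on — and also why it depends on the actual board loading.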

-- 

Rick C

Article: 160232
Subject: Re: SystemVerilog and alternatives
From: Jan Coombs <jenfhaomndgfwutc@murmic.plus.com>
Date: Fri, 11 Aug 2017 12:33:47 +0100
Links: << >>  << T >>  << A >>
On Tue, 8 Aug 2017 04:00:59 -0700 (PDT)
tullio <tullio.grassi@gmail.com> wrote:

> Hello,
> 
>  i am an experienced FPGA designer, having used Verilog for
> long time. For a mixed analog-digital project involving an
> ASIC and (maybe) an FPGA, i need to get ready for extensive
> verification and test-vector generation. The mainstream tools
> seem to be SystemVerilog and UVM, which seem to have a
> difficult learning curve and also difficult maintenance. But
> somebody suggested I consider using Verilog and Python,
> which have the advantage that they complement each other very
> nicely, and Python is easy to learn.

Could that last point be simplified by building your simulation
model and tests in Python using the MyHDL library?  From this
you can then export Verilog or VHDL for synthesis.
Jan Coombs
-- 
http://myhdl.org/


Article: 160233
Subject: Re: sram
From: Allan Herriman <allanherriman@hotmail.com>
Date: 11 Aug 2017 12:41:39 GMT
Links: << >>  << T >>  << A >>
On Thu, 10 Aug 2017 16:46:13 -0700, brimdavis wrote:

> rickman wrote:
>>
>> I can't say I follow what you are proposing.  How do you get the clock
>> out of the FPGA with a defined time relationship to the signals clocked
>> through the IOB?
>>
>>
> The links I gave in my original post explain the technique:
>>>
>>> I posted some notes on this technique (for a Spartan-3) to the
>>> fpga-cpu group many years ago:
>>>   https://groups.yahoo.com/neo/groups/fpga-cpu/conversations/messages/2076
>>>   https://groups.yahoo.com/neo/groups/fpga-cpu/conversations/messages/2177
>>>
>>>
> Allan Herriman wrote:
>> 
>> About a decade back, mainstream FPGAs gained greatly expanded IOB
>> clocking abilities to support DDR RAM (and other interfaces such as
>> RGMII).
>>
>>
> Nearly twenty years now!
> 
> Xilinx parts had ODDR equivalents in Virtex-E using hard macros; then
> the actual ODDR primitive stuff appeared in Virtex-2.


Nearly twenty years!  Doesn't time fly when you're having fun.

Thinking back, the last time I connected an async SRAM to an FPGA was in 
1997, using a Xilinx 5200 series device.

The 5200 was a low cost family, a bit like the XC4000 series, but with 
even worse routing resources, and (keeping it on-topic for this thread) 
NO IOB FF.  Yes, that's right, to get repeatable IO timing, one had to LOC 
a fabric FF near the pin and do manual routing from that FF to the pin.  
(The manual routing could be saved as a string in a constraints file, 
IIRC).

Still, I managed to meet all the SRAM timing requirements, but only by 
using two clocks for each RAM read or write.  The write strobe used a 
negative edge triggered FF.


"And if you tell that to the young people today, they won't believe you"

Regards,
Allan

Article: 160234
Subject: Re: sram
From: Allan Herriman <allanherriman@hotmail.com>
Date: 11 Aug 2017 13:38:26 GMT
Links: << >>  << T >>  << A >>
On Thu, 10 Aug 2017 22:39:39 -0400, rickman wrote:

> Allan Herriman wrote on 8/10/2017 2:02 AM:
>> On Wed, 09 Aug 2017 22:33:40 -0400, rickman wrote:
>>
>>> brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
>>>> KJ wrote:
>>>>>
>>>>> It's even easier than that to synchronously control a standard async
>>>>> SRAM.
>>>>> Simply connect WE to the clock and hold OE active all the time
>>>>> except for cycles where you want to write something new into the
>>>>> SRAM.
>>>>>
>>>> As has been explained to you in detail by several other posters, your
>>>> method is not 'easier' with modern FPGA's and SRAMs.
>>>>
>>>> The simplest way to get a high speed clock {gated or not} off the
>>>> chip,
>>>> coincident with other registered I/O signals, is to use the dual-edge
>>>> IOB flip-flops as I suggested.
>>>>
>>>> The DDR technique I mentioned would run synchronous single-cycle read
>>>> or write cycles at 50 MHz on a Spartan-3 Starter kit with an (IIRC)
>>>> 10 ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE
>>>> pulse width requirements.
>>>>
>>>>  Another advantage of the 'forwarding' method is that one can use the
>>>>  internal FPGA clock resources for clock multiply/divides etc.
>>>>  without needing to also manage the board-level low-skew clock
>>>>  distribution needed by your method.
>>>
>>> I can't say I follow what you are proposing.  How do you get the clock
>>> out of the FPGA with a defined time relationship to the signals
>>> clocked through the IOB?  Is this done with feedback from the output
>>> clock using the internal clocking circuits?
>>
>>
>> About a decade back, mainstream FPGAs gained greatly expanded IOB
>> clocking abilities to support DDR RAM (and other interfaces such as
>> RGMII).
>> In particular, one can forward a clock out of an FPGA pin phase aligned
>> with data on other pins.  You can also use one of the internal PLLs to
>> generate phase shifted clocks, and thus have a phase shift on the pins
>> between two data signals or between the clock and the data signals.
>>
>> This can be done without needing feedback from the pins.
>>
>>
>> You should try reading a datasheet occasionally - they can be very
>> informative.
>> Just in case someone has blocked Google where you are: here's an
>> example:
>> https://www.xilinx.com/support/documentation/user_guides/ug571-ultrascale-selectio.pdf
> 
> Thank you for the link to the 356 page document.  No, I have not
> researched how every brand of FPGA implements DDR interfaces mostly
> because I have not designed a DDR memory interface in an FPGA.  I did
> look at the document and didn't find info on how the timing delays
> through the IOB might be synchronized with the output clock.
> 
> So how exactly does the tight alignment of a clock exiting a Xilinx FPGA
> maintain alignment with data exiting the FPGA over time and differential
> temperature?  What will the timing relationship be and how tightly can
> it be maintained?
> 
> Just waving your hands and saying things can be aligned doesn't explain
> how it works.  This is a discussion.  If you aren't interested in
> discussing, then please don't bother to reply.

As you say you've never done DDR I'll give a simple explanation here, 
using Xilinx primitives as an example.

The clock forwarding is not the same as connecting an internal clock net 
to an output pin.  Instead, the clock is output through an ODDR, in exactly 
the same way that the DDR output data is produced, except that instead of 
outputting two data phases, D1 and D2, it just outputs two constants, '1' 
and '0' ('0' and '1' if you want the opposite phase), to produce a square 
wave.
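A toy Python model (not HDL, and not the Xilinx primitive itself) of why feeding constants into the same DDR output path reproduces the clock, aligned with data driven through identical registers:

```python
def oddr(d1, d2, cycles):
    """Toy model of a DDR output register: emits d1 during the first half
    of each clock cycle and d2 during the second half."""
    out = []
    for _ in range(cycles):
        out.extend([d1, d2])
    return out

# Forwarded clock: constants 1/0 through the DDR path give a square wave...
clk_fwd = oddr(1, 0, 3)
# ...phase aligned with data leaving through an identical DDR path.
data = oddr("rising_edge_bits", "falling_edge_bits", 3)
print(clk_fwd)  # [1, 0, 1, 0, 1, 0]
```

Because the forwarded clock and the data share the same register type, clock net, and output buffer structure, their skew at the pins is limited to the small mismatches Allan describes next.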

The clock-forwarding output and the data output ODDR blocks are all 
clocked from the same clock on a low skew internal clock net.  This will 
typically have some tens of ps of skew (up to hundreds of ps, depending on 
the particular clocking resource).  There will also be skew due to the 
different trace lengths for each signal in the BGA interposer, but these 
are known and can be compensated for in the PCB design.

Perhaps you want deliberate skew between the clock and data (e.g. for 
RGMII) - there are two ways of doing that:
1.  Use an ODELAY block on (a subset of) the outputs.  ODELAY sits 
between the ODDR output and the input of the OBUF pin driver.  The ODELAY 
is calibrated by a reference clock, and thus is stable against PVT.  Its 
delay is programmable between ~0 and a few ns, with an accuracy of some 
tens of ps, and it adds some tens of ps of jitter to the signal passing 
through it.

2.  Use a PLL (or MMCM) to produce deliberately skewed system clocks 
inside the FPGA.  These will need separate clocking resources to get to 
the IO blocks (leading to some hundreds of ps of additional, unknown 
skew).

More details can be found in the user guide that I linked earlier.

Allan

Article: 160235
Subject: Re: sram
From: Brian Davis <brimdavis@gmail.com>
Date: Fri, 11 Aug 2017 18:48:39 -0700 (PDT)
Links: << >>  << T >>  << A >>
rickman wrote:

> I haven't used a Xilinx part in at something like 15 years.

Then maybe you shouldn't post comments like this:

> This is a method that is not relying on any guaranteed spec
> from the FPGA maker. This method uses trace capacitance to
> create delta t = delta v * c / i to speed or slow the rising
> edge of the various outputs.

 Xilinx characterizes and publishes I/O buffer switching parameters vs. 
IOSTANDARD/SLEW/DRIVE settings; this information is both summarized in the 
data sheet and used in generating the timing reports, providing the base 
delay of the I/O buffer independent of any external capacitive loading [1].

 The I/O drive values I used in my S3 testing provided an I/O buffer delay 
difference of about 1 ns (at the fast device corner) between WE and the 
address/data lines.

 While these I/O pins will be slowed further by any board level loading, 
for any reasonable board layout it is improbable that this loading will 
somehow reverse the WE timing relationship and violate the zero-ns hold 
requirement.

My original 2004 posts clearly specified what was (timing at FPGA pins) and 
wasn't (board level signal integrity issues) covered in my example:
>>
>> - board level timing hasn't been looked at ( note that S3
>> timing reports don't include output buffer loading )
>>

  For purposes of a demo example design, I'm perfectly happy with an 
address/data hold of 10% of the SRAM minimum cycle time, given that the 
SRAM hold specification is zero ns.

  If a design needs more precise control, many of the newer parts have 
calibrated I/O delays (already mentioned by Allan) that can be used to 
produce known time delays; in the older S3 family, the easiest way to 
provide an adjustable time delay would be to use a DCM to phase shift the 
clock to the OFDDRRSE flip-flop primitive driving WE.
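The hold-margin argument above reduces to simple budget arithmetic. A sketch using the ~1 ns buffer-delay difference from this post; the individual buffer delays are hypothetical placeholders, since only their difference matters here:

```python
# Address/data outputs are driven weakly, so they switch *after* WE does;
# that difference is the hold time the SRAM sees (it requires 0 ns).
t_we_switch_ns = 2.0    # hypothetical strong-drive WE buffer delay
t_ad_switch_ns = 3.0    # hypothetical weak-drive address/data buffer delay
t_hold_required = 0.0   # async SRAM address/data hold after the WE edge

margin = (t_ad_switch_ns - t_we_switch_ns) - t_hold_required
print(margin)  # 1.0 ns of hold margin, ~10% of a 10 ns minimum cycle
```

Any board-level loading slows the weakly driven lines further, so it tends to increase this margin rather than erode it — which is the core of the argument.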


-Brian


[1] DS099 Spartan-3 data sheet v3.1
    https://www.xilinx.com/support/documentation/data_sheets/ds099.pdf
    page 83:
    "
    " The Output timing for all standards, as published in the speed files
    " and the data sheet, is always based on a CL value of zero.
    "



Article: 160236
Subject: Re: sram
From: Richard Damon <Richard@Damon-Family.org>
Date: Sat, 12 Aug 2017 12:18:52 -0400
Links: << >>  << T >>  << A >>
On 8/11/17 3:04 AM, rickman wrote:
> Richard Damon wrote on 8/11/2017 12:09 AM:
>> On 8/10/17 10:39 PM, rickman wrote:
>>> Allan Herriman wrote on 8/10/2017 2:02 AM:
>>>> On Wed, 09 Aug 2017 22:33:40 -0400, rickman wrote:
>>>>
>>>>> brimdavis@gmail.com wrote on 8/8/2017 8:37 PM:
>>>>>> KJ wrote:
>>>>>>>
>>>>>>> It's even easier than that to synchronously control a standard async
>>>>>>> SRAM.
>>>>>>> Simply connect WE to the clock and hold OE active all the time 
>>>>>>> except
>>>>>>> for cycles where you want to write something new into the SRAM.
>>>>>>>
>>>>>> As has been explained to you in detail by several other posters, your
>>>>>> method is not 'easier' with modern FPGA's and SRAMs.
>>>>>>
>>>>>> The simplest way to get a high speed clock {gated or not} off the 
>>>>>> chip,
>>>>>> coincident with other registered I/O signals, is to use the dual-edge
>>>>>> IOB flip-flops as I suggested.
>>>>>>
>>>>>> The DDR technique I mentioned would run synchronous single-cycle read
>>>>>> or write cycles at 50 MHz on a Spartan-3 Starter kit with an 
>>>>>> (IIRC) 10
>>>>>> ns SRAM, 66 MHz if using a duty-cycle-skewed clock to meet the WE 
>>>>>> pulse
>>>>>> width requirements.
>>>>>>
>>>>>>  Another advantage of the 'forwarding' method is that one can use the
>>>>>>  internal FPGA clock resources for clock multiply/divides etc. 
>>>>>> without
>>>>>>  needing to also manage the board-level low-skew clock distribution
>>>>>>  needed by your method.
>>>>>
>>>>> I can't say I follow what you are proposing.  How do you get the clock
>>>>> out of the FPGA with a defined time relationship to the signals 
>>>>> clocked
>>>>> through the IOB?  Is this done with feedback from the output clock 
>>>>> using
>>>>> the internal clocking circuits?
>>>>
>>>>
>>>> About a decade back, mainstream FPGAs gained greatly expanded IOB
>>>> clocking abilities to support DDR RAM (and other interfaces such as
>>>> RGMII).
>>>> In particular, one can forward a clock out of an FPGA pin phase aligned
>>>> with data on other pins.  You can also use one of the internal PLLs to
>>>> generate phase shifted clocks, and thus have a phase shift on the pins
>>>> between two data signals or between the clock and the data signals.
>>>>
>>>> This can be done without needing feedback from the pins.
>>>>
>>>>
>>>> You should try reading a datasheet occasionally - they can be very
>>>> informative.
>>>> Just in case someone has blocked Google where you are: here's an 
>>>> example:
>>>> https://www.xilinx.com/support/documentation/user_guides/ug571-ultrascale-selectio.pdf
>>>
>>> Thank you for the link to the 356 page document.  No, I have not
>>> researched how every brand of FPGA implements DDR interfaces mostly
>>> because I have not designed a DDR memory interface in an FPGA.  I did 
>>> look
>>> at the document and didn't find info on how the timing delays through 
>>> the
>>> IOB might be synchronized with the output clock.
>>>
>>> So how exactly does the tight alignment of a clock exiting a Xilinx FPGA
>>> maintain alignment with data exiting the FPGA over time and differential
>>> temperature?  What will the timing relationship be and how tightly 
>>> can it
>>> be maintained?
>>>
>>> Just waving your hands and saying things can be aligned doesn't explain
>>> how it works.  This is a discussion.  If you aren't interested in
>>> discussing, then please don't bother to reply.
>>>
>>
>> Thinking about it, YES, FPGAs normally have a few pins that can be
>> configured as dedicated clock drivers, and it will generally be 
>> guaranteed
>> that if those pins are driving out a global clock, then any other pin 
>> with
>> output clocked by that clock will change so as to have a known hold time
>> (over specified operating conditions). This being the way to run a 
>> typical
>> synchronous interface.
>>
>> Since this method requires the WE signal to be the clock, you need to 
>> find a
>> part that has either a write mask signal, or perhaps is multi-ported 
>> so this
>> port could be dedicated to writes and another port could be used to read
>> what is needed (the original part for this thread wouldn't be usable with
>> this method).
> 
> I'm not sure you read the full thread.  The method for generating the WE 
> signal is to use the two DDR FFs to drive a one level during one half of 
> the clock and to drive the write signal during the other half of the 
> clock.  I misspoke above when I called it a "clock".  The *other* method 
> involved using the actual clock as WE and gating it with the OE signal 
> which won't work on all async RAMs.
> 
> So with the DDR method *all* of the signals will exit the chip with a 
> nominal zero timing delay relative to each other.  This is literally the 
> edge of the async RAM spec.  So you need to have some delays on the 
> other signals relative to the WE to allow for variation in timing of 
> individual outputs.  It seems the method suggested is to drive the CS 
> and WE signals hard and lighten the drive on the other outputs.
> 
> This is a method that is not relying on any guaranteed spec from the 
> FPGA maker.  This method uses trace capacitance to create delta t = 
> delta v * c / i to speed or slow the rising edge of the various 
> outputs.  This relies on over compensating the FPGA spec by means that 
> depend on details of the board layout.  It reminds me of the early days 
> of generating timing signals for DRAM with logic delays.
> 
> Yeah, you might get it to work, but the layout will need to be treated 
> with care and respect even more so than an impedance controlled trace.  
> It will need to be characterized over temperature and voltage and you 
> will have to design in enough margin to allow for process variations.
> 

Driving them all with DDR signals doesn't just put you at the edge of 
the spec; it puts you at the edge only under the assumption that every 
output is nominally matched with no skew. Since we KNOW that output 
matching is not perfect, and we don't have a manufacturer-guaranteed 
bias (a guarantee that if any output is faster, it will be this one), we 
are starting outside the guaranteed specs. Yes, you can pull some tricks 
to try and get back into spec and establish a bit of margin, but this 
puts the design over into the realm of 'black arts', and is best avoided 
if possible.

Article: 160237
Subject: Suggestion on methodology/ways to test my internal logic analyzer
From: promach <feiphung27@gmail.com>
Date: Mon, 14 Aug 2017 05:33:34 -0700 (PDT)
Links: << >>  << T >>  << A >>
Using http://zipcpu.com/blog/2017/06/08/simple-scope.html , I have rewritten my internal logic analyzer at https://github.com/promach/internal_logic_analyzer

I want to know if my softcore logic scope works. What inputs to that module, and what outputs from it, would prove to me that my scope module works?

Article: 160238
Subject: Re: sram
From: Brian Davis <brimdavis@gmail.com>
Date: Mon, 14 Aug 2017 15:42:58 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Saturday, August 12, 2017 at 12:18:59 PM UTC-4, Richard Damon wrote:
>
> Since we KNOW that output matching is not perfect
>

Both you and rickman seem to be missing the entire point
of my original post; i.e. you wrote earlier:
>
> 4) Discrete Pulse generation logic, have logic on 
> the board with delay lines to generate the write pulse
> so that WE will pulse low shortly after the address 
> is stable, and comes back high shortly before the
> address might change again.
>
The built-in dual-edge I/O logic on many FPGAs provides 
EXACTLY this capability, but with much better PVT tracking.

Although my ancient Spartan-3 example code didn't 
explicitly adjust the edge delay with either a DCM/PLL 
or IOB delay element (although I did mention DCM duty cycle
tweaks), this is very straightforward to do in many recent
FPGA families.

>
> Yes, you can pull some tricks to try and get 
> back into spec and establish a bit of margin
> but this puts the design over into the realms
> of 'black arts'
>
Relying on characterized minimums to meet hold times
is not a 'black art'; it is the underlying reason why 
synchronous digital logic works at all.

 I.e. connecting two sections of a 74LS74 in series
at the board level requires that Tplh/Tphl (min) of 
the first flip-flop be greater than the hold time of 
the next. (And yes, I realize that some logic technologies
require dummy routing or buffers in the datapath to avoid
hold time violations.)

 Particularly with a vendor evaluation board like I used
for that 2004 Spartan-3 Starter Kit SRAM example, it is 
far more likely that signal integrity problems with the 
WE line, rather than buffer delay behavior, cause problems.

-Brian

Article: 160239
Subject: Re: sram
From: Richard Damon <Richard@Damon-Family.org>
Date: Mon, 14 Aug 2017 22:28:01 -0400
Links: << >>  << T >>  << A >>
On 8/14/17 6:42 PM, Brian Davis wrote:
> On Saturday, August 12, 2017 at 12:18:59 PM UTC-4, Richard Damon wrote:
>>
>> Since we KNOW that output matching is not perfect
>>
> 
> Both you and rickman seem to be missing the entire point
> of my original post; i.e. you wrote earlier:
>>
>> 4) Discrete Pulse generation logic, have logic on
>> the board with delay lines to generate the write pulse
>> so that WE will pulse low shortly after the address
>> is stable, and comes back high shortly before the
>> address might change again.
>>
> The built in dual-edge I/O logic on many FPGAs provides
> EXACTLY this capability, but with much better PVT tracking.
> 
> Although my ancient Spartan-3 example code didn't
> explicitly adjust the edge delay with either a DCM/PLL
> or IOB delay element (although I did mention DCM duty cycle
> tweaks), this is very straightforward to do in many recent
> FPGA families.
> 
>>
>> Yes, you can pull some tricks to try and get
>> back into spec and establish a bit of margin
>> but this puts the design over into the realms
>> of 'black arts'
>>
> Relying on characterized minimums to meet hold times
> is not a 'black art'; it is the underlying reason why
> synchronous digital logic works at all.
> 
>   I.e. connecting two sections of a 74LS74 in series
> at the board level requires that Tplh/Tphl (min) of
> the first flip-flop be greater than the hold time of
> the next. (And yes, I realize that some logic technologies
> require dummy routing or buffers in the datapath to avoid
> hold time violations.)
> 
>   Particularly with a vendor evaluation board like I used
> for that 2004 Spartan-3 Starter Kit SRAM example, it is
> far more likely that signal integrity problems with the
> WE line, rather than buffer delay behavior, causes problems.
> 
> -Brian
> 

The 'Black Art' I was referring to was NOT datasheet min/max values but 
using strong/weak drive and bus loading to convert timing that doesn't 
meet guaranteed performance (since, given two outputs, there will be 
some skew, and absent some unusual specification that pin x will always 
be faster than pin y, we need to allow that x might be slower than y) 
into something that likely 'works'.

Article: 160240
Subject: Re: sram
From: Brian Davis <brimdavis@gmail.com>
Date: Tue, 15 Aug 2017 14:59:11 -0700 (PDT)
Links: << >>  << T >>  << A >>
Richard Damon wrote:
> 
> The 'Black Art' I was referring to was NOT datasheet min/maxs
> but using strong/weak drive and bus loading to convert timing
> that doesn't meet guaranteed performance
>
As I explained to rickman several posts ago, the Spartan-3 
I/O buffer base delay vs. IOSTANDARD/SLEW/DRIVE is fully 
characterized and published by Xilinx:

>>
>> Xilinx characterizes and publishes I/O buffer switching
>> parameters vs. IOSTANDARD/SLEW/DRIVE settings; this 
>> information is both summarized in the datasheet and used
>> in generating the timing reports, providing the base delay
>> of the I/O buffer independent of any external capacitive loading.
>>

If you don't want to rely on a guaranteed minimum delay
between I/O buffer types at the FAST device corner, that's
fine with me, but please stop with the baseless criticism.

-Brian

Article: 160241
Subject: Microsemi SmartFusion2 Field Upgrade
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Wed, 16 Aug 2017 11:16:01 -0700
Links: << >>  << T >>  << A >>
We're starting a new design, and I again find myself tempted by the 
Microsemi SmartFusion2 as combination FPGA/uC.  It's got a built-in ARM 
Cortex-M3, which is a simple dinky micro instead of some big honking A8 
application processor that you can't even get up and running without 
kilobytes of boot code.  The smallest, cheapest one is about $15 in 
small quantity with 64 kB of data memory and and 128 kB of application 
flash without having to touch the fabric resource.  So, cute chip.

One of my concerns is field upgradability.  There are a couple app notes 
on implementing "Auto-Update Programming Recovery", which seems to be 
what I'm looking for.  You put down an external flash that holds your 
"Golden Image" of what you shipped with, and then an "Upgrade Image" and 
if the upgrade gets its wires crossed then it falls back.  Or that's the 
theory, anyway.

Anyone have any experience with these devices to share for good or ill? 
Especially experience with the field upgrade mechanism.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 160242
Subject: Microsemi FPGAs
From: John Larkin <jjlarkin@highlandtechnology.com>
Date: Wed, 16 Aug 2017 21:20:31 -0700
Links: << >>  << T >>  << A >>
Has anyone used the Microsemi SOCs, the SmartFusion2 FPGAs with an ARM
Cortex M3 on chip?

How good/awful is the tool set? Any big likes or dislikes?

They look like a pretty good deal for a medium FPGA with ARM. 


-- 

John Larkin         Highland Technology, Inc

lunatic fringe electronics 


Article: 160243
Subject: Re: Microsemi SmartFusion2 Field Upgrade
From: rickman <gnuarm@gmail.com>
Date: Thu, 17 Aug 2017 02:49:59 -0400
Links: << >>  << T >>  << A >>
Rob Gaddi wrote on 8/16/2017 2:16 PM:
> We're starting a new design, and I again find myself tempted by the
> Microsemi SmartFusion2 as combination FPGA/uC.  It's got a built-in ARM
> Cortex-M3, which is a simple dinky micro instead of some big honking A8
> application processor that you can't even get up and running without
> kilobytes of boot code.  The smallest, cheapest one is about $15 in small
> quantity with 64 kB of data memory and and 128 kB of application flash
> without having to touch the fabric resource.  So, cute chip.
>
> One of my concerns is field upgradability.  There are a couple app notes on
> implementing "Auto-Update Programming Recovery", which seems to be what I'm
> looking for.  You put down an external flash that holds your "Golden Image"
> of what you shipped with, and then an "Upgrade Image" and if the upgrade
> gets its wires crossed then it falls back.  Or that's the theory, anyway.
>
> Anyone have any experience with these devices to share for good or ill?
> Especially experience with the field upgrade mechanism.

I've never looked that hard at the parts.  I like the idea of the MCU type 
processor rather than the cell phone type processor, but when I had a use 
for the part the price was a *lot* higher.

I attended a Microsemi workshop for the chip and they gave us a board (or 
maybe we paid a bit I don't recall).  But we got the version without the 
MCU.  I tried to get them to fork over one with the Smartfusion 2, but I 
guess I didn't register on their radar.

I see JL is asking about them too.  Must be a full moon.

-- 

Rick C

Article: 160244
Subject: Re: Microsemi SmartFusion2 Field Upgrade
From: HT-Lab <hans64@htminuslab.com>
Date: Thu, 17 Aug 2017 09:54:11 +0100
Links: << >>  << T >>  << A >>
On 17/08/2017 07:49, rickman wrote:
> Rob Gaddi wrote on 8/16/2017 2:16 PM:
>> We're starting a new design, and I again find myself tempted by the
>> Microsemi SmartFusion2 as combination FPGA/uC.  It's got a built-in ARM
>> Cortex-M3, which is a simple dinky micro instead of some big honking A8
>> application processor that you can't even get up and running without
>> kilobytes of boot code.  The smallest, cheapest one is about $15 in small
>> quantity with 64 kB of data memory and and 128 kB of application flash
>> without having to touch the fabric resource.  So, cute chip.
>>
>> One of my concerns is field upgradability.  There are a couple app 
>> notes on
>> implementing "Auto-Update Programming Recovery", which seems to be 
>> what I'm
>> looking for.  You put down an external flash that holds your "Golden 
>> Image"
>> of what you shipped with, and then an "Upgrade Image" and if the upgrade
>> gets its wires crossed then it falls back.  Or that's the theory, anyway.
>>
>> Anyone have any experience with these devices to share for good or ill?
>> Especially experience with the field upgrade mechanism.
> 
> I've never looked that hard at the parts.  I like the idea of the MCU 
> type processor rather than the cell phone type processor, but when I had 
> a use for the part the price was a *lot* higher.
> 
> I attended a Microsemi workshop for the chip and they gave us a board 
> (or maybe we paid a bit I don't recall).  

I also attended their SmartFusion workshop and got a really nice 
prototype board with an M2GL025 for free. They gave us the option of 
either an IGLOO2 or a SmartFusion chip. Here is the board:

https://www.microsemi.com/products/fpga-soc/design-resources/dev-kits/smartfusion2/future-creative-board

You might still be able to get it for free or for little money and it is 
ideal for hacking around with. I had no issues getting the board up and 
running using Designer and Mentor's Precision.

I have always liked Actel/MicroSemi as I get the impression they work 
that bit harder to satisfy their customer base.

Good luck,
Hans
www.ht-lab.com


But we got the version without
> the MCU.  I tried to get them to fork over one with the Smartfusion 2, 
> but I guess I didn't register on their radar.
> 
> I see JL is asking about them too.  Must be a full moon.
> 


Article: 160245
Subject: Re: Microsemi SmartFusion2 Field Upgrade
From: rickman <gnuarm@gmail.com>
Date: Thu, 17 Aug 2017 05:01:35 -0400
Links: << >>  << T >>  << A >>
HT-Lab wrote on 8/17/2017 4:54 AM:
> On 17/08/2017 07:49, rickman wrote:
>> Rob Gaddi wrote on 8/16/2017 2:16 PM:
>>> We're starting a new design, and I again find myself tempted by the
>>> Microsemi SmartFusion2 as combination FPGA/uC.  It's got a built-in ARM
>>> Cortex-M3, which is a simple dinky micro instead of some big honking A8
>>> application processor that you can't even get up and running without
>>> kilobytes of boot code.  The smallest, cheapest one is about $15 in small
>>> quantity with 64 kB of data memory and and 128 kB of application flash
>>> without having to touch the fabric resource.  So, cute chip.
>>>
>>> One of my concerns is field upgradability.  There are a couple app notes on
>>> implementing "Auto-Update Programming Recovery", which seems to be what I'm
>>> looking for.  You put down an external flash that holds your "Golden Image"
>>> of what you shipped with, and then an "Upgrade Image" and if the upgrade
>>> gets its wires crossed then it falls back.  Or that's the theory, anyway.
>>>
>>> Anyone have any experience with these devices to share for good or ill?
>>> Especially experience with the field upgrade mechanism.
>>
>> I've never looked that hard at the parts.  I like the idea of the MCU type
>> processor rather than the cell phone type processor, but when I had a use
>> for the part the price was a *lot* higher.
>>
>> I attended a Microsemi workshop for the chip and they gave us a board (or
>> maybe we paid a bit I don't recall).
>
> I also attended their Smartfusion workshop and got a really nice prototype
> board with an M2GLO25 for free. They gave us the option of either an IGLOO2
> and SmartFusion chip. Here is the board:
>
> https://www.microsemi.com/products/fpga-soc/design-resources/dev-kits/smartfusion2/future-creative-board

Yeah, that's the one.  They only had Igloo boards at the workshop and said 
they would get me one with the SmartFusion2.  But it never happened.  I 
emailed the person I was given contact info for but never got the 
replacement board.  I don't really have a particular need now.


> You might still be able to get it for free or for little money and it is
> ideal for hacking around with. I had no issues getting the board up and
> running using Designer and Mentor's Precision.
>
> I have always liked Actel/MicroSemi as I get the impression they work that
> bit harder to satisfy their customer base.

I've been using the Lattice stuff myself, but I don't see a big difference 
in support.  If they don't perceive you as a larger customer you only get so 
much support.

-- 

Rick C

Article: 160246
Subject: Re: Microsemi SmartFusion2 Field Upgrade
From: lasselangwadtchristensen@gmail.com
Date: Thu, 17 Aug 2017 10:16:35 -0700 (PDT)
Links: << >>  << T >>  << A >>
Den torsdag den 17. august 2017 kl. 08.50.03 UTC+2 skrev rickman:
> Rob Gaddi wrote on 8/16/2017 2:16 PM:
> > We're starting a new design, and I again find myself tempted by the
> > Microsemi SmartFusion2 as combination FPGA/uC.  It's got a built-in ARM
> > Cortex-M3, which is a simple dinky micro instead of some big honking A8
> > application processor that you can't even get up and running without
> > kilobytes of boot code.  The smallest, cheapest one is about $15 in small
> > quantity with 64 kB of data memory and and 128 kB of application flash
> > without having to touch the fabric resource.  So, cute chip.
> >
> > One of my concerns is field upgradability.  There are a couple app notes on
> > implementing "Auto-Update Programming Recovery", which seems to be what I'm
> > looking for.  You put down an external flash that holds your "Golden Image"
> > of what you shipped with, and then an "Upgrade Image" and if the upgrade
> > gets its wires crossed then it falls back.  Or that's the theory, anyway.
> >
> > Anyone have any experience with these devices to share for good or ill?
> > Especially experience with the field upgrade mechanism.
> 
> I've never looked that hard at the parts.  I like the idea of the MCU type 
> processor rather than the cell phone type processor, but when I had a use 
> for the part the price was a *lot* higher.
> 
> I attended a Microsemi workshop for the chip and they gave us a board (or 
> maybe we paid a bit I don't recall).  But we got the version without the 
> MCU.  I tried to get them to fork over one with the Smartfusion 2, but I 
> guess I didn't register on their radar.
> 
> I see JL is asking about them too.  Must be a full moon.

you didn't notice Rob's www.highlandtechnology.com signature? ;)


Article: 160247
Subject: Re: Microsemi FPGAs
From: Svenn Are Bjerkem <svenn.bjerkem@gmail.com>
Date: Thu, 17 Aug 2017 15:11:54 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Thursday, August 17, 2017 at 6:20:38 AM UTC+2, John Larkin wrote:
> Has anyone used the Microsemi SOCs, the SmartFusion2 FPGAs with an ARM
> Cortex M3 on chip?
> 
> How good/awful is the tool set? Any big likes or dislikes?

They use Synplify and Modelsim in a Microsemi Edition for synthesis and 
simulation. The tools are called from Libero SoC, their own 'wrapper' 
design flow tool. During the development of our product, the release of 
Libero SoC went from 11.0 to 11.7. Now 11.8 is out, and they are 
improving from version to version. For FPGA work both Linux and Windows 
are supported. For MCU work they rely on Eclipse, but some parts of the 
programming and debug features from Eclipse used to be Windows only. 
For any JTAG programming from Linux, you have to get the FlashPro5 
programming dongle. FlashPro4 was supported for neither FPGA nor MCU 
programming last time I looked into it a year ago.

Design entry is typically done in a block diagram editing tool at top 
level. I could fairly easily integrate my external VHDL code without 
much hassle, and that was an advantage as we were moving from Altera to 
Microsemi and got rid of the schematic capture done in Max+Plus.

The implementation of the FPGA fabric is a matter of clicking buttons 
in the GUI, but most of the process can be automated with Tcl. In 11.7 
there was still one process step which was GUI only. I nagged them 
about this for some years to get all process steps done by Tcl, but my 
volume was not large enough for them to listen.

We split the FPGA development and the MCU software between two 
engineers. Their version of Eclipse, called SoftConsole, used to be a 
bit too tightly coupled to the data provided by Libero SoC regarding 
allocating areas in the internal flash, but the separation into an 
export of a BSP from the FPGA side is improving. The SW guy needed to 
fire up the FPGA tool from time to time to fix some snags that happened 
because we shared the design on SVN.

Documentation for SW design when using IP modules from Microsemi is 
done with doxygen, so it was pretty easy to start writing bare-metal 
code in the MCU for the various IPs that we used. When the FPGA 
platform was correct regarding memory areas, the SW guy could do his 
daily work directly in SoftConsole (on Windows): programming the MCU 
code, downloading and debugging. The FPGA fabric can be programmed 
separately from the MCU area.

I found Libero SoC a bit less tedious than Vivado, but I had a learning 
curve.

> 
> They look like a pretty good deal for a medium FPGA with ARM.

I would agree, and due to the true flash storage of the configuration, 
there are no issues regarding how to store your bitfile on the board 
for a product. Just get the JTAG connections right and each programming 
is persistent. Instant on. I would use the device again on anything 
which the M3 can handle. It can run uClinux, but we opted for 
bare-metal as we wanted to use only the internal 256k flash area. I was 
hoping for a Cortex-A or Cortex-R CPU, but they didn't have that on the 
roadmap.

If you stick to devices which are covered by the Silver version of the 
license, you only need to register once a year for the free (as in 
beer) version of Libero SoC.

-- 
Svenn

Article: 160248
Subject: Re: Microsemi FPGAs
From: Richard Damon <Richard@Damon-Family.org>
Date: Sat, 19 Aug 2017 09:21:16 -0400
Links: << >>  << T >>  << A >>
On 8/17/17 12:20 AM, John Larkin wrote:
> Has anyone used the Microsemi SOCs, the SmartFusion2 FPGAs with an ARM
> Cortex M3 on chip?
> 
> How good/awful is the tool set? Any big likes or dislikes?
> 
> They look like a pretty good deal for a medium FPGA with ARM.
> 
> 

We have done a couple of projects with them, and the tools haven't been 
bad to work with.

You start with a top level block diagram in which you place a block for 
the processor. There is a 'System Builder' tool that builds and 
configures the processor block and support logic around it (like adding 
in additional peripherals that are provided that aren't hard logic in 
the processor), which becomes a piece of your design, and then you add 
other blocks to represent other parts of your design (built with HDL, 
provided cores, or other block diagrams)). You can also just put down 
the symbol for the core MPU and build the stuff around it yourself if 
you need something a bit non-standard.

The one thing that is a bit frustrating is that block diagrams are 
'auto-routed' and 'auto-placed' (auto-placing mostly on command), and 
the algorithms sometimes seem a bit strange. I find I tend to need to 
lock the major blocks so they don't go to strange places, and 
occasionally wish there was a similar option for the line.

The software side is Eclipse/GCC based and seems to run fairly cleanly. 
The only real issue I have seen is that the FPGA tools generate the 
Board Support Package files in a sub-directory of the FPGA design, and 
you need to copy those over to your Software Project directory as you 
hand them off between the team members.

Article: 160249
Subject: Re: sram
From: Richard Damon <Richard@Damon-Family.org>
Date: Sun, 20 Aug 2017 16:05:41 -0400
Links: << >>  << T >>  << A >>
On 8/15/17 5:59 PM, Brian Davis wrote:
> Richard Damon wrote:
>>
>> The 'Black Art' I was referring to was NOT datasheet min/maxs
>> but using strong/weak drive and bus loading to convert timing
>> that doesn't meet guaranteed performance
>>
> As I explained to rickman several posts ago, the Spartan-3
> I/O buffer base delay vs. IOSTANDARD/SLEW/DRIVE is fully
> characterized and published by Xilinx:

Looking at the datasheet for the Spartan-3 family, I see no minimum 
times published. There is a footnote that minimums can be gotten out of 
the timing analyzer. There are maximums fully specified out, including 
adders for various cases, allowing you to do some of the analysis early 
in the design, but to get minimum timing you need to first get the 
program and a license to use it (license may be free, but you need to 
give them enough information that they can contact you later asking 
about your usage).

Being available only in the program, and not truly "published", indicates 
to me some significant uncertainty in the numbers. Timing reports out of 
programs like this tend to be disclaimed to be valid only for THAT 
particular design and the particular operating conditions requested. You 
need to run the analysis over as many condition combinations they 
provide and hope (they never seem to promise it) that the important 
factors are at least monotonic between the data points so you can get 
real min/maxes.
> 
>>>
>>> Xilinx characterizes and publishes I/O buffer switching
>>> parameters vs. IOSTANDARD/SLEW/DRIVE settings; this
>>> information is both summarized in the datasheet and used
>>> in generating the timing reports, providing the base delay
>>> of the I/O buffer independent of any external capacitive loading.
>>>
> 
> If you don't want to rely on a guaranteed minimum delay
> between I/O buffer types at the FAST device corner, that's
> fine with me, but please stop with the baseless criticism.
> 
> -Brian
> 



