
Messages from 160050

Article: 160050
Subject: Re: Test Driven Design?
From: Tim Wescott <seemywebsite@myfooter.really>
Date: Wed, 17 May 2017 12:48:19 -0500
On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:

> On 5/17/2017 1:17 PM, Tim Wescott wrote:
>> On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:
>>
>>> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>>>> Anyone doing any test driven design for FPGA work?
>>>>
>>>> I've gone over to doing it almost universally for C++ development,
>>>> because It Just Works -- you lengthen the time to integration a bit,
>>>> but vastly shorten the actual integration time.
>>>>
>>>> I did a web search and didn't find it mentioned -- the traditional
>>>> "make a test bench" is part way there, but as presented in my
>>>> textbook*
>>>> doesn't impose a comprehensive suite of tests on each module.
>>>>
>>>> So is no one doing it, or does it have another name, or an equivalent
>>>> design process with a different name, or what?
>>>>
>>>> * "The Verilog Hardware Description Language", Thomas & Moorby,
>>>> Kluwer,
>>>> 1998.
>>>
>>> I'm not clear on all of the details of what defines "test driven
>>> design", but I believe I've been using that all along.  I've thought
>>> of this as bottom up development where the lower level code is written
>>> first *and thoroughly tested* before writing the next level of code.
>>>
>>> How does "test driven design" differ from this significantly?
>>
>> The big difference in the software world is that the tests are
>> automated and never retired.  There are generally test suites to make
>> the mechanics of testing easier.  Ideally, whenever you do a build you
>> run the entire unit-test suite fresh.  This means that when you tweak
>> some low-level function, it still gets tested.
>>
>> The other big difference, that's hard for one guy to do, is that if
>> you're going Full Agile you have one guy writing tests and another guy
>> writing "real" code.  Ideally they're equally good, and they switch
>> off. The idea is basically that more brains on the problem is better.
>>
>> If you look at the full description of TDD it looks like it'd be hard,
>> slow, and clunky, because the recommendation is to do things at a very
>> fine-grained level.  However, I've done it, and the process of adding
>> features to a function as you add tests to the bench goes very quickly.
>> The actual development of the bottom layer is a bit slower, but when
>> you go to put the pieces together they just fall into place.
> 
> I guess I'm still not picturing it.  I think the part I don't get is
> "adding features to a function".  To me the features would *be*
> functions that are written, tested and then added to next higher level
> code.  So I assume what you wrote applies to that next higher level.
> 
> I program in two languages, Forth and VHDL.  In Forth functions (called
> "words") are written at *very* low levels, often a word is a single line
> of code and nearly all the time no more than five.  Being very small a
> word is much easier to write although the organization can be tough to
> settle on.
> 
> In VHDL I typically don't decompose the code into such fine grains.  It
> is easy to write the code for the pieces, registers and logic.  The hard
> part is how they interconnect/interrelate.  Fine decomposition tends to
> obscure that rather than enhancing it.  So I write large blocks of code
> to be tested.   I guess in those cases features would be "added" rather
> than new modules being written for the new functionality.
> 
> I still write test benches for each module in VHDL.  Because there is a
> lot more work in writing and using a VHDL test bench than a Forth test
> word this also encourages larger (and fewer) modules.
> 
> Needless to say, I don't find much synergy between the two languages.

Part of what I'm looking for is a reading on whether it makes sense in 
the context of an HDL, and if so, how it makes sense in the context of an 
HDL (I'm using Verilog, because I'm slightly more familiar with it, but 
that's incidental).

In Really Pure TDD for Java, C, or C++, you start by writing a test in 
the absence of a function, just to see the compiler error out.  Then you 
write a function that does nothing.  Then (for instance), you write a 
test whose expected return value is "42", and an accompanying function 
that just returns 42.  Then you elaborate from there.
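
Sketched in Python purely for brevity (the function and test names here 
are invented, just to show the shape of that first cycle):

```python
# First TDD micro-cycle: the test is written first and pins the expected
# value; the function is then the simplest thing that makes it pass.

def answer():
    # Deliberately trivial -- elaborated only as new tests demand it.
    return 42

def test_answer_returns_42():
    assert answer() == 42

test_answer_returns_42()
print("test passed")
```

From there each new test forces a small elaboration of the function, and 
the whole suite is re-run on every change.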

It sounds really dippy (I was about as skeptical as can be when it was 
presented to me), but in a world where compilation is fast, there's very 
little speed penalty.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Article: 160051
Subject: Re: Configuration fault recovery
From: BobH <wanderingmetalhead.nospam.please@yahoo.com>
Date: Wed, 17 May 2017 10:56:15 -0700
On 05/17/2017 08:40 AM, Yannick Lamarre wrote:
> On Tuesday, May 16, 2017 at 5:59:27 PM UTC-4, BobH wrote:
>> On 05/16/2017 01:15 PM, Yannick Lamarre wrote:
>>> Hi all,
>>> I've been thinking about this problem for a while and shared it with a few colleagues, but no one has yet come up with an answer.
>>> For some configurations, an FPGA can be configured so that two different drivers are connected to the same line internally. A practical example would be two BUFGs driving the same line on a Spartan6.
>>> If those two drivers are driving different values in a CMOS process, they will connect both rails together on a low-impedance line. Obviously, this will cause damage to the chip.
>>
>> I don't think that the tool chain will let you do that. There are
>> several steps that should be able to catch it and error out. This is
>> assuming that you are using a "mature" tool chain.
>>
>> Try manually instantiating two drivers to the same clock line and run it
>> through the tools. It may disconnect one for you or it may just refuse
>> to complete. If it automagically disconnects one for you, it may take
>> some real digging in the log files to find it, but I think it will just
>> error out.
>>
>> BobH
> 
> Hi Bob,
> You are skipping the mental exercise here. What if some cosmic rays toggle the configuration bits so that the scenario happens? Highly possible in space. This is why there is a market for SEU controllers/monitors and the like. Now, back to the drawing board.
> 
You are correct, I was assuming it was a design flaw.

To your original question, I suspect that a rail to rail short through a 
couple of FETs would be very hard to detect in a generalized way from 
the current signature. When a large circuit like a major clock 
distribution changes state, you will get a significant current spike, 
probably not unlike what you would see at the beginning of the short 
circuit situation. With the short circuit, that current will persist 
until something craters (unless the drivers had some kind of foldback 
current limiting). That seems like it might be detectable, until you 
consider what would happen if something like a bunch of relatively 
static GPIO signals driving external loads (maybe optocouplers @ 20 mA 
each) transition from off to on simultaneously. The destruct current for 
the clock driver is probably less than the normal current signature in 
this case.

You MIGHT be able to make a current signature analysis work in highly 
specific cases, but false tripping would be a serious problem.

The power supply decoupling capacitors are going to make detecting fast 
current spikes difficult externally. You might be able to monitor the 
voltage drop across the power supply bond wires in the package or 
internal distribution system to estimate current flow without adding 
sense resistance as a way to sense current after the decoupling caps.

In a previous job, I worked on hot swap power controllers. These chips 
were supposed to deal with the inrush current of charging the bulk 
capacitance on a board as it switches on, but shut down if the current 
got too high or the inrush persisted too long. The only way we could 
prevent false tripping was to set the thresholds and delays a lot higher 
than you would expect. When they work, they work well. You can short out 
a 100 Amp 12 Volt rail with a pair of pliers, and it will switch off 
before the power supply over-currents and shuts the whole cabinet down.

I think detecting configuration changes would be better done through 
redundant LUTs or some similar method. You might even be able to 
implement that in an existing FPGA via the tool chain. This would not be 
the standard vendor type tool chain, but a specialized one. Developing 
this tool would be a good PhD project for someone.

This is pretty much speculation on my part, and I am not going to claim 
to be an expert on high rel stuff.

Good Luck,
Bob



Article: 160052
Subject: Re: Test Driven Design?
From: BobH <wanderingmetalhead.nospam.please@yahoo.com>
Date: Wed, 17 May 2017 11:05:02 -0700
On 05/16/2017 01:21 PM, Tim Wescott wrote:
> Anyone doing any test driven design for FPGA work?
> 
> I've gone over to doing it almost universally for C++ development,
> because It Just Works -- you lengthen the time to integration a bit, but
> vastly shorten the actual integration time.
> 
> I did a web search and didn't find it mentioned -- the traditional "make
> a test bench" is part way there, but as presented in my textbook* doesn't
> impose a comprehensive suite of tests on each module.
> 
> So is no one doing it, or does it have another name, or an equivalent
> design process with a different name, or what?
> 
> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
> 1998.
> 

Can you elaborate on "Test Driven Design" please? Is this some 
specialized design methodology, or a standard design methodology with 
extensive module testing, or something else completely?

thanks,
BobH

Article: 160053
Subject: Re: Test Driven Design?
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Wed, 17 May 2017 11:29:55 -0700
On 05/17/2017 10:48 AM, Tim Wescott wrote:
> On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:
>
>> On 5/17/2017 1:17 PM, Tim Wescott wrote:
>>> On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:
>>>
>>>> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>>>>> Anyone doing any test driven design for FPGA work?
>>>>>
>>>>> I've gone over to doing it almost universally for C++ development,
>>>>> because It Just Works -- you lengthen the time to integration a bit,
>>>>> but vastly shorten the actual integration time.
>>>>>
>>>>> I did a web search and didn't find it mentioned -- the traditional
>>>>> "make a test bench" is part way there, but as presented in my
>>>>> textbook*
>>>>> doesn't impose a comprehensive suite of tests on each module.
>>>>>
>>>>> So is no one doing it, or does it have another name, or an equivalent
>>>>> design process with a different name, or what?
>>>>>
>>>>> * "The Verilog Hardware Description Language", Thomas & Moorby,
>>>>> Kluwer,
>>>>> 1998.
>>>>
>>>> I'm not clear on all of the details of what defines "test driven
>>>> design", but I believe I've been using that all along.  I've thought
>>>> of this as bottom up development where the lower level code is written
>>>> first *and thoroughly tested* before writing the next level of code.
>>>>
>>>> How does "test driven design" differ from this significantly?
>>>
>>> The big difference in the software world is that the tests are
>>> automated and never retired.  There are generally test suites to make
>>> the mechanics of testing easier.  Ideally, whenever you do a build you
>>> run the entire unit-test suite fresh.  This means that when you tweak
>>> some low-level function, it still gets tested.
>>>
>>> The other big difference, that's hard for one guy to do, is that if
>>> you're going Full Agile you have one guy writing tests and another guy
>>> writing "real" code.  Ideally they're equally good, and they switch
>>> off. The idea is basically that more brains on the problem is better.
>>>
>>> If you look at the full description of TDD it looks like it'd be hard,
>>> slow, and clunky, because the recommendation is to do things at a very
>>> fine-grained level.  However, I've done it, and the process of adding
>>> features to a function as you add tests to the bench goes very quickly.
>>> The actual development of the bottom layer is a bit slower, but when
>>> you go to put the pieces together they just fall into place.
>>
>> I guess I'm still not picturing it.  I think the part I don't get is
>> "adding features to a function".  To me the features would *be*
>> functions that are written, tested and then added to next higher level
>> code.  So I assume what you wrote applies to that next higher level.
>>
>> I program in two languages, Forth and VHDL.  In Forth functions (called
>> "words") are written at *very* low levels, often a word is a single line
>> of code and nearly all the time no more than five.  Being very small a
>> word is much easier to write although the organization can be tough to
>> settle on.
>>
>> In VHDL I typically don't decompose the code into such fine grains.  It
>> is easy to write the code for the pieces, registers and logic.  The hard
>> part is how they interconnect/interrelate.  Fine decomposition tends to
>> obscure that rather than enhancing it.  So I write large blocks of code
>> to be tested.   I guess in those cases features would be "added" rather
>> than new modules being written for the new functionality.
>>
>> I still write test benches for each module in VHDL.  Because there is a
>> lot more work in writing and using a VHDL test bench than a Forth test
>> word this also encourages larger (and fewer) modules.
>>
>> Needless to say, I don't find much synergy between the two languages.
>
> Part of what I'm looking for is a reading on whether it makes sense in
> the context of an HDL, and if so, how it makes sense in the context of an
> HDL (I'm using Verilog, because I'm slightly more familiar with it, but
> that's incidental).
>
> In Really Pure TDD for Java, C, or C++, you start by writing a test in
> the absence of a function, just to see the compiler error out.  Then you
> write a function that does nothing.  Then (for instance), you write a
> test whose expected return value is "42", and an accompanying function
> that just returns 42.  Then you elaborate from there.
>
> It sounds really dippy (I was about as skeptical as can be when it was
> presented to me), but in a world where compilation is fast, there's very
> little speed penalty.
>

One project I've seen for actual full-scale TDD in an HDL context, 
complete with continuous integration of regression tests, etc., is 
VUnit.  It combines HDL stub code with a Python wrapper to automate the 
running of lots of little tests, rather than big monolithic tests that 
keel over and die on the first error instead of reporting them all.
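
The skeleton of a VUnit run script is only a few lines of Python.  This 
is a hedged sketch from memory (the library name and source glob are 
invented, and it needs a supported simulator such as GHDL on the path):

```python
# Hypothetical minimal VUnit run script (run.py); paths are examples.
from vunit import VUnit

vu = VUnit.from_argv()             # picks up simulator/CLI options
lib = vu.add_library("lib")        # a working library for the sources
lib.add_source_files("src/*.vhd")  # DUTs plus their *_tb test benches
vu.main()                          # runs every test, reports all failures
```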

To be honest, my personal attempts to use it have been pretty 
unsuccessful; it's non-trivial to get the environment set up and 
working.  But we're using it on the VHDL-2017 IEEE package sources to do 
TDD there and when someone else was willing to get everything configured 
(literally the guy who wrote it) he managed to get it all up and working.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 160054
Subject: Re: Test Driven Design?
From: Tim Wescott <seemywebsite@myfooter.really>
Date: Wed, 17 May 2017 13:33:10 -0500
On Wed, 17 May 2017 11:05:02 -0700, BobH wrote:

> On 05/16/2017 01:21 PM, Tim Wescott wrote:
>> Anyone doing any test driven design for FPGA work?
>> 
>> I've gone over to doing it almost universally for C++ development,
>> because It Just Works -- you lengthen the time to integration a bit,
>> but vastly shorten the actual integration time.
>> 
>> I did a web search and didn't find it mentioned -- the traditional
>> "make a test bench" is part way there, but as presented in my textbook*
>> doesn't impose a comprehensive suite of tests on each module.
>> 
>> So is no one doing it, or does it have another name, or an equivalent
>> design process with a different name, or what?
>> 
>> * "The Verilog Hardware Description Language", Thomas & Moorby, Kluwer,
>> 1998.
>> 
>> 
> Can you elaborate on "Test Driven Design" please? Is this some
> specialized design methodology, or a standard design methodology with
> extensive module testing, or something else completely?

It is a specific software design methodology under the Agile development 
umbrella.

There's a Wikipedia article on it, which is probably good (I'm just 
trusting them this time):

https://en.wikipedia.org/wiki/Test-driven_development

It's basically a bit of structure on top of some common-sense 
methodologies (i.e., design from the top down, then code from the bottom 
up, and test the hell out of each bit as you code it).

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Article: 160055
Subject: Re: Test Driven Design?
From: Tim Wescott <seemywebsite@myfooter.really>
Date: Wed, 17 May 2017 13:36:23 -0500
On Wed, 17 May 2017 11:29:55 -0700, Rob Gaddi wrote:

> On 05/17/2017 10:48 AM, Tim Wescott wrote:
>> On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:
>>
>>> On 5/17/2017 1:17 PM, Tim Wescott wrote:
>>>> On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:
>>>>
>>>>> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>>>>>> Anyone doing any test driven design for FPGA work?
>>>>>>
>>>>>> I've gone over to doing it almost universally for C++ development,
>>>>>> because It Just Works -- you lengthen the time to integration a
>>>>>> bit, but vastly shorten the actual integration time.
>>>>>>
>>>>>> I did a web search and didn't find it mentioned -- the traditional
>>>>>> "make a test bench" is part way there, but as presented in my
>>>>>> textbook*
>>>>>> doesn't impose a comprehensive suite of tests on each module.
>>>>>>
>>>>>> So is no one doing it, or does it have another name, or an
>>>>>> equivalent design process with a different name, or what?
>>>>>>
>>>>>> * "The Verilog Hardware Description Language", Thomas & Moorby,
>>>>>> Kluwer,
>>>>>> 1998.
>>>>>
>>>>> I'm not clear on all of the details of what defines "test driven
>>>>> design", but I believe I've been using that all along.  I've thought
>>>>> of this as bottom up development where the lower level code is
>>>>> written first *and thoroughly tested* before writing the next level
>>>>> of code.
>>>>>
>>>>> How does "test driven design" differ from this significantly?
>>>>
>>>> The big difference in the software world is that the tests are
>>>> automated and never retired.  There are generally test suites to make
>>>> the mechanics of testing easier.  Ideally, whenever you do a build
>>>> you run the entire unit-test suite fresh.  This means that when you
>>>> tweak some low-level function, it still gets tested.
>>>>
>>>> The other big difference, that's hard for one guy to do, is that if
>>>> you're going Full Agile you have one guy writing tests and another
>>>> guy writing "real" code.  Ideally they're equally good, and they
>>>> switch off. The idea is basically that more brains on the problem is
>>>> better.
>>>>
>>>> If you look at the full description of TDD it looks like it'd be
>>>> hard, slow, and clunky, because the recommendation is to do things at
>>>> a very fine-grained level.  However, I've done it, and the process of
>>>> adding features to a function as you add tests to the bench goes very
>>>> quickly.
>>>> The actual development of the bottom layer is a bit slower, but when
>>>> you go to put the pieces together they just fall into place.
>>>
>>> I guess I'm still not picturing it.  I think the part I don't get is
>>> "adding features to a function".  To me the features would *be*
>>> functions that are written, tested and then added to next higher level
>>> code.  So I assume what you wrote applies to that next higher level.
>>>
>>> I program in two languages, Forth and VHDL.  In Forth functions
>>> (called "words") are written at *very* low levels, often a word is a
>>> single line of code and nearly all the time no more than five.  Being
>>> very small a word is much easier to write although the organization
>>> can be tough to settle on.
>>>
>>> In VHDL I typically don't decompose the code into such fine grains. 
>>> It is easy to write the code for the pieces, registers and logic.  The
>>> hard part is how they interconnect/interrelate.  Fine decomposition
>>> tends to obscure that rather than enhancing it.  So I write large
>>> blocks of code to be tested.   I guess in those cases features would
>>> be "added" rather than new modules being written for the new
>>> functionality.
>>>
>>> I still write test benches for each module in VHDL.  Because there is
>>> a lot more work in writing and using a VHDL test bench than a Forth test
>>> word this also encourages larger (and fewer) modules.
>>>
>>> Needless to say, I don't find much synergy between the two languages.
>>
>> Part of what I'm looking for is a reading on whether it makes sense in
>> the context of an HDL, and if so, how it makes sense in the context of
>> an HDL (I'm using Verilog, because I'm slightly more familiar with it,
>> but that's incidental).
>>
>> In Really Pure TDD for Java, C, or C++, you start by writing a test in
>> the absence of a function, just to see the compiler error out.  Then
>> you write a function that does nothing.  Then (for instance), you write
>> a test whose expected return value is "42", and an accompanying
>> function that just returns 42.  Then you elaborate from there.
>>
>> It sounds really dippy (I was about as skeptical as can be when it was
>> presented to me), but in a world where compilation is fast, there's
>> very little speed penalty.
>>
>>
> One project I've seen for this in an HDL context in terms of actual
> full-scale TDD, complete with continuous integration of regression
> tests, etc is VUnit.  It combines HDL stub code with a Python wrapper in
> order to automate the running of lots of little tests, rather than big
> monolithic tests that keel over and die on the first error rather than
> reporting them all out.
> 
> To be honest, my personal attempts to use it have been pretty
> unsuccessful; it's non-trivial to get the environment set up and
> working.  But we're using it on the VHDL-2017 IEEE package sources to do
> TDD there and when someone else was willing to get everything configured
> (literally the guy who wrote it) he managed to get it all up and
> working.

Yup.  And when the guy who wrote the software is setting it up and 
configuring it, you just KNOW that it's got to be easy for ordinary 
mortals.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Article: 160056
Subject: Re: Test Driven Design?
From: rickman <gnuarm@gmail.com>
Date: Wed, 17 May 2017 15:58:11 -0400
On 5/17/2017 1:48 PM, Tim Wescott wrote:
> On Wed, 17 May 2017 13:39:55 -0400, rickman wrote:
>
>> On 5/17/2017 1:17 PM, Tim Wescott wrote:
>>> On Wed, 17 May 2017 11:47:10 -0400, rickman wrote:
>>>
>>>> On 5/16/2017 4:21 PM, Tim Wescott wrote:
>>>>> Anyone doing any test driven design for FPGA work?
>>>>>
>>>>> I've gone over to doing it almost universally for C++ development,
>>>>> because It Just Works -- you lengthen the time to integration a bit,
>>>>> but vastly shorten the actual integration time.
>>>>>
>>>>> I did a web search and didn't find it mentioned -- the traditional
>>>>> "make a test bench" is part way there, but as presented in my
>>>>> textbook*
>>>>> doesn't impose a comprehensive suite of tests on each module.
>>>>>
>>>>> So is no one doing it, or does it have another name, or an equivalent
>>>>> design process with a different name, or what?
>>>>>
>>>>> * "The Verilog Hardware Description Language", Thomas & Moorby,
>>>>> Kluwer,
>>>>> 1998.
>>>>
>>>> I'm not clear on all of the details of what defines "test driven
>>>> design", but I believe I've been using that all along.  I've thought
>>>> of this as bottom up development where the lower level code is written
>>>> first *and thoroughly tested* before writing the next level of code.
>>>>
>>>> How does "test driven design" differ from this significantly?
>>>
>>> The big difference in the software world is that the tests are
>>> automated and never retired.  There are generally test suites to make
>>> the mechanics of testing easier.  Ideally, whenever you do a build you
>>> run the entire unit-test suite fresh.  This means that when you tweak
>>> some low-level function, it still gets tested.
>>>
>>> The other big difference, that's hard for one guy to do, is that if
>>> you're going Full Agile you have one guy writing tests and another guy
>>> writing "real" code.  Ideally they're equally good, and they switch
>>> off. The idea is basically that more brains on the problem is better.
>>>
>>> If you look at the full description of TDD it looks like it'd be hard,
>>> slow, and clunky, because the recommendation is to do things at a very
>>> fine-grained level.  However, I've done it, and the process of adding
>>> features to a function as you add tests to the bench goes very quickly.
>>> The actual development of the bottom layer is a bit slower, but when
>>> you go to put the pieces together they just fall into place.
>>
>> I guess I'm still not picturing it.  I think the part I don't get is
>> "adding features to a function".  To me the features would *be*
>> functions that are written, tested and then added to next higher level
>> code.  So I assume what you wrote applies to that next higher level.
>>
>> I program in two languages, Forth and VHDL.  In Forth functions (called
>> "words") are written at *very* low levels, often a word is a single line
>> of code and nearly all the time no more than five.  Being very small a
>> word is much easier to write although the organization can be tough to
>> settle on.
>>
>> In VHDL I typically don't decompose the code into such fine grains.  It
>> is easy to write the code for the pieces, registers and logic.  The hard
>> part is how they interconnect/interrelate.  Fine decomposition tends to
>> obscure that rather than enhancing it.  So I write large blocks of code
>> to be tested.   I guess in those cases features would be "added" rather
>> than new modules being written for the new functionality.
>>
>> I still write test benches for each module in VHDL.  Because there is a
>> lot more work in writing and using a VHDL test bench than a Forth test
>> word this also encourages larger (and fewer) modules.
>>
>> Needless to say, I don't find much synergy between the two languages.
>
> Part of what I'm looking for is a reading on whether it makes sense in
> the context of an HDL, and if so, how it makes sense in the context of an
> HDL (I'm using Verilog, because I'm slightly more familiar with it, but
> that's incidental).
>
> In Really Pure TDD for Java, C, or C++, you start by writing a test in
> the absence of a function, just to see the compiler error out.  Then you
> write a function that does nothing.  Then (for instance), you write a
> test whose expected return value is "42", and an accompanying function
> that just returns 42.  Then you elaborate from there.
>
> It sounds really dippy (I was about as skeptical as can be when it was
> presented to me), but in a world where compilation is fast, there's very
> little speed penalty.

I can't think of anything about HDL (VHDL in my case as I am not nearly 
as familiar with Verilog) that would be a hindrance for this.  The 
verification would be done in a simulator.  The only issue I find is 
that simulators typically require you to set up a new workspace/project 
for every separate simulation with a whole sub-directory tree below it. 
Sometimes they want to put the source in one of the branches rather than 
moving all the temporary files out of the way.  So one evolving 
simulation would be easier than many module simulations.

-- 

Rick C

Article: 160057
Subject: Spartan 6 Digital controlled oscillator
From: john tra <jontravel24816@gmail.com>
Date: Wed, 17 May 2017 13:42:34 -0700 (PDT)
Hello,

        What is the best way to implement a 30 MHz clock generation 
circuit that can be dynamically controlled to provide fine frequency 
offsets in a Spartan 6?  The clock is to be used internally and output 
via a pin.  Would a DCM provide the functionality, and what would the 
minimum frequency increment be?

Thanks
John

Article: 160058
Subject: Re: Spartan 6 Digital controlled oscillator
From: Gabor <nospam@nospam.com>
Date: Wed, 17 May 2017 16:52:19 -0400
On Wednesday, 5/17/2017 4:42 PM, john tra wrote:
> Hello,
> 
>          What is the best way to implement a 30 MHz clock generation circuit that can be dynamically controlled to provide fine frequency offsets in a Spartan 6, the clock is to be used internally and output via a pin? Would a DCM provide the functionality and what would the minimum frequency increment be?
> 
> Thanks
> John
> 

Spartan 6 DCMs are not good for this.  I don't remember if they have a 
dynamic reconfiguration port, but even so there is no fractional divide 
capability, so you're stuck with simple (small) integer ratios of the 
input clock frequency.  Not only that, reconfiguration (even dynamic) 
requires stopping the clock for some period and allowing re-lock.

In 7-series parts, including Artix-7, the MMCM provides a fine phase 
shift that wraps back to zero rather than capping out at some maximum 
angle.  It can be used to vary the output frequency over a small range 
without reprogramming the multiplier/divider of the frequency generator.
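
If I recall the 7-series clocking guide correctly, each fine phase step 
moves the edge by 1/56 of a VCO period, so the achievable pull can be 
estimated roughly like this (all figures below are example assumptions, 
not numbers from this thread):

```python
# Rough estimate of frequency offset from continuously stepping the
# MMCM fine phase shift. Assumes each fine step = 1/56 of a VCO period
# (per the 7-series clocking guide, if memory serves).
f_vco = 1.2e9                     # assumed VCO frequency, Hz
t_step = 1.0 / f_vco / 56.0       # time the edge moves per step, s
step_rate = 1.0e6                 # assumed phase steps issued per second
frac_offset = t_step * step_rate  # fractional frequency change
f_out = 30.0e6                    # nominal output clock, Hz
print(f_out * frac_offset)        # offset in Hz; about 446 Hz here
```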

Here's a thread on the Xilinx forums going over the details of this 
approach:

https://forums.xilinx.com/t5/7-Series-FPGAs/MMCM-with-smoothly-varying-output-frequency/td-p/678630


-- 
Gabor

Article: 160059
Subject: Re: Test Driven Design?
From: Theo Markettos <theom+news@chiark.greenend.org.uk>
Date: 17 May 2017 22:17:02 +0100 (BST)
rickman <gnuarm@gmail.com> wrote:
> I can't think of anything about HDL (VHDL in my case as I am not nearly 
> as familiar with Verilog) that would be a hindrance for this.  The 
> verification would be done in a simulator.

I think one awkwardness with Verilog (and I think VHDL) is the nature of an
'output'.  To 'output' 42 from a module typically requires handshaking
signals, which you have to test at the same time as the data.  Getting the
data right but the handshaking wrong is a serious bug.  What would be a
simple test in software suddenly requires pattern matching a state machine
(and maybe fuzzing its inputs).

In C the control flow is always right - your module always returns 42, it
never returns 42,42,42 or X,42,X.  HDLs like Bluespec decouple the semantic
content and take care of the control flow, but in Verilog you have to do all
of it (and test it) by hand.
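Theo's point can be sketched in a few lines of Python: model the port as a per-cycle (valid, data) trace with a valid-only handshake (no ready/backpressure, which a real interface would add), and check the accepted transfers, not just the values seen on the wire:

```python
# A minimal sketch of the handshake problem: checking an HDL-style output
# means checking the handshake, not just the value.  Model a module's
# output port as a per-cycle list of (valid, data) pairs and compare the
# sequence of *accepted* transfers against the golden values.

def transfers(trace):
    """Extract the data beats a (valid, data) trace actually delivers."""
    return [data for valid, data in trace if valid]

golden = [42]

ok      = [(False, 0), (True, 42), (False, 0)]   # one clean beat of 42
stuck   = [(True, 42), (True, 42), (True, 42)]   # data right, handshake wrong
garbage = [(True, 7),  (False, 0), (False, 0)]   # handshake right, data wrong

assert transfers(ok) == golden
assert transfers(stuck) != golden    # "42,42,42" is a real bug
assert transfers(garbage) != golden
print("handshake-aware checks pass")
```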

Theo

Article: 160060
Subject: Re: Configuration fault recovery
From: Theo Markettos <theom+news@chiark.greenend.org.uk>
Date: 17 May 2017 23:42:27 +0100 (BST)
Links: << >>  << T >>  << A >>
Yannick Lamarre <yan.lamarre@gmail.com> wrote:
> If those two drivers are driving a different value in a CMOS process, it will connect both rails together on a low impedance line. Obviously, this will cause damage to the chip.
> Now the question is: How long can it stay in this state before it breaks?
> An easier starter question: What is likely to break first and how?

Assuming you managed to defeat all the protections and turn on both
transistors, I don't think it will be that bad.
The transistors are sized such that they can achieve a suitable slew on the
capacitance they will have to deal with.  It might be a long wire, but
on-chip the capacitance will be fairly small (guess: single pF or less)

Applying a simple T=RC with 1pF and a time constant of 1ns, the resistor is
1K.  Short two of those in series and you have 2K across the power rail.
If the rail is 1.2v, that's 600uA, or 720uW.

I don't think anything is going to cook with that.
Maybe it would be bad if you managed to short a thousand of them, but
it would take some effort to procure the cosmic rays.
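The arithmetic above, spelled out, with the 1 pF / 1 ns figures kept as the stated guesses they are:

```python
# Reproducing the estimate: T = RC gives the effective on-resistance of
# one driver, then Ohm's law gives the contention current and power.

C = 1e-12          # assumed driver load, farads (guess)
T = 1e-9           # assumed slew time constant, seconds (guess)
R = T / C          # effective on-resistance of one driver: 1 kOhm

V = 1.2            # core rail, volts
R_short = 2 * R    # two fighting drivers in series across the rail
I = V / R_short    # 600 uA
P = V * I          # 720 uW

print(f"R per driver = {R:.0f} ohm, I = {I * 1e6:.0f} uA, P = {P * 1e6:.0f} uW")
# Even a thousand simultaneous contentions stay under a watt:
print(f"1000 shorts: {1000 * P * 1e3:.0f} mW")
```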

Theo

Article: 160061
Subject: Re: Test Driven Design?
From: rickman <gnuarm@gmail.com>
Date: Wed, 17 May 2017 19:48:06 -0400
Links: << >>  << T >>  << A >>
On 5/17/2017 5:17 PM, Theo Markettos wrote:
> rickman <gnuarm@gmail.com> wrote:
>> I can't think of anything about HDL (VHDL in my case as I am not nearly
>> as familiar with Verilog) that would be a hindrance for this.  The
>> verification would be done in a simulator.
>
> I think one awkwardness with Verilog (and I think VHDL) is the nature of an
> 'output'.  To 'output' 42 from a module typically requires handshaking
> signals, which you have to test at the same time as the data.  Getting the
> data right but the handshaking wrong is a serious bug.  What would be a
> simple test in software suddenly requires pattern matching a state machine
> (and maybe fuzzing its inputs).
>
> In C the control flow is always right - your module always returns 42, it
> never returns 42,42,42 or X,42,X.  HDLs like Bluespec decouple the semantic
> content and take care of the control flow, but in Verilog you have to do all
> of it (and test it) by hand.

I don't agree this is an issue.  If the module returns specific data 
timed to the inputs like a C function then it will have handshakes, but 
that is part of the requirement and *must* be tested.  In fact, I could 
see the handshake requirement being in place before the data 
requirement.  Or it is not uncommon to have control and timing modules 
that don't process any data.

Maybe you are describing something that is real and I'm just glossing 
over it.  But I think checking handshakes is something that would be 
solved once and reused across modules once done.  I know I've written 
plenty of test code like that before, I just didn't think to make it 
general purpose.  That would be a big benefit to this sort of testing, 
making the test benches modular so pieces can be reused.  I tend to use 
the module under test to test itself a lot.  A UART transmitter tests a 
UART receiver, an IRIG-B generator tests the IRIG-B receiver (from the 
same design).

I guess this would be a real learning experience to come up with 
efficient ways to develop the test code the same way the module under 
test is developed.  Right now I think I spend as much time on the test 
bench as I do the module.
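The UART loopback idea, reduced to a pure-function toy in Python (a behavioral 8N1 framer rather than real HDL, so the shape of the test is visible without a simulator):

```python
# "Module under test tests itself": a pure-function model of an 8N1 UART
# framer, where the transmitter's output exercises the receiver.

def uart_tx(byte):
    """Frame one byte: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    bits = [0]
    bits += [(byte >> i) & 1 for i in range(8)]
    bits.append(1)
    return bits

def uart_rx(bits):
    """Decode one 8N1 frame; raise on framing errors."""
    if len(bits) != 10 or bits[0] != 0 or bits[9] != 1:
        raise ValueError("framing error")
    return sum(b << i for i, b in enumerate(bits[1:9]))

# Loopback: every byte must survive the tx -> rx round trip.
for byte in range(256):
    assert uart_rx(uart_tx(byte)) == byte
print("loopback OK")
```

The usual caveat applies to any loopback: a bug mirrored in both ends (say, MSB-first framing in both transmitter and receiver) passes cleanly, which is exactly the "identically buggy" concern raised elsewhere in this thread.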

-- 

Rick C

Article: 160062
Subject: Re: Test Driven Design?
From: Tim Wescott <tim@seemywebsite.really>
Date: Thu, 18 May 2017 00:07:07 -0500
Links: << >>  << T >>  << A >>
On Wed, 17 May 2017 00:47:39 +0100, Theo Markettos wrote:

> Tim Wescott <tim@seemywebsite.really> wrote:
>> Anyone doing any test driven design for FPGA work?
>> 
>> I've gone over to doing it almost universally for C++ development,
>> because It Just Works -- you lengthen the time to integration a bit,
>> but vastly shorten the actual integration time.
>> 
>> I did a web search and didn't find it mentioned -- the traditional
>> "make a test bench" is part way there, but as presented in my textbook*
>> doesn't impose a comprehensive suite of tests on each module.
>> 
>> So is no one doing it, or does it have another name, or an equivalent
>> design process with a different name, or what?
> 
> We do it.  We have an equivalence checker that fuzzes random inputs to
> both the system and an executable 'golden model' of the system, looking
> for discrepancies.  If found, it'll then reduce down to a minimal
> example.
> 
> In particular this is very handy because running the test cases is then
> synthesisable: so we can run the tests on FPGA rather than on a
> simulator.
> 
> Our paper has more details and the code is open source:
> https://www.cl.cam.ac.uk/research/security/ctsrd/pdfs/201509-memocode2015-bluecheck.pdf
> 

So, you have two separate implementations of the system -- how do you 
know that they aren't both identically buggy?

Or is it that one is carefully constructed to be clear and easy to 
> understand (and therefore review) while the other is constructed to 
optimize over whatever constraints you want (size, speed, etc.)?

-- 
www.wescottdesign.com

Article: 160063
Subject: Re: Test Driven Design?
From: Theo Markettos <theom+news@chiark.greenend.org.uk>
Date: 18 May 2017 14:48:12 +0100 (BST)
Links: << >>  << T >>  << A >>
Tim Wescott <tim@seemywebsite.really> wrote:
> So, you have two separate implementations of the system -- how do you 
> know that they aren't both identically buggy?

Is that the problem with any testing framework?
Quis custodiet ipsos custodes?
Who tests the tests?

> Or is it that one is carefully constructed to be clear and easy to 
> understand (and therefore review) while the other is constructed to 
> optimize over whatever constraints you want (size, speed, etc.)?

Essentially that.  You can write a functionally correct but slow
implementation (completely unpipelined, for instance).  You can write an
implementation that relies on things that aren't available in hardware
(a+b*c is easy for the simulator to check, but the hardware implementation
in IEEE floating point is somewhat more complex).  You can also write high
level checks that don't know about implementation (if I enqueue E times and
dequeue D times to this FIFO, the current fill should always be E-D).

It helps if they're written by different people - eg we have 3
implementations of the ISA (hardware, emulator, formal model, plus the spec
and the test suite) that are used to shake out ambiguities: specify first,
write tests, three people implement without having seen the tests, see if
they differ.  Fix the problems, write tests to cover the corner cases. 
Rinse and repeat.
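The FIFO invariant mentioned above can be written as a few lines of property-based fuzzing in Python; the Fifo class here is an invented stand-in for the implementation under test, and the deque is the trivially-correct golden model:

```python
# Fuzz random enqueue/dequeue operations and assert fill == E - D,
# comparing a "DUT" model against a golden model.  In real use the DUT
# side would drive the HDL through a simulator.

import random
from collections import deque

class Fifo:                      # stand-in for the implementation under test
    def __init__(self, depth):
        self.mem, self.depth = [], depth
    def enq(self, x):
        if len(self.mem) < self.depth:
            self.mem.append(x)
            return True          # accepted
        return False             # full, rejected
    def deq(self):
        return self.mem.pop(0) if self.mem else None
    def fill(self):
        return len(self.mem)

random.seed(1234)                # deterministic fuzz run
dut, golden = Fifo(depth=8), deque()
enqueues = dequeues = 0
for step in range(10_000):
    if random.random() < 0.5:
        if dut.enq(step):
            golden.append(step)
            enqueues += 1
    else:
        got = dut.deq()
        if got is not None:
            assert got == golden.popleft()   # data equivalence
            dequeues += 1
assert dut.fill() == enqueues - dequeues == len(golden)
print("10000 fuzz steps OK, fill =", dut.fill())
```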

Theo

Article: 160064
Subject: Re: Spartan 6 Digital controlled oscillator
From: Gabor <nospam@nospam.com>
Date: Thu, 18 May 2017 10:17:05 -0400
Links: << >>  << T >>  << A >>
On Wednesday, 5/17/2017 4:52 PM, Gabor wrote:
> On Wednesday, 5/17/2017 4:42 PM, john tra wrote:
>> Hello,
>>
>>          What is the best way to implement a 30 MHz clock generation 
>> circuit that can be dynamically controlled to provide fine frequency 
>> offsets in a Spartan 6, the clock is to be used internally and output 
>> via a pin? Would a DCM provide the functionality and what would the 
>> minimum frequency increment be?
>>
>> Thanks
>> John
>>
> 
> Spartan 6 DCMs are not good for this.  I don't remember if they have a 
> dynamic reconfiguration port, but even so there is no fractional divide 
> capability, so you're stuck with simple (small) integer ratios of the 
> input clock frequency.  Not only that, reconfiguration (even dynamic) 
> requires stopping the clock for some period and allowing re-lock.
> 
> In 7-series parts, including Artix-7, the MMCM provides a fine phase 
> shift that wraps back to zero rather than capping out at some max angle. 
>   It can be used to vary the output frequency over a small range without 
> reprogramming the multiplier/divider of the frequency generator.
> 
> Here's a thread on the Xilinx forums going over the details of this 
> approach:
> 
> https://forums.xilinx.com/t5/7-Series-FPGAs/MMCM-with-smoothly-varying-output-frequency/td-p/678630 
> 
> 

Looking back at that same thread, I see there was a similar solution for
Virtex 2 PRO using 2 DCMs.  Perhaps this approach could be used in
Spartan 6.  Also if you really wanted a broader range of frequency
control, the dual DCM approach could be used to keep the clock running
while one of the DCMs was reprogrammed using the DRP (if it exists in
Spartan 6).

-- 
Gabor

Article: 160065
Subject: Re: Test Driven Design?
From: Tim Wescott <tim@seemywebsite.really>
Date: Thu, 18 May 2017 09:22:38 -0500
Links: << >>  << T >>  << A >>
On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:

> Tim Wescott <tim@seemywebsite.really> wrote:
>> So, you have two separate implementations of the system -- how do you
>> know that they aren't both identically buggy?
> 
> Is that the problem with any testing framework?
> Quis custodiet ipsos custodes?
> Who tests the tests?
> 
>> Or is it that one is carefully constructed to be clear and easy to
>> understand (and therefore review) while the other is constructed to
>> optimize over whatever constraints you want (size, speed, etc.)?
> 
> Essentially that.  You can write a functionally correct but slow
> implementation (completely unpipelined, for instance).  You can write an
> implementation that relies on things that aren't available in hardware
> (a+b*c is easy for the simulator to check, but the hardware
> implementation in IEEE floating point is somewhat more complex).  You
> can also write high level checks that don't know about implementation
> (if I enqueue E times and dequeue D times to this FIFO, the current fill
> should always be E-D)
> 
> It helps if they're written by different people - eg we have 3
> implementations of the ISA (hardware, emulator, formal model, plus the
> spec and the test suite) that are used to shake out ambiguities: specify
> first, write tests, three people implement without having seen the
> tests, see if they differ.  Fix the problems, write tests to cover the
> corner cases. Rinse and repeat.
> 
> Theo

It's a bit different on the software side -- there's a lot more of "poke 
it THIS way, see if it squeaks THAT way".  Possibly the biggest value is 
that (in software at least, but I suspect in hardware) it encourages you 
to keep any stateful information simple, just to make the tests simple -- 
and pure functions are, of course, the easiest.

I need to think about how this applies to my baby-steps project I'm 
working on, if at all.

-- 
www.wescottdesign.com

Article: 160066
Subject: Re: Test Driven Design?
From: lasselangwadtchristensen@gmail.com
Date: Thu, 18 May 2017 09:08:28 -0700 (PDT)
Links: << >>  << T >>  << A >>
Den torsdag den 18. maj 2017 kl. 15.48.19 UTC+2 skrev Theo Markettos:
> Tim Wescott <tim@seemywebsite.really> wrote:
> > So, you have two separate implementations of the system -- how do you 
> > know that they aren't both identically buggy?
> 
> Is that the problem with any testing framework?
> Quis custodiet ipsos custodes?
> Who tests the tests?

the test?

if two different implementations agree, it adds a bit more confidence than an 
implementation agreeing with itself. 




Article: 160067
Subject: Re: Test Driven Design?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Thu, 18 May 2017 17:14:50 +0100
Links: << >>  << T >>  << A >>
On 18/05/17 15:22, Tim Wescott wrote:
> On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:
>
>> Tim Wescott <tim@seemywebsite.really> wrote:
>>> So, you have two separate implementations of the system -- how do you
>>> know that they aren't both identically buggy?
>>
>> Is that the problem with any testing framework?
>> Quis custodiet ipsos custodes?
>> Who tests the tests?
>>
>>> Or is it that one is carefully constructed to be clear and easy to
>>> understand (and therefore review) while the other is constructed to
>>> optimize over whatever constraints you want (size, speed, etc.)?
>>
>> Essentially that.  You can write a functionally correct but slow
>> implementation (completely unpipelined, for instance).  You can write an
>> implementation that relies on things that aren't available in hardware
>> (a+b*c is easy for the simulator to check, but the hardware
>> implementation in IEEE floating point is somewhat more complex).  You
>> can also write high level checks that don't know about implementation
>> (if I enqueue E times and dequeue D times to this FIFO, the current fill
>> should always be E-D)
>>
>> It helps if they're written by different people - eg we have 3
>> implementations of the ISA (hardware, emulator, formal model, plus the
>> spec and the test suite) that are used to shake out ambiguities: specify
>> first, write tests, three people implement without having seen the
>> tests, see if they differ.  Fix the problems, write tests to cover the
>> corner cases. Rinse and repeat.
>>
>> Theo
>
> It's a bit different on the software side -- there's a lot more of "poke
> it THIS way, see if it squeaks THAT way".  Possibly the biggest value is
> that (in software at least, but I suspect in hardware) it encourages you
> to keep any stateful information simple, just to make the tests simple --
> and pure functions are, of course, the easiest.
>
> I need to think about how this applies to my baby-steps project I'm
> working on, if at all.

Interesting questions with FSMs implemented in software...

Which of the many implementation patterns should
you choose?

My preference is anything that avoids deeply nested
if/then/else/switch statements, since they rapidly
become a maintenance nightmare. (I've seen nesting
10 deep!).

Also, design patterns that enable logging of events
and states should be encouraged and left in the code
at runtime. I've found them /excellent/ techniques for
correctly deflecting blame onto the other party :)

Should you design in a proper FSM style/language
and autogenerate the executable source code, or code
directly in the source language? Difficult, but there
are very useful OOP design patterns that make it easy.

And w.r.t. TDD, should your tests demonstrate the
FSM's design is correct or that the implementation
artefacts are correct?

Naive unit tests often end up testing the individual
low-level implementation artefacts, not the design.
Those are useful when refactoring, but otherwise
are not sufficient.
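One concrete shape for the "avoid deep nesting, keep logging in at runtime" advice, sketched in Python with invented state and event names: a table-driven FSM whose transition table is data, so tests can assert on the design rather than on nested branches:

```python
# A table-driven FSM that avoids nested if/else chains and logs every
# (state, event, next_state) transition.  The transition table *is* the
# design, so a test can check it directly; the log supports the
# blame-deflection use case at runtime.

TRANSITIONS = {            # (state, event) -> next state
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
    ("paused",  "stop"):  "idle",
}

class Machine:
    def __init__(self):
        self.state, self.log = "idle", []
    def event(self, ev):
        nxt = TRANSITIONS.get((self.state, ev), self.state)  # unknown: ignore
        self.log.append((self.state, ev, nxt))
        self.state = nxt

m = Machine()
for ev in ["start", "pause", "start", "stop"]:
    m.event(ev)
assert m.state == "idle"
print(m.log)
```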

Article: 160068
Subject: Re: Test Driven Design?
From: rickman <gnuarm@gmail.com>
Date: Thu, 18 May 2017 13:01:53 -0400
Links: << >>  << T >>  << A >>
On 5/18/2017 12:14 PM, Tom Gardner wrote:
> On 18/05/17 15:22, Tim Wescott wrote:
>> On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:
>>
>>> Tim Wescott <tim@seemywebsite.really> wrote:
>>>> So, you have two separate implementations of the system -- how do you
>>>> know that they aren't both identically buggy?
>>>
>>> Is that the problem with any testing framework?
>>> Quis custodiet ipsos custodes?
>>> Who tests the tests?
>>>
>>>> Or is it that one is carefully constructed to be clear and easy to
>>>> understand (and therefore review) while the other is constructed to
>>>> optimize over whatever constraints you want (size, speed, etc.)?
>>>
>>> Essentially that.  You can write a functionally correct but slow
>>> implementation (completely unpipelined, for instance).  You can write an
>>> implementation that relies on things that aren't available in hardware
>>> (a+b*c is easy for the simulator to check, but the hardware
>>> implementation in IEEE floating point is somewhat more complex).  You
>>> can also write high level checks that don't know about implementation
>>> (if I enqueue E times and dequeue D times to this FIFO, the current fill
>>> should always be E-D)
>>>
>>> It helps if they're written by different people - eg we have 3
>>> implementations of the ISA (hardware, emulator, formal model, plus the
>>> spec and the test suite) that are used to shake out ambiguities: specify
>>> first, write tests, three people implement without having seen the
>>> tests, see if they differ.  Fix the problems, write tests to cover the
>>> corner cases. Rinse and repeat.
>>>
>>> Theo
>>
>> It's a bit different on the software side -- there's a lot more of "poke
>> it THIS way, see if it squeaks THAT way".  Possibly the biggest value is
>> that (in software at least, but I suspect in hardware) it encourages you
>> to keep any stateful information simple, just to make the tests simple --
>> and pure functions are, of course, the easiest.
>>
>> I need to think about how this applies to my baby-steps project I'm
>> working on, if at all.
>
> Interesting questions with FSMs implemented in software...
>
> Which of the many implementation patterns should
> you choose?

Personally, I custom design FSM code without worrying about what it 
would be called.  There really are only two issues.  The first is 
whether you can afford a clock delay in the output and how that impacts 
your output assignments.  The second is the complexity of the code 
(maintenance).


> My preference is anything that avoids deeply nested
> if/then/else/switch statements, since they rapidly
> become a maintenance nightmare. (I've seen nesting
> 10 deep!).

Such deep layering likely indicates a poor problem decomposition, but it 
is hard to say without looking at the code.

Normally there is a switch for the state variable and conditionals 
within each case to evaluate inputs.  Typically this is not so complex.


> Also, design patterns that enable logging of events
> and states should be encouraged and left in the code
> at runtime. I've found them /excellent/ techniques for
> correctly deflecting blame onto the other party :)
>
> Should you design in a proper FSM style/language
> and autogenerate the executable source code, or code
> directly in the source language? Difficult, but there
> are very useful OOP design patterns that make it easy.

Designing in anything other than the HDL you are using increases the 
complexity of backing up your tools.  In addition to source code, it can 
be important to be able to restore the development environment.  I don't 
bother with FSM tools other than tools that help me think.


> And w.r.t. TDD, should your tests demonstrate the
> FSM's design is correct or that the implementation
> artefacts are correct?

I'll have to say that is a new term to me, "implementation 
artefacts[sic]".  Can you explain?

I test behavior.  Behavior is what is specified for a design, so why 
would you test anything else?


> Naive unit tests often end up testing the individual
> low-level implementation artefacts, not the design.
> Those are useful when refactoring, but otherwise
> are not sufficient.


-- 

Rick C

Article: 160069
Subject: Re: Test Driven Design?
From: rickman <gnuarm@gmail.com>
Date: Thu, 18 May 2017 13:05:40 -0400
Links: << >>  << T >>  << A >>
On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
> Den torsdag den 18. maj 2017 kl. 15.48.19 UTC+2 skrev Theo Markettos:
>> Tim Wescott <tim@seemywebsite.really> wrote:
>>> So, you have two separate implementations of the system -- how do you
>>> know that they aren't both identically buggy?
>>
>> Is that the problem with any testing framework?
>> Quis custodiet ipsos custodes?
>> Who tests the tests?
>
> the test?
>
> if two different implementations agree, it adds a bit more confidence than an
> implementation agreeing with itself.

The point is if both designs were built with the same misunderstanding 
of the requirements, they could both be wrong.  While not common, this 
is not unheard of.  It could be caused by cultural biases (each company 
is a culture) or a poorly written specification.

-- 

Rick C

Article: 160070
Subject: Re: Test Driven Design?
From: Tim Wescott <seemywebsite@myfooter.really>
Date: Thu, 18 May 2017 12:13:18 -0500
Links: << >>  << T >>  << A >>
On Thu, 18 May 2017 13:05:40 -0400, rickman wrote:

> On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
>> Den torsdag den 18. maj 2017 kl. 15.48.19 UTC+2 skrev Theo Markettos:
>>> Tim Wescott <tim@seemywebsite.really> wrote:
>>>> So, you have two separate implementations of the system -- how do you
>>>> know that they aren't both identically buggy?
>>>
>>> Is that the problem with any testing framework?
>>> Quis custodiet ipsos custodes?
>>> Who tests the tests?
>>
>> the test?
>>
>> if two different implementations agree, it adds a bit more confidence
>> than an implementation agreeing with itself.
> 
> The point is if both designs were built with the same misunderstanding
> of the requirements, they could both be wrong.  While not common, this
> is not unheard of.  It could be caused by cultural biases (each company
> is a culture) or a poorly written specification.

Yup.  Although testing the real, obscure and complicated thing against 
the fake, easy to read and understand thing does sound like a viable 
test, too.

Prolly should both hit the thing with known test vectors written against 
the spec, and do the behavioral vs. actual sim, too.

-- 

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Article: 160071
Subject: Re: Test Driven Design?
From: Jan Coombs <jenfhaomndgfwutc@murmic.plus.com>
Date: Thu, 18 May 2017 19:03:30 +0100
Links: << >>  << T >>  << A >>
On Tue, 16 May 2017 15:21:49 -0500
Tim Wescott <tim@seemywebsite.really> wrote:

> Anyone doing any test driven design for FPGA work?

If you do hardware design with an interpretive language, then
test driven design is essential:

  http://docs.myhdl.org/en/stable/manual/unittest.html

My hobby project is long and slow, but I think this discipline
is slowly improving my productivity. 
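For readers who don't know MyHDL: it is a Python package, so its tests are ordinary unittest cases. A plain-Python flavor of the style in that manual (its running example is a Gray-code converter) looks like this:

```python
# Test-first in the MyHDL spirit, but against a pure-Python behavioral
# model rather than actual MyHDL hardware description: write the test
# cases, then hold the implementation to them.

import unittest

def bin_to_gray(n):
    """Reference model: binary to Gray code."""
    return n ^ (n >> 1)

class TestGray(unittest.TestCase):
    def test_known_values(self):
        for binval, gray in [(0, 0), (1, 1), (2, 3), (3, 2), (4, 6)]:
            self.assertEqual(bin_to_gray(binval), gray)

    def test_adjacent_codes_differ_in_one_bit(self):
        # The defining Gray-code property, checked exhaustively over a byte.
        for n in range(255):
            diff = bin_to_gray(n) ^ bin_to_gray(n + 1)
            self.assertEqual(bin(diff).count("1"), 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestGray)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```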

Jan Coombs
 


Article: 160072
Subject: Re: Test Driven Design?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Thu, 18 May 2017 23:06:30 +0100
Links: << >>  << T >>  << A >>
On 18/05/17 18:01, rickman wrote:
> On 5/18/2017 12:14 PM, Tom Gardner wrote:
>> On 18/05/17 15:22, Tim Wescott wrote:
>>> On Thu, 18 May 2017 14:48:12 +0100, Theo Markettos wrote:
>>>
>>>> Tim Wescott <tim@seemywebsite.really> wrote:
>>>>> So, you have two separate implementations of the system -- how do you
>>>>> know that they aren't both identically buggy?
>>>>
>>>> Is that the problem with any testing framework?
>>>> Quis custodiet ipsos custodes?
>>>> Who tests the tests?
>>>>
>>>>> Or is it that one is carefully constructed to be clear and easy to
>>>>> understand (and therefore review) while the other is constructed to
>>>>> optimize over whatever constraints you want (size, speed, etc.)?
>>>>
>>>> Essentially that.  You can write a functionally correct but slow
>>>> implementation (completely unpipelined, for instance).  You can write an
>>>> implementation that relies on things that aren't available in hardware
>>>> (a+b*c is easy for the simulator to check, but the hardware
>>>> implementation in IEEE floating point is somewhat more complex).  You
>>>> can also write high level checks that don't know about implementation
>>>> (if I enqueue E times and dequeue D times to this FIFO, the current fill
>>>> should always be E-D)
>>>>
>>>> It helps if they're written by different people - eg we have 3
>>>> implementations of the ISA (hardware, emulator, formal model, plus the
>>>> spec and the test suite) that are used to shake out ambiguities: specify
>>>> first, write tests, three people implement without having seen the
>>>> tests, see if they differ.  Fix the problems, write tests to cover the
>>>> corner cases. Rinse and repeat.
>>>>
>>>> Theo
>>>
>>> It's a bit different on the software side -- there's a lot more of "poke
>>> it THIS way, see if it squeaks THAT way".  Possibly the biggest value is
>>> that (in software at least, but I suspect in hardware) it encourages you
>>> to keep any stateful information simple, just to make the tests simple --
>>> and pure functions are, of course, the easiest.
>>>
>>> I need to think about how this applies to my baby-steps project I'm
>>> working on, if at all.
>>
>> Interesting questions with FSMs implemented in software...
>>
>> Which of the many implementation patterns should
>> you choose?
>
> Personally, I custom design FSM code without worrying about what it would be
> called.  There really are only two issues.  The first is whether you can afford
> a clock delay in the output and how that impacts your output assignments.  The
> second is the complexity of the code (maintenance).
>
>
>> My preference is anything that avoids deeply nested
>> if/then/else/switch statements, since they rapidly
>> become a maintenance nightmare. (I've seen nesting
>> 10 deep!).
>
> Such deep layering likely indicates a poor problem decomposition, but it is hard
> to say without looking at the code.

It was a combination of technical and personnel factors.
The overriding business imperative was, at each stage,
to make the smallest and /incrementally/ cheapest modification.

The road to hell is paved with good intentions.


> Normally there is a switch for the state variable and conditionals within each
> case to evaluate inputs.  Typically this is not so complex.

This was an inherently complex task that was ineptly
implemented. I'm not going to define how ineptly,
because you wouldn't believe it. I only believe it
because I saw it, and boggled.


>> Also, design patterns that enable logging of events
>> and states should be encouraged and left in the code
>> at runtime. I've found them /excellent/ techniques for
>> correctly deflecting blame onto the other party :)
>>
>> Should you design in a proper FSM style/language
>> and autogenerate the executable source code, or code
>> directly in the source language? Difficult, but there
>> are very useful OOP design patterns that make it easy.
>
> Designing in anything other than the HDL you are using increases the complexity
> of backing up your tools.  In addition to source code, it can be important to be
> able to restore the development environment.  I don't bother with FSM tools
> other than tools that help me think.

Very true. I use that argument, and more, to caution
people against inventing Domain Specific Languages
when they should be inventing Domain Specific Libraries.

Guess which happened in the case I alluded to above.


>> And w.r.t. TDD, should your tests demonstrate the
>> FSM's design is correct or that the implementation
>> artefacts are correct?
>
> I'll have to say that is a new term to me, "implementation artefacts[sic]".  Can
> you explain?

Nothing non-obvious. An implementation artefact is
something that is part of /a/ specific design implementation,
as opposed to something that is an inherent part of
/the/ problem.


> I test behavior.  Behavior is what is specified for a design, so why would you
> test anything else?

Clearly you haven't practiced XP/Agile/Lean development
practices.

You sound like a 20th century hardware engineer, rather
than a 21st century software "engineer". You must learn
to accept that all new things are, in every way, better
than the old ways.

Excuse me while I go and wash my mouth out with soap.


>> Naive unit tests often end up testing the individual
>> low-level implementation artefacts, not the design.
>> Those are useful when refactoring, but otherwise
>> are not sufficient.


Article: 160073
Subject: Re: Test Driven Design?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Thu, 18 May 2017 23:10:46 +0100
Links: << >>  << T >>  << A >>
On 18/05/17 18:05, rickman wrote:
> On 5/18/2017 12:08 PM, lasselangwadtchristensen@gmail.com wrote:
>> Den torsdag den 18. maj 2017 kl. 15.48.19 UTC+2 skrev Theo Markettos:
>>> Tim Wescott <tim@seemywebsite.really> wrote:
>>>> So, you have two separate implementations of the system -- how do you
>>>> know that they aren't both identically buggy?
>>>
>>> Is that the problem with any testing framework?
>>> Quis custodiet ipsos custodes?
>>> Who tests the tests?
>>
>> the test?
>>
> if two different implementations agree, it adds a bit more confidence than an
> implementation agreeing with itself.
>
> The point is if both designs were built with the same misunderstanding of the
> requirements, they could both be wrong.  While not common, this is not unheard
> of.  It could be caused by cultural biases (each company is a culture) or a
> poorly written specification.

The prior question is whether the specification is correct.

Or more realistically, to what extent it is/isn't correct,
and the best set of techniques and processes for reducing
the imperfection.

And that leads to XP/Agile concepts, to deal with the suboptimal
aspects of Waterfall Development.

Unfortunately the zealots can't accept that what you gain
on the swings you lose on the roundabouts.


Article: 160074
Subject: Re: Test Driven Design?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Thu, 18 May 2017 23:15:32 +0100
Links: << >>  << T >>  << A >>
On 18/05/17 19:03, Jan Coombs wrote:
> On Tue, 16 May 2017 15:21:49 -0500
> Tim Wescott <tim@seemywebsite.really> wrote:
>
>> Anyone doing any test driven design for FPGA work?
>
> If you do hardware design with an interpretive language, then
> test driven design is essential:
>
>   http://docs.myhdl.org/en/stable/manual/unittest.html
>
> My hobby project is long and slow, but I think this discipline
> is slowly improving my productivity.

It doesn't matter in the slightest whether or not the
language is interpreted.

Consider that, for example, C is (usually) compiled to
assembler. That assembler is then interpreted by microcode
(or a more modern equivalent!) into RISC operations, which
are then interpreted by hardware.



