
Simple accumulator (DDS - nothing fancy) and a DAC.

If you want to be simpler, there's a huge number of parameters that determine what's "good" and what isn't. Is the frequency variable across a huge range? Many decades or half an octave? What kind of harmonics are permitted? ...the purer, the tougher. Noise floor? Power supply rejection? Modulation rate? How about a VCO with a voltage-tuned filter? Maybe just a bunch of resistors from a few outputs?

There's just too much unknown about what's considered "good" for your needs.

<burn.sir@gmail.com> wrote in message news:1154546507.601031.265860@p79g2000cwp.googlegroups.com...
> hello everybody,
>
> are there any _simple_ ways of generating a sine waveform, maybe from a
> reference pulse/saw/triangle signal?
>
> (Thanks to Ray :) I know that there are more complicated ways of doing
> this, but for now i am interested in the simplest possible way for
> getting something that looks like a sine.
>
> actually, i am also interested in algorithms for generating any type of
> periodic signals (both low and high on harmonics) that work well in
> FPGAs. Jitter and variations are ok (in fact, they are welcome up to
> some degree).
>
> (for the record, it will be used for sound synthesis)
>
> regards, -Burns

wow, that was one quick answer :)

Nope, sorry. LUTs won't do. Too much logic for linear filtering and the result is still crappy, IMO. There is no D/A, everything is pure digital. The LUT solution would make it "too" pure.

I was thinking of something like an unstable feedback loop that hopefully doesn't require a multiplier/divider to work. I had another idea about simple filtering (e.g. next = current + target/2) of a pulse wave, but can't adjust the frequency for it.

-bruns

Ralf Hildebrandt wrote:
> burn.sir@gmail.com wrote:
>
> > are there any _simple_ ways of generating a sine waveform, maybe from a
> > reference pulse/saw/triangle signal?
>
> What about lookup-tables? After D/A conversion one should get a
> sine-like signal.
>
> Ralf

burn.sir@gmail.com a écrit :
> hello everybody,
>
> are there any _simple_ ways of generating a sine waveform, maybe from a
> reference pulse/saw/triangle signal?

Hi
I once made a _very_ crude sin & cos generator (for short simulations) with only 2 adders:

...
signal c, s : signed(SIZE-1 downto 0);
...
process (clk, rst)
  variable ds, dc : signed(SIZE-1 downto 0);
begin
  if rst = '1' then
    c <= (c'left => '0', others => '1'); -- cos(0) = 1 (largest positive value)
    s <= (others => '0');                -- sin(0) = 0
  elsif rising_edge(clk) then
    ds := c/8;
    dc := -(s/8);
    c <= c + dc;
    s <= s + ds;
  end if;
end process;

Hope this helps.

Nicolas
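Nicolas's recursion is easy to model outside VHDL. The following Python sketch (the float arithmetic and the names are mine, not from the post) implements the same update. It is forward-Euler integration of a harmonic oscillator, which is why the output is a recognizable sine whose amplitude slowly grows: fine for short simulations, as he says.

```python
# Behavioral model (a sketch, not the original VHDL) of the two-adder
# sin/cos generator: forward-Euler integration of a harmonic oscillator.
# The 1/8 step is the "frequency" constant; smaller steps give a slower
# output, but the amplitude slowly drifts upward because forward Euler
# is not exactly energy-preserving.
def two_adder_osc(steps, shift=3):
    c, s = 1.0, 0.0            # cos(0) = 1, sin(0) = 0
    out = []
    for _ in range(steps):
        ds = c / (1 << shift)  # ds := c/8
        dc = -s / (1 << shift) # dc := -(s/8)
        c += dc
        s += ds
        out.append(s)
    return out

wave = two_adder_osc(200)      # roughly four periods at shift=3
```

In hardware the two divisions by 8 are just arithmetic right shifts, which is why the whole thing costs only two adders.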

It's for sound synthesis, not radar. And it's for a hobby project, not job related. Hence, I am looking for the quick and dirty solution for now. If this was serious stuff, I would use DSP blocks or CORDIC algorithms :)

Frequency range is 40 Hz to 96 kHz, resolution is 12 bits for now (24-32 internal). The sampling frequency is low enough to serialize multipliers for this, but I wanted to check if there were any other solutions first.

No power supply problems, the signal is digital all the way (no DAC). Neither is noise a problem, as long as it is random (that's why I don't like lookup tables).

I have a pulse and a sawtooth running at the same frequency. Is there any way I can use them for this? I can multiply their frequency by a power of two by simple bit-shifting...

bruns

John_H wrote:
> Simple accumulator (DDS - nothing fancy) and a DAC.
>
> If you want to be simpler, there's a huge number of parameters that
> determine what's "good" and what isn't. Is the frequency variable across a
> huge range? Many decades or half an octave? What kind of harmonics are
> permitted? ...the purer, the tougher. Noise floor? Power supply
> rejection? Modulation rate? How about a VCO with a voltage-tuned filter?
> Maybe just a bunch of resistors from a few outputs?
>
> There's just too much unknown about what's considered "good" for your
> needs.
>
> <burn.sir@gmail.com> wrote in message
> news:1154546507.601031.265860@p79g2000cwp.googlegroups.com...
> > hello everybody,
> >
> > are there any _simple_ ways of generating a sine waveform, maybe from a
> > reference pulse/saw/triangle signal?
> >
> > (Thanks to Ray :) I know that there are more complicated ways of doing
> > this, but for now i am interested in the simplest possible way for
> > getting something that looks like a sine.
> >
> > actually, i am also interested in algorithms for generating any type of
> > periodic signals (both low and high on harmonics) that work well in
> > FPGAs. Jitter and variations are ok (in fact, they are welcome up to
> > some degree).
> >
> > (for the record, it will be used for sound synthesis)
> >
> > regards, -Burns

On 2006-08-02, burn.sir@gmail.com <burn.sir@gmail.com> wrote:
>
> are there any _simple_ ways of generating a sine waveform, maybe from a
> reference pulse/saw/triangle signal?

There are schemes for choosing ladder-style DAC resistor values such that an output ramp generates a sine wave. A quick Google search turned up a similar idea with an illustration at:

http://www.edn.com/contents/images/6321527f1.pdf

I seem to recall a recent thread on one of the electronics newsgroups where someone had written a program to select optimal resistor values for this sort of thing.

--
Ben Jackson AD7GD <ben@ben.com> http://www.ben.com/

> are there any _simple_ ways of generating a sine waveform, maybe from a
> reference pulse/saw/triangle signal?

Use a Numerically Controlled Oscillator (NCO). All it really is is an 'n'-bit accumulator that accumulates phase. The range of the accumulator represents one complete cycle of the waveform you are generating, i.e. 0-360 degrees, or 0 to 2*pi. Then use the 'm' most significant bits of the phase (accumulator) to address a LUT or BRAM lookup table containing the waveform shape.

The phase accumulator clock frequency sets the output jitter: the higher the clock, the less jitter. If you want a constant frequency, accumulate a constant phase each cycle. This constant sets your output frequency, and 'n' controls the frequency resolution. I guess 'm' controls the "accuracy" of your waveform shape.

If you want a square wave you don't even need the waveform lookup table, just use the most significant bit.
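The description above maps directly to a few lines of code. Here is a behavioral Python sketch (the constants and names are mine; the post does not fix 'n' or 'm'): an n-bit accumulator wraps once per output cycle, and its top m bits index a precomputed sine table.

```python
import math

# Sketch of the NCO / phase-accumulator idea (parameters are
# illustrative, not from the post).
N_BITS = 24            # 'n': phase resolution
M_BITS = 8             # 'm': table address width
TABLE = [int(2047 * math.sin(2 * math.pi * i / (1 << M_BITS)))
         for i in range(1 << M_BITS)]   # 12-bit signed sine LUT

def nco(phase_inc, samples):
    acc, out = 0, []
    for _ in range(samples):
        acc = (acc + phase_inc) & ((1 << N_BITS) - 1)  # n-bit wraparound
        out.append(TABLE[acc >> (N_BITS - M_BITS)])    # top m bits -> LUT
    return out

# phase_inc = f_out / f_clk * 2**n; e.g. one full cycle every 256 samples:
wave = nco((1 << N_BITS) // 256, 256)
```

The frequency resolution is f_clk / 2**n, which is why even a modest accumulator width gives very fine tuning steps.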

That's a nice one. It works really well with floating point:

http://www.musicdsp.org/archive.php?classid=1#10

The problem is the frequency parameter calculation (your "magic" 8 here). Maybe I can use a clock-enable to vary frequency and do some sort of interpolation on top of that?

Nicolas Matringe wrote:
> burn.sir@gmail.com a écrit :
> > hello everybody,
> >
> > are there any _simple_ ways of generating a sine waveform, maybe from a
> > reference pulse/saw/triangle signal?
>
> Hi
> I once made a _very_ crude sin & cos generator (for short simulations)
> with only 2 adders :
>
> ...
> signal c, s : signed(SIZE-1 downto 0);
> ...
> process (clk, rst)
>   variable ds, dc : signed(SIZE-1 downto 0);
> begin
>   if rst = '1' then
>     c <= (c'left => '0', others => '1'); -- cos(0) = 1
>     s <= (others => '0');                -- sin(0) = 0
>   elsif rising_edge(clk) then
>     ds := c/8;
>     dc := -(s/8);
>     c <= c + dc;
>     s <= s + ds;
>   end if;
> end process;
>
> Hope this helps.
>
> Nicolas

Nitesh wrote:
> I have an IP core made up of Verilog files as well as VHDL files. How
> do I set my MPD file for the bitstream generation in XPS?
> I tried putting
> OPTION HDL = MIXED
> OPTION STLYE = HDL
> but it doesn't work.
> Anyone done this before?
> Thanks,
> Nitesh

The MPD file can be generated by XPS. You just need to tell XPS which HDL files you need when you import the peripheral...

Wayne

burn.sir@gmail.com wrote:
> hello everybody,
> are there any _simple_ ways of generating a sine waveform, maybe from a
> reference pulse/saw/triangle signal?
> (Thanks to Ray :) I know that there are more complicated ways of doing
> this, but for now i am interested in the simplest possible way for
> getting something that looks like a sine.
> actually, i am also interested on algorithms for generating any type of
> periodic signals (both low and high on harmonics) that works well in
> FPGAs. Jitter and variations are ok (in fact, they are welcome up to
> some degree).
> (for the record, it will be used for sound synthesis)

Look for the Xilinx DDS core. If I remember right, you could use it even with WebPack.

--
Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Hi Weng,

This class is about synchronous logic design using FPGAs. All of my examples are completely synchronous; hopefully the diagrams on pages 7, 8, 29, and 30 convey this information, not in words, but by the common clock that clocks all flip-flops in the circuit. While there are such things as asynchronous pipelines, they are not something I would want to discuss with undergraduate students who aren't sure about the difference between a flip-flop and a latch!

For your first question: 22 ns is correct. A synchronous pipeline cannot run with a cycle time that is lower than the cycle time you compute for any particular segment. It may have been Henry Ford who first realized this. As you point out, the first stage (in isolation) can operate with a cycle time of 2+9=11 ns. The second stage (in isolation) can operate with a cycle time of 5+3=8 ns. However, when joined to form a synchronous pipeline, it is the stage with the 11 ns delay that sets the minimum cycle time for the complete pipeline. Two cycles of 11 ns yields 22 ns.

I do not understand your comment regarding page 37. Can you clarify what you meant?

Thanks, Eric

"Weng Tianxiang" <wtxwtx@gmail.com> wrote in message news:1154480573.648987.69380@p79g2000cwp.googlegroups.com...
> Hi Eric,
> I have difficulty understanding your slide p. 35.
>
> It relates to calculation of latency in ns:
> 2 - 9; - 1
> | | |
> 6; 5 3;
>
> ',' is a register. Number is number of delays.
>
> First pipeline delay should be 2+9; The 2nd pipeline delay should be
> 5+3.
>
> Total delay should be 19. But your slide says it is 22 ns.
>
> Page 37 is totally wrong using pipeline registers. Because they are not
> synchronous at all. There is a short cut from the entry to the end
> point.
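The timing arithmetic Eric walks through can be stated compactly. A sketch (using the stage delays from the quoted slide; variable names are mine): every stage of a synchronous pipeline runs at the cycle time of the slowest stage.

```python
# Worked version of the timing arithmetic above: stage 1 needs
# 2 + 9 = 11 ns, stage 2 needs 5 + 3 = 8 ns, so the slowest stage
# sets the clock and two cycles give the total latency.
stage_delays = [2 + 9, 5 + 3]        # ns per stage, from the slide
cycle = max(stage_delays)            # minimum cycle time: 11 ns
latency = cycle * len(stage_delays)  # two cycles of 11 ns = 22 ns
```

This is why summing the raw stage delays (19 ns) underestimates the latency: the faster stage still has to wait out the full 11 ns cycle.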

So you want a full-scale, 12-bit, digital-output, unpure sinewave with 72 dB harmonic rejection? The frequency range you specified is the analog sample rate, correct? What analog frequency range do you want to produce? Do you want to modulate it at a high rate? What's your system sample rate?

I think I saw IIRs used for sin/cos generation in an oscillator-like configuration in a textbook 20 years ago, but outside of that there aren't any magic bullets. Perhaps a LUT with super-pure output can be dirtied by adding pseudorandom dither? Perhaps a tiny LUT with simple interpolation? The main clock rate to audio sample rate should give you ample calculation time. Heck - you could even do a Taylor series expansion and do it all with multiplies using a DDS accumulator as the phase source.

<burn.sir@gmail.com> wrote in message news:1154549526.832200.125740@i3g2000cwc.googlegroups.com...
> Its for sound synthesis, not radar. And its for a hobby project, not
> job related. Hence, I am looking for the quick and dirty solution for
> now. If this was serious stuff, I would use DSP blocks or cordic
> algorithms :)
>
> Frequency range is 40 to 96K Hz, resolution is 12 bits for now (24-32
> internal). The sampling frequency is low enough to serialize
> multipliers for this, but i wanted to check if there were any other
> solutions first.
>
> No power supply problems, the signal is digital all the way (no DAC).
> Neither is noise a problem, as long as it is random (thats why i don't
> like lookup tables).
>
> I have a pulse and a sawtooth running at the same frequency. Is there
> any way i can use them for this? i can multiply their frequency by a
> power of two by simple bit-shifting...
>
> bruns
>
> John_H wrote:
>> Simple accumulator (DDS - nothing fancy) and a DAC.
>>
>> If you want to be simpler, there's a huge number of parameters that
>> determine what's "good" and what isn't. Is the frequency variable across a
>> huge range? Many decades or half an octave? What kind of harmonics are
>> permitted? ...the purer, the tougher. Noise floor? Power supply
>> rejection? Modulation rate? How about a VCO with a voltage-tuned filter?
>> Maybe just a bunch of resistors from a few outputs?
>>
>> There's just too much unknown about what's considered "good" for your
>> needs.
>>
>> <burn.sir@gmail.com> wrote in message
>> news:1154546507.601031.265860@p79g2000cwp.googlegroups.com...
>> > hello everybody,
>> >
>> > are there any _simple_ ways of generating a sine waveform, maybe from a
>> > reference pulse/saw/triangle signal?
>> >
>> > (Thanks to Ray :) I know that there are more complicated ways of doing
>> > this, but for now i am interested in the simplest possible way for
>> > getting something that looks like a sine.
>> >
>> > actually, i am also interested in algorithms for generating any type of
>> > periodic signals (both low and high on harmonics) that work well in
>> > FPGAs. Jitter and variations are ok (in fact, they are welcome up to
>> > some degree).
>> >
>> > (for the record, it will be used for sound synthesis)
>> >
>> > regards, -Burns

burn.sir@gmail.com wrote:
> wow, that was one quick answer :)
>
> Nope, sorry. LUTs won't do. Too much logic for linear filtering and the
> result is still crappy, IMO. There is no D/A, everything is pure
> digital. The LUT solution would make it "too" pure.
>
> I was thinking of something like an unstable feedback loop that
> hopefully doesn't require a multiplier/divider to work.

That sounds a bit like a Goertzel filter (IIR), which does require a multiplier to work: http://www.dattalo.com/technical/theory/sinewave.html

A CORDIC circuit could probably do the job, though it requires approximately as many cycles per output as bits of precision you need. No multiplier, though.

Cheers, Andy.
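For reference, the CORDIC approach Andy mentions can be sketched in a few lines of Python (the iteration count and all names here are mine; a hardware version would replace the floating-point multiplies by 2**-i with fixed-point right shifts): rotate a pre-scaled unit vector toward the target angle using only shift-and-add steps.

```python
import math

# CORDIC rotation-mode sketch: after N iterations, x ~ cos(angle) and
# y ~ sin(angle), valid for |angle| up to ~1.74 rad (sum of the table).
N = 16
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for a in ANGLES:
    K *= math.cos(a)          # pre-computed CORDIC gain correction

def cordic_sin_cos(angle):
    x, y, z = K, 0.0, angle   # start from a pre-scaled unit vector
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return x, y               # (cos(angle), sin(angle))
```

Each iteration refines the result by about one bit, which is exactly the "as many cycles per output as bits of precision" cost mentioned above.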

John_H wrote:
> So you want a full-scale, 12-bit, digital-output, unpure sinewave with 72 dB
> harmonic rejection? The frequency range you specified is the analog sample
> rate, correct? What analog frequency range do you want to produce? Do you
> want to modulate it at a high rate? What's your system sample rate?
>
> I think I saw IIRs used for sin/cos generation in an oscillator-like
> configuration in a textbook 20 years ago, but outside of that there aren't

Yes, look at the other posts in this thread.

> any magic bullets. Perhaps a LUT with super-pure output can be dirtied by
> adding pseudorandom dither? Perhaps a tiny LUT with simple interpolation?

Been there, done that. For psychoacoustic reasons, it doesn't sound "right". The missing higher harmonics are much larger than you can bury in noise. And you will have other problems at the highest and the lowest pitches.

Of course, it depends on how large your LUT is. But I really didn't want to do wavetables in the first place.

> The main clock rate to audio sample rate should give you ample calculation
> time. Heck - you could even do a Taylor series expansion and do it all with
> multiplies using a DDS accumulator as the phase source.

Yes, ~300 clocks per sample is a lot of time...

regards, -Burns

"The missing higher harmonics are much larger than you can [bury] in noise."

What in the heck are you asking for here? Dirty sinewaves have harmonics. You suggest you want 72 dB (i.e. full) suppression of harmonics. Then the missing harmonics are too high? What lower amplitude, in dB full scale, do you need your missing harmonics to be so they aren't too high?

I'd love to help, but it sounds like you don't have a clue what you really want. If you do, I certainly haven't gotten a clue what your true need is.

<burn.sir@gmail.com> wrote in message news:1154555216.655925.310220@m73g2000cwd.googlegroups.com...
> John_H wrote:
>> So you want a full scale, 12-bit, digital output, unpure sinewave with 72 dB
>> harmonic rejection? The frequency range you specified is for analog sample
>> rate, correct? What analog frequency range do you want to produce? Do you
>> want to modulate it at a high rate? What's your system sample rate?
>>
>> I [think] I saw IIRs used for sin/cos generation in an oscillator-like
>> configuration in a textbook 20 years ago, but outside of that there aren't
>
> yes, look at the other posts in this thread.
>
>> any magic bullets. Perhaps a LUT with super-pure output can be dirtied by
>> adding pseudorandom dither? Perhaps a tiny LUT with simple interpolation?
>
> Been there, done that. For psychoacoustic reasons, it doesnt sound
> "right". The missing higher harmonics are much larger than you can
> burry in noise. And you will have other problems at the highest and the
> lowest pitches.
>
> Of course, it depends on how large your LUT is. But I really didnt want
> to do wavetables in the first place.
>
>> The main clock rate to audio sample rate should give you ample calculation
>> time. Heck - you could even do a taylor series expansion and do it all with
>> multiplies using a DDS accumulator as the phase source.
>
> Yes, ~300 clocks per sample is a lot of time...
>
> regards, -Burns

Hi Crabill,

Thank you for your response. Your explanation of slide 35 is right. Now I know how you calculate the slide 37 timing.

But in real designs, moving flip-flops forward or backward the way your slides show is very dangerous; one must be very careful and clearly check the validity of all signals. There must be some criteria over there to govern the movements. I really would like to know the rules.

In my design, whether signals are valid or not is based on every clock. One clock of delay on one signal may cause big trouble, and it usually takes me a lot of time to check; sometimes it requires modifying the related logic equations and even state machines.

Thank you.

Weng

Hi Thomas,

When the Huffman frequency table is available, the decoding procedure is straightforward.

I know many electronic books use the Huffman encoding method to compress the book's full text.

Usually in the first pass over the full text, the Huffman frequency table is established; in the 2nd pass, a hash function is used to locate a new word entry, which is then encoded, based on the Huffman frequency table, and output.

I want to know how the hash function is used in the 2nd pass. Any books or tips?

Thank you.

Weng

Thomas Entner wrote:
> > Fax application cannot be counted as full Huffman encoding. The reason
> > is the statistics for characters are not dynamically collected.
>
> Also for JPEG, etc. in most cases the predefined default Huffman-tables are
> used (although the file-format would support custom tables). I think the
> term "Huffman" is also widely used for such "static" applications.
>
> Regards,
>
> Thomas
>
> P.S.: I am wondering what the reason for your question is?
>
> www.entner-electronics.com

Hello All,

I'm on a project which has ballooned in scope such that it calls for implementing the MicroBlaze (originally it called for a straightforward PicoBlaze, but that's a long story...). I have successfully created custom peripherals and have implemented the code to control them from my device MMI. Thus, I'm mildly savvy wrt Xilinx EDK.

However, I cannot get my SPI (EEPROM) peripheral to play nice-nice with the rest of the project. The transfer will begin successfully via XSpi_Transfer(...) but SS (CS) never de-asserts. The only workaround I have come up with is XSpi_Reset(...), but that is pretty ugly and will more than likely present problems further down the road.

I have a suspicion that I am not initializing the SPI status handler or the interrupt controller correctly. I'd be the first to admit that I haven't dealt with interrupts since a really crappy lab on them in undergrad... I apologize in advance if I haven't furnished enough information for you to help me. I'm more of a squishy analog hardware type. Any advice or applicable application notes would be greatly appreciated.

Thanks, Matt

Hi Weng,

> ... in real designs, moving flip-flops forward or backward the way
> your slides show is very dangerous; one must be very careful and
> clearly check all signals' validity. There must be some criteria over
> there to govern the movements. I really would like to know the rules.

This is an excellent question; the answer is not in my handout, but I do address it during the lecture. While pipelining a circuit, you cannot draw arbitrary lines and call them a pipeline stage. For retiming a circuit, you cannot randomly move registers around. The following rules are what I discuss in class. I believe they are sufficient to avoid an incorrect circuit, but I cannot claim they are by any means "optimal". In fact, they are very simple and do not address how you might modify the circuit to correct for pipeline hazards.

For a proposed pipeline stage:

1. The stage must completely cut the circuit. I tell students to visualize the circuit on a piece of paper, and visualize using scissors to cut along the proposed boundary. The circuit must be cut into separate pieces.

2. I tell students to label one side of the proposed boundary "A" and the other side "B". I then tell students to evaluate each signal that crosses the boundary to determine the flow of information. All information must flow through the boundary in the same direction. A boundary represents flip-flops that capture data from one stage and provide it to the next.

I pose retiming as an extension of pipelining: a perfectly pipelined circuit does not benefit from retiming. Retiming is going back to a (poorly or haphazardly) pipelined design and trying to redraw the pipeline boundaries to gain some advantage by moving delay out of the critical path. I ask students to assume all flip-flops in the design to be retimed are part of a pipeline stage. They don't know how many stages, or which flip-flops are associated with a given stage, because this information is not available. However, at any flip-flop, it's clear which direction the information is moving.

To validate a potential retiming move, I tell them to visualize reaching "through" the flip-flop (which is a pipeline boundary) to grab logic on the other side (it doesn't matter on which side we started) and then physically pull the logic through the flip-flop. Along with the logic will come the signals feeding that logic. Those signals now pass through the pipeline stage; the original flip-flop is replaced by new flip-flop(s) where the signals have been pulled through the boundary. The test for the validity of this move is to revisit check (2) for pipelining. I know which direction the information was moving originally. For the retiming move to be valid, all the information crossing the boundary must move in the same direction as it did originally.

It is much easier to illustrate on the chalkboard using my hands and making slurping sound effects, which is why I did not include an essay like this in the lecture notes.

Eric

Weng Tianxiang wrote:
> Hi Thomas,
> When the Huffman frequency table is available, the decoding procedure is
> straightforward.
>
> I know many electronic books use the Huffman encoding method to compress
> the book's full text.
>
> Usually in the first pass over the full text, the Huffman frequency table is
> established; in the 2nd pass, a hash function is used to locate a new
> word entry, then encode, based on the Huffman frequency table, and
> output it.
>
> I want to know how the hash function is used in the 2nd pass.

I think you are asking: how do I generate the Huffman codes given the frequency of each symbol? If so, look at:

http://en.wikipedia.org/wiki/Huffman_coding

in the section "basic technique".

Cheers, Andy.
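The "basic technique" on that page is short enough to sketch here. A Python version (the toy frequencies reuse the thread's "Andy"/"Ray" example; the other symbols and all names are mine): build the Huffman tree bottom-up from symbol frequencies with a min-heap, then read the codes off the tree.

```python
import heapq

# Huffman code construction sketch. Heap entries are
# (frequency, tiebreak, tree), where tree is either a symbol or a
# (left, right) pair; the tiebreak keeps the merge order deterministic.
def huffman_codes(freqs):
    heap = [(f, i, sym) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):       # internal node: branch 0 / 1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"   # leaf: emit the code
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"Andy": 15, "Ray": 20, "wrote": 40, "hi": 5})
```

By construction the result is prefix-free and more frequent symbols never get longer codes than rarer ones.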

Andy,

No. I want to know the hash function.

For example, I have a Huffman encoding table as follows:

name  Freq  Code
...
Andy  15    "011"
Ray   20    "110"
...

Now a text string "Andy Ray wrote" is incoming, and I want to know how to match the incoming "Andy" with the table entry "Andy". One must establish a 1-to-1 relationship between the incoming word and the table entry.

What I can imagine doing in software is to create a hash function; then, if more than one word maps to the same entry, a further string match must be made.

I am wondering whether or not a hardware design can do it.

Thank you.

Weng

Andy Ray wrote:
> Weng Tianxiang wrote:
> > Hi Thomas,
> > When Huffman frequency table is available, the decoding procedure is
> > stright forward.
> >
> > I know many electronic books use Huffman encoding method to compress
> > the book full context.
> >
> > Usually in the first pass of full context, Huffman frequency table is
> > established, in the 2nd pass, a Hash function is used to locate a new
> > word entry, then encode, based on the Huffman frequency table, and
> > output it.
> >
> > I want to know how Hash function is used in the 2nd pass.
>
> I think you are asking: how do I generate the huffman codes given the
> frequency of each symbol. If so look at:
>
> http://en.wikipedia.org/wiki/Huffman_coding
>
> in the section "basic technique".
>
> Cheers,
>
> Andy.

burn.sir@gmail.com wrote:
> Its for sound synthesis, not radar. And its for a hobby project, not
> job related. Hence, I am looking for the quick and dirty solution for
> now. If this was serious stuff, I would use DSP blocks or cordic
> algorithms
>
> Frequency range is 40 to 96K Hz, resolution is 12 bits for now (24-32
> internal). The sampling frequency is low enough to serialize
> multipliers for this, but i wanted to check if there were any other
> solutions first.
>
> No power supply problems, the signal is digital all the way (no DAC).
> Neither is noise a problem, as long as it is random (thats why i don't
> like lookup tables).

Seems a very sweeping statement - lookup tables have many forms, and so long as they can get under your noise floor, and with a spectrum outside the band of interest, they should be valid. It might be better to analyse how large a LUT needs to be, for a given TT quantize in time and VV quantize in voltage, to achieve the desired noise floor.

> I have a pulse and a sawtooth running at the same frequency. Is there
> any way i can use them for this? i can multiply their frequency by a
> power of two by simple bit-shifting...

We've used triangle waves (up/down counters) to generate sine, as the triangle is the closest to sine, so it gives smaller tables. It's a while ago, but I also did some checks on the ideal time quantize for lowest distortion: i.e. you choose the dT level not by binary or angle values, but by what gives the lowest distortion through a table. From memory, that was close to matching the slopes of triangle and sine, so that the LUT errors in the straightest portions of the sine were minimized.

A table in an FPGA could easily fetch multiple items, still with very simple support logic. A voltage LUT is the primary content, but you could have a LUT for dT, to control the width of each dV step (thus saving dV precision), and even a third table for INC rate, if you want to move from step tables to slope tables.

-jg
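The triangle-to-sine table idea above can be sketched with quarter-wave folding (a common LUT-shrinking trick; the table size and names here are mine, not from the post): because a sine has four-fold symmetry, the table only needs to store one quarter of the cycle, and the triangle's rising/falling quadrants select how to index and sign it.

```python
import math

# Quarter-wave LUT sketch: store sin over [0, pi/2], then fold the
# four quadrants of the cycle onto it.
QUARTER = 64
LUT = [int(2047 * math.sin(0.5 * math.pi * i / (QUARTER - 1)))
       for i in range(QUARTER)]          # 12-bit quarter-sine table

def tri_to_sine(phase):
    """phase: 0 .. 4*QUARTER-1, one full cycle."""
    quadrant, idx = divmod(phase, QUARTER)
    if quadrant in (1, 3):
        idx = QUARTER - 1 - idx           # falling halves read backward
    val = LUT[idx]
    return val if quadrant < 2 else -val  # negative half cycle

wave = [tri_to_sine(p) for p in range(4 * QUARTER)]
```

In hardware the quadrant logic is just the top two phase bits: one bit reverses the table address, the other negates the output.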

Weng Tianxiang wrote:
> Andy,
> No. I want to know the hash function.
>
> For example, I have a Huffman encoding table as follows:
>
> name  Freq  Code
> ...
> Andy  15    "011"
> Ray   20    "110"
> ...
>
> Now a text string "Andy Ray wrote" is incoming, I want to know how to
> match the incoming "Andy" and the table entry "Andy".
>
> One must establish a 1-to-1 relationship between the incoming word and
> the table entry.
>
> What I can imagine to do in software is to create a hash function, then
> if there are more entries responding to one entry, further string match
> must be made.

A trie might be an option. I once had a conversation with someone who worked at a company that needed to match strings quickly in hardware, and I believe they might've used a trie for that, although I could be remembering wrong since it was a few years ago.

If you haven't encountered tries, think of them as deterministic finite-state automata shaped like a tree, where the root is the start state, and transitions (edges) from one node to the next correspond to symbols on your input.

For example, if you want to recognize the strings "car", "cat", and "cup", your trie would look like this:

(start)--->( )-+--->( )----->(end)
         c     |  u      p
               |
               +--->( )-+--->(end)
                  a     |  r
                        |
                        +--->(end)
                           t

Basically, you start at the start state, absorb a token and follow the edge associated with it, and repeat until you either don't have a matching edge or hit an end state.

The nice thing about a trie is that you don't have to worry about collisions or worst-case behavior. The time to match is a linear function of the length of the string you're matching.

A straightforward trie where every input symbol is a character (hence, probably 8 bits or maybe even larger) might be a little wasteful of space. But there are ways around that. The most obvious one is to say that the input is a string of bits and each bit is an input symbol. This makes your tree 8 times as deep as with 8-bit characters as input symbols, but it has the advantage that each node can have no more than 2 children (as compared to 256). A nice middle ground might be to make your input symbols pairs of bits or sequences of 4 bits.

- Logan
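The trie Logan describes is a few lines in software. A Python sketch (code and names are mine; a hardware version would encode the nodes in a RAM): a nested-dict trie with a sentinel key marking an end state.

```python
# Trie sketch: each node is a dict from input symbol to child node;
# the END sentinel marks "this node is an accepting/end state".
END = object()

def trie_insert(root, word):
    node = root
    for ch in word:
        node = node.setdefault(ch, {})  # follow or create the edge
    node[END] = True                    # mark end of a stored word

def trie_match(root, word):
    node = root
    for ch in word:
        if ch not in node:
            return False                # no edge for this symbol
        node = node[ch]
    return END in node                  # must land on an end state

trie = {}
for w in ("car", "cat", "cup"):
    trie_insert(trie, w)
```

As the post says, matching is linear in the length of the input string, with no hash collisions to resolve.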

Hi Logan,

Your method is interesting, but it is far from what I am looking for. It is too slow. My target is 64-bit inputs on every clock, and one must find the entry within 1-2 clocks and resolve entry conflicts, if any, at the same time.

This design, if it succeeds, can be widely used for any dynamic Huffman encoding. I think there is no such hardware design so far in the world, which is why I found that almost all Huffman encoding applications are static, as in FAX, electronic books, ZIP files, JPEG, MPEG and so on.

Thank you.

Weng

Logan Shaw wrote:
> Weng Tianxiang wrote:
> > Andy,
> > No. I want to know the hash function.
> >
> > For example, I have a Huffman encoding table as follows:
> >
> > name  Freq  Code
> > ...
> > Andy  15    "011"
> > Ray   20    "110"
> > ...
> >
> > Now a text string "Andy Ray wrote" is incoming, I want to know how to
> > match the incoming "Andy" and the table entry "Andy".
> >
> > One must establish a 1-to-1 relationship between the incoming word and
> > the table entry.
> >
> > What I can imagine to do in software is to create a hash function, then
> > if there are more entries responding to one entry, further string match
> > must be made.
>
> A trie might be an option. I once had a conversation with someone who
> worked at a company that needed to match strings quickly in hardware
> and I believe they might've used a trie for that, although I could be
> remembering wrong since it was a few years ago.
>
> If you haven't encountered tries, think of them as deterministic
> finite-state automata shaped like a tree, where the root is the
> start state, and transitions (edges) from one node to the next
> correspond to symbols on your input.
>
> For example, if you want to recognize the strings "car", "cat",
> and "cup", your trie would look like this:
>
> (start)--->( )-+--->( )----->(end)
>          c     |  u      p
>                |
>                +--->( )-+--->(end)
>                   a     |  r
>                         |
>                         +--->(end)
>                            t
>
> Basically, you start at the start state, absorb a token and follow
> the edge associated with it, and repeat until you either don't have
> a matching edge or hit an end state.
>
> The nice thing about a trie is that you don't have to worry about
> collisions or worst-case behavior. The time to match is a linear
> function of the length of the string you're matching.
>
> A straightforward trie where every input symbol is a character (hence,
> probably 8 bits or maybe even larger) might be a little wasteful of
> space. But there are ways around that. The most obvious one is to
> say that the input is a string of bits and each bit is an input
> symbol. This makes your tree 8 times as deep as with 8-bit characters
> as input symbols, but it has the advantage that each node can have
> no more than 2 children (as compared to 256). A nice middle ground
> might be to make your input symbols pairs of bits or sequences of
> 4 bits.
>
> - Logan

Weng Tianxiang wrote:
> Andy,
> No. I want to know the hash function.
>
> For example, I have a Huffman encoding table as follows:
>
> name  Freq  Code
> ...
> Andy  15    "011"
> Ray   20    "110"
> ...
>
> Now a text string "Andy Ray wrote" is incoming, I want to know how to
> match the incoming "Andy" and the table entry "Andy".
>
> One must establish a 1-to-1 relationship between the incoming word and
> the table entry.
>
> What I can imagine to do in software is to create a hash function, then
> if there are more entries responding to one entry, further string match
> must be made.
>
> I am wondering about whether or not a hardware design can do it.

Ok - you're asking about decoding, not encoding. I don't think there's a hash function for Huffman codes, though I'd love to be proven wrong.

You can, however, decode Huffman codes with a binary search tree encoded into a ROM/RAM. This works because no code is a prefix of another. You can alter the tree to take more than one input bit at a time at the expense of hardware. A fully parallel hardware solution needs a CAM (with don't-cares), but that's really expensive to implement.

Cheers, Andy.
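The tree-walk decoder Andy describes can be sketched in Python (the two toy codes reuse the "Andy"/"Ray" example from earlier in the thread; the code itself and its names are mine): because no code is a prefix of another, you can walk one bit at a time from the root and emit a symbol whenever you reach a leaf. In hardware, that walk would be a small state machine stepping through a ROM/RAM.

```python
# Bit-serial Huffman decoder sketch: the code tree is a nested dict
# keyed by '0'/'1'; leaves hold the decoded symbol.
CODES = {"Andy": "011", "Ray": "110"}

def build_tree(codes):
    tree = {}
    for sym, bits in codes.items():
        node = tree
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = sym              # leaf holds the symbol
    return tree

def decode(bits, tree):
    out, node = [], tree
    for b in bits:
        node = node[b]                    # follow one bit per step
        if not isinstance(node, dict):    # reached a leaf
            out.append(node)
            node = tree                   # restart at the root
    return out
```

Taking k bits per step instead of one is exactly the "more than one input bit at a time" trade-off mentioned above: a wider lookup per cycle in exchange for a bigger table.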
