Discussion:
[Freetel-codec2] voice banking + codec2
Imran
2016-04-19 19:24:56 UTC
Permalink
Hi All,

I'm doing a hobby project to try building a voice banking system for
people who are losing their voices due to diseases like ALS or
age-related voice loss. Existing voice banks require extensive voice
samples for a person, and sadly, many people do not have the ability to
record enough samples to build a voice bank the traditional way. This
is especially common in the elderly, who lose their voices gradually,
then suddenly. Traditional voice banking requires a month of reading
text with the hope of finding every phonetic transition along with every
intonation of every phone. So, while there are only 44 English phones,
voice banking needs thousands of variations of each phone, plus the
transitions between them.

My hope is to be able to synthesize a "perfect" version of a person's
voice with very few parameters for text to speech systems. Enough that
I might be able to figure out the parameters from a short audio clip a
person may have stored in the past. For example, I might take a base
phone, like "aaaa", and modify it to match a person's voice by altering
very few parameters.

So, I did some research into sound and synthesis -- and I discovered
codec2 by accident. It seems to meet the needs of being small enough to
understand, and perhaps generate frames from. I'm a C# programmer, so
I'm using a .NET port done by Mikhail Nasyrov from the 2010 codec2
codebase, and built a "sound munger" to try and understand the synthesis
process better.

The munger takes pre-recorded frames of me saying "aaaaaa", and randomly
distorts them in some way. For example, one distortion that worked
surprisingly well was bitshifting the first 8 bits of the 36-bit LSPs
field. This changed the voice to sound higher, without introducing
distortion. Single bit changes to the other LSPs rarely did anything
other than insert noise.
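The kind of munger described above can be sketched in a few lines. This is a hypothetical illustration, not the actual C# tool: it assumes each 51-bit frame has been packed into a single integer, and says nothing about which bits belong to which field.

```python
import random

FRAME_BITS = 51  # one frame of the 2010-era codec2 bitstream

def flip_bit(frame: int, i: int) -> int:
    """Flip bit i (0 = least significant) of a frame packed into an int."""
    return frame ^ (1 << i)

def munge(frame: int, n_flips: int = 1, rng=None) -> int:
    """Distort a frame by flipping n_flips randomly chosen bits."""
    rng = rng or random.Random()
    for _ in range(n_flips):
        frame = flip_bit(frame, rng.randrange(FRAME_BITS))
    return frame
```

Feeding the munged frames back through the decoder and listening, as described above, is then a matter of re-packing the integer into the codec's byte layout.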

The munger showed that energy didn't seem to contribute much, if
anything, to the sound quality: I could set any or all of that field's
bits to 0 or 1 with no audible effect. Wo was a different matter. With
all of its bits set to 1, the original voice was preserved, but flipping
any 1 to 0 lowered the volume significantly, and too many 1-->0 flips
made the sound inaudible. In effect, it behaved like volume. I had
expected it to be some sort of frequency, but it did not work out that
way. Removing the voicing bits had a significant effect, though not one
I understood: the voice sounded somewhat like a whisper, but not quite.
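As a side note, Wo in Codec 2 is the pitch (the fundamental frequency in radians per sample), even though in these experiments it behaved like a volume control. Here is a hedged sketch of what its quantiser might look like, assuming a log-uniform 7-bit quantiser over pitch periods of 20-160 samples at 8 kHz (roughly 50-400 Hz); the real table depends on the codec version and mode:

```python
import math

FS = 8000                 # sample rate in Hz
P_MIN, P_MAX = 20, 160    # assumed pitch-period limits, in samples
WO_BITS = 7               # assumed quantiser resolution

def decode_wo(index: int) -> float:
    """Map a quantiser index to Wo in radians/sample (log-uniform grid)."""
    wo_min = 2 * math.pi / P_MAX
    wo_max = 2 * math.pi / P_MIN
    step = (math.log10(wo_max) - math.log10(wo_min)) / 2 ** WO_BITS
    return 10 ** (math.log10(wo_min) + step * (index + 0.5))

def wo_to_hz(wo: float) -> float:
    """Fundamental frequency in Hz for a given Wo."""
    return wo * FS / (2 * math.pi)
```

Under these assumptions, index 0 maps to a pitch just above 50 Hz and index 127 to just under 400 Hz, which may explain why large bit changes to Wo have such a drastic audible effect.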

I've learned enough from this munger to possibly create "autotune" --
which is a step in the right direction and may help some people who have
distorted voices -- but the grail of voice synthesis from few parameters
escapes me.

I don't understand the fields of the codec frame well enough. I see the
docs, and know it's 51 bits. I can translate the frames to a C#
structure and bit-bang the bytes, but I can't find a predictable
pattern.

Questions:
1. How do I better understand what the LSPs, energy, Wo, etc. fields are
doing?
2. Does anyone out there have any thoughts on how I can achieve the goal
of few-parameter voice construction?

Thanks,
Imran
David Rowe
2016-04-19 21:05:02 UTC
Permalink
Hello Imran,

The LSPs capture the short term spectrum of the speech signal, a
graphical explanation here:

http://www.rowetel.com/blog/?p=2255

However, rather than LSPs you might be better off using the model
parameters directly, i.e. {Am}. c2sim is a useful tool for that.

I also have lots of videos on how Codec 2 works, e.g. from LCA 2012.

It would also be a good idea to use an up to date version of the codec
source.

Cheers,

David
_______________________________________________
Freetel-codec2 mailing list
https://lists.sourceforge.net/lists/listinfo/freetel-codec2
Bruce Perens
2016-04-20 03:20:49 UTC
Permalink
Imran,

That's a really interesting project. David Rowe has a Ph.D. in voice coding
and will probably have more to say. Codec2 is interesting for some aspects
of your project and maybe not a perfect fit. This might help you get
started in understanding the codec:

27 Jan 2012, Codec 2 talk at linux.conf.au 2012 (voted best talk of
conference!) Video and Slides. This talk has a really easy to understand
graphical description of Codec 2, a discussion on patent free codecs, and
the strong links between Ham Radio and the Open Source movement. More on
lca.conf.au 2012 in this blog post.


My feeling is that Codec2 as presently coded removes a lot of the
personality of the speaker. It's meant to convey "communications quality"
voice, with all of the meaning but without some of the information that we
use instinctively to identify the speaker.

However, understanding how the vocal tract model of Codec2 works might
indeed give someone hints on how to produce a model-based, rather than
sample-based, system to reproduce lost speech.

Thanks

Bruce
Bruce Perens
2016-04-20 03:30:20 UTC
Permalink
Imran,

Please look at festvox.org if you haven't done so. They provide a speech
synthesizer for which it is possible to record a human diphone database. I
don't know if this is smaller than the sample-based model you're attempting
to replace. A long time ago, I did meet a researcher at CMU who succeeded
in contributing a model of his own voice.

Thanks

Bruce
Alan Beard
2016-04-20 06:37:07 UTC
Permalink
Hi all,
In actual use (FreeDV), we recognize the speaker in just a few words.
Perhaps, since I've been doing ham radio for 30+ years, I've become good
at recognizing speakers under poor signal conditions.

Alan VK2ZIW
Alan

Evil flourishes when good men do nothing.
Consider Jesus.
---------------------------------------------------------------------------
Alan Beard               Unix Support Technician from 1984 to today
70 Wedmore Rd.           Sun Solaris, AIX, HP/UX, Linux, SCO, MIPS
Emu Heights N.S.W. 2750  Routers, terminal servers, printers, terminals etc..
+61 2 47353013 (h)       Support Programming, shell scripting, "C", assembler
0414 353013 (mobile)     After uni, electronics tech
Imran
2016-04-20 21:36:09 UTC
Permalink
Hi All,

Thanks for all the pointers. I'll take a deeper look at those docs and
see if I can figure out more about the LSPs from there. The idea of
using the model directly is really appealing. Is there a newer .NET
port that includes it?

I'm avoiding the Festival method of pre-recorded diphones specifically
because of the "if a person has already lost their voice, recording is
impossible" problem -- getting pre-recorded samples of the specific text
needed for festival from an old VCR or answering machine tape is highly
unlikely. A human can hear just a few words of speech, like, "It's easy
to tell the depth of a well", and imagine that same voice saying
anything else, for example -- "The cat sat on a black hat". I want to
see if it's possible to give a computer that same ability.

I've begun wondering whether this might really be a search problem: 51
bits yields ~2.25 quadrillion possible frames. However, treating it as
a search problem will probably only work if the codec frames don't carry
over information between frames. Are codec2 frames independent of each
other? Can changing the order of frames alter anything except the order
of generated sounds from the decoder?
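For scale, the numbers can be checked directly (the 36-bit LSP figure is the one quoted earlier in the thread):

```python
FRAME_BITS = 51
LSP_BITS = 36   # LSP field width quoted earlier in the thread

total = 2 ** FRAME_BITS
lsp_only = 2 ** LSP_BITS

print(f"whole frame: {total:,}")     # 2,251,799,813,685,248  (~2.25e15)
print(f"LSP field:   {lsp_only:,}")  # 68,719,476,736         (~6.9e10)
```

Even the LSP field alone is far too large for exhaustive search, so any search-based approach would need a perceptual distance metric to guide it rather than brute force.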

Thanks,
Imran
--
Imran
***@fastmail.com

On Tue, Apr 19, 2016, at 11:37 PM,
Send Freetel-codec2 mailing list submissions to
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.sourceforge.net/lists/listinfo/freetel-codec2
or, via email, send a message with subject or body 'help' to
You can reach the person managing the list at
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Freetel-codec2 digest..."
1. Re: voice banking + codec2 (Bruce Perens)
2. Re: voice banking + codec2, actual use (Alan Beard)
----------------------------------------------------------------------
Message: 1
Date: Tue, 19 Apr 2016 20:30:20 -0700
Subject: Re: [Freetel-codec2] voice banking + codec2
Content-Type: text/plain; charset="utf-8"
Imran,
Please look at festvox.org if you haven't done so. They provide a speech
synthesizer for which it is possible to record a human diphone database.
I
don't know if this is smaller than the sample-based model you're
attempting
to replace. A long time ago, I did meet a researcher at CMU who succeeded
in contributing a model of his own voice.
Thanks
Bruce
Post by David Rowe
Imran,
That's a really interesting project. David Rowe has a Ph.D. in voice
coding and will probably have more to say. Codec2 is interesting for some
aspects of your project and maybe not a perfect fit. This might help you
27 Jan 2012, Codec 2 talk at linux.conf.au 2012 (voted best talk of
conference!) Video and Slides. This talk has a really easy to understand
graphical description of Codec 2, a discussion on patent free codecs, and
the strong links between Ham Radio and the Open Source movement. More on
lca.conf.au 2012 in this blog post.
My feeling is that Codec2 as presently coded removes a lot of the
personality of the speaker. It's meant to convey "communications quality"
voice, with all of the meaning but without some of the information that we
use instinctively to identify the speaker.
However, understanding how the vocal tract model of Codec2 works might
indeed give someone hints on how to produce a model-based, rather than
sample-based, system to reproduce lost speech.
Thanks
Bruce
Post by David Rowe
Hello Imran,
The LSPs capture the short term spectrum of the speech signal, a
http://www.rowetel.com/blog/?p=2255
However rather than LSPs you might be better off using the model
parameters directly, i.e {Am}. c2sim is a useful tool for that.
I also have lots of videos on how Codec 2 works, e.g. from LCA 2012.
It would also be a good idea to use an up to date version of the codec
source.
Cheers,
David
Post by Imran
Hi All,
I'm doing a hobby project to try building a voice banking system for
people who are losing their voices due to diseases like ALS or
age-related voice loss. Existing voice banks require extensive voice
samples for a person, and sadly, many people do not have the ability to
record enough samples to build a voice bank the traditional way. This
is especially common in the elderly, who lose their voices gradually,
then suddenly. Traditional voice banking requires a month of reading
text with the hope of finding every phonetic transition along with every
intonation of every phone. So, while there are only 44 english phones,
there are thousands of variations of the phone and transitions between
phones needed for voice banking.
My hope is to be able to synthesis a "perfect" version of a person's
voice with very few parameters for text to speech systems. Enough that
I might be able to figure out the parameters from a short audio clip a
person may have stored in the past. For example, I might take a base
phone, like "aaaa", and modify it to match a person's voice by altering
very few parameters.
So, I did some research into sound and synthesis -- and I discovered
codec2 by accident. It seems to meet the needs of being small enough to
understand, and perhaps generate frames from. I'm a C# programmer, so
I'm using a .NET port done by Mikhail Nasyrov from the 2010 codec2
codebase, and built a "sound munger" to try and understand the synthesis
process better.
The munger takes pre-recorded frames of me saying "aaaaaa", and randomly
distorts them in some way. For example, one distortion that worked
surprisingly well was bitshifting the first 8 bits of the 36-bit LSPs
field. This changed the voice to sound higher, without introducing
distortion. Single bit changes to the other LSPs rarely did anything
other than insert noise.
The munger showed that energy didn't seem to contribute much, if
anything, to the sound quality. I could set any/all of the bits to 0 or
1 for that field, and it seemed to have no effect. Wo was a different
matter. I could set all bits to 1 of the Wo and the original voice
would be preserved, but changing any 1 to 0 lowered the volume
significantly. Too many 1-->0 bit flips, and the sound would be
inaudible. It became, "volume". I had expected this to be some sort of
frequency, but it did not work out that way. Removing the voicing bits
had significant effect, but not that I understood. The voice sounded
somewhat like a whisper, but not quite.
I've learned enough from this munger to possibly create "autotune" --
which is a step in the right direction and may help some people who have
distorted voices -- but the grail of voice synthesis from few parameters
escapes me.
I don't understand the fields of the codec frame well enough. I see the
docs, and know it's 51 bits. I can translate the frames to a C#
structure and bit-bang the bytes, but I can't find a predictable
pattern.
1. How do I better understand what the LSPs, energy, Wo, etc fields are
doing?
2. Does anyone out there have any thoughts on how I can achieve the goal
of few-parameter voice construction?
Thanks,
Imran
_______________________________________________
Freetel-codec2 mailing list
https://lists.sourceforge.net/lists/listinfo/freetel-codec2
------------------------------
Message: 2
Date: Wed, 20 Apr 2016 16:37:07 +1000
Subject: Re: [Freetel-codec2] voice banking + codec2, actual use
Content-Type: text/plain; charset="utf-8"
Hi all,
In actual use of FreeDV, we recognize the speaker within just a few
words. Perhaps, since I've been doing Ham Radio for 30+ years, I've
become good at recognizing people in poor signal conditions.
Alan VK2ZIW
Alan
Evil flourishes when good men do nothing.
Consider Jesus.
---------------------------------------------------------------------------
Alan Beard               Unix Support Technician from 1984 to today
70 Wedmore Rd.           Sun Solaris, AIX, HP/UX, Linux, SCO, MIPS
Emu Heights N.S.W. 2750  Routers, terminal servers, printers, terminals etc..
+61 2 47353013 (h)       Support Programming, shell scripting, "C", assembler
0414 353013 (mobile)     After uni, electronics tech
End of Freetel-codec2 Digest, Vol 72, Issue 4
*********************************************
Imran
2016-04-21 00:49:04 UTC
Permalink
I've looked more at the "is this a search" idea, and generated color
maps of me saying "ah" and "sh".

The patterns are striking -- and my gut feel tells me that a sparse map
approach may work to create a recognizer. If I can create a recognizer
for a voice, then I can use directed evolution from base phones/diphones
to find a voice. In other words, I could do "voice mining" of the
"sound space" that the codec can encode.

If you want to see the color maps I generated, you can see them on my
blog post here:
http://ipeerbhai.wordpress.com/2016/04/21/codec2-sparse-map/

There may be some errors in the frame representation -- Lots of BitArray
--> Byte --> BitArray casting, and I think the voicing bits are getting
lost in this current implementation, as I expect the last two bits of
each frame to be red in the "Ah" picture.

What really surprises me is how consistent the "ah" frames are. Very
little variation in the frames.
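The per-bit "color map" idea can be sketched as a bit-frequency table. The 51-bit frame size and the packed-integer frame representation are assumptions for illustration:

```python
def bit_frequency(frames, frame_bits=51):
    """For each bit position, the fraction of frames with that bit set.

    frames: iterable of ints, one packed codec frame each.
    Values pinned near 0.0 or 1.0 mark the 'sparse' positions that
    barely vary across an utterance like a held "ah".
    """
    frames = list(frames)
    counts = [0] * frame_bits
    for f in frames:
        for i in range(frame_bits):
            if f & (1 << i):
                counts[i] += 1
    return [c / len(frames) for c in counts]
```

Positions whose frequency sits at 0.0 or 1.0 for a given phone would be the stable "signature" bits a recognizer could key on.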
--
Imran
***@fastmail.com

On Wed, Apr 20, 2016, at 02:36 PM,
Send Freetel-codec2 mailing list submissions to
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.sourceforge.net/lists/listinfo/freetel-codec2
or, via email, send a message with subject or body 'help' to
You can reach the person managing the list at
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Freetel-codec2 digest..."
1. Re: GPL Golay (23,12) (Tomas Härdin)
2. Re: GPL Golay (23,12) (Steve)
3. Re: voice banking + codec2 (Imran)
----------------------------------------------------------------------
Message: 1
Date: Wed, 20 Apr 2016 10:19:44 +0200
Subject: Re: [Freetel-codec2] GPL Golay (23,12)
Content-Type: text/plain; charset="windows-1252"
I felt inspired and took a shot at implementing an encoder/decoder
based on available information. See attached golay.c, which steps
through each possible message and correctable bit flips. It also
checks that each possible codeword has been generated. I hereby
place this implementation in the public domain so no one else has
to get irritated due to lack of usable implementations.
As "Public Domain" by nature is not an actual license, it can be
problematic. I would recommend MIT as one of the least restrictive
licenses available.
Sure, anything is fine really. Was mostly thinking for anyone stumbling
on this via a search engine and in need of some snippets
/Tomas
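Since the attached golay.c isn't reproduced in this digest, here is an independent minimal sketch of a systematic Golay (23,12) encoder built on GF(2) polynomial division; the generator 0xAE3 is one of the two standard choices, and this is an illustration, not Tomas's code:

```python
G = 0xAE3  # x^11 + x^9 + x^7 + x^6 + x^5 + x + 1, degree-11 generator

def gf2_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division (bits = coefficients)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def golay_encode(msg: int) -> int:
    """Encode a 12-bit message as a 23-bit systematic codeword:
    message in the top 12 bits, 11 parity bits below."""
    assert 0 <= msg < (1 << 12)
    shifted = msg << 11
    return shifted | gf2_mod(shifted, G)

# Every nonzero codeword has Hamming weight >= 7, so the code corrects
# up to 3 bit errors -- the property an exhaustive test like the one
# described above verifies by stepping through all correctable flips.
```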
------------------------------
Message: 2
Date: Wed, 20 Apr 2016 09:27:56 -0500
Subject: Re: [Freetel-codec2] GPL Golay (23,12)
Content-Type: text/plain; charset="utf-8"
I kind of agree that Public Domain kind of scares people these days.
Just my 2 cents, but LGPL 2.1, like the rest of the library code, is
probably the best. It has the same cash value as Public Domain, but
doesn't scare companies away from putting it into their repositories
and using it.
Steve
P.S. Thanks for the code, I'm going to try it out.
------------------------------
Message: 3
Date: Wed, 20 Apr 2016 14:36:09 -0700
Subject: Re: [Freetel-codec2] voice banking + codec2
Content-Type: text/plain
Hi All,
Thanks for all the pointers. I'll take a deeper look at those docs and
see if I can figure out more about the LSPs from there. The idea of
using the model directly is really appealing. Is there a newer .NET
port that includes it?
I'm avoiding the festival method of pre-recorded diphones specifically
because of the "if a person has already lost their voice, recording is
impossible" problem -- getting pre-recorded samples of the specific text
needed for festival from an old VCR or answering machine tape is highly
unlikely. A human can hear just a few words of speech, like, "It's easy
to tell the depth of a well", and imagine that same voice saying
anything else, for example -- "The cat sat on a black hat". I want to
see if it's possible to give a computer that same ability.
In some way, I've begun wondering if maybe this is a search problem? 51
bits yields 2^51 ~= 2.25 quadrillion possible frames. However, treating it as
a search problem will probably only work if the codec frames don't carry
over information between frames. Are codec2 frames independent of each
other? Can changing the order of frames alter anything except the order
of generated sounds from the decoder?
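A quick sanity check on the search-space arithmetic; the per-frame independence is exactly the assumption the question above leaves open:

```python
# Back-of-envelope check on the frame space (assumes each 51-bit frame
# can be decoded independently of its neighbours).
FRAME_BITS = 51
n_frames = 2 ** FRAME_BITS
print(f"{n_frames:,} possible frames")  # 2,251,799,813,685,248 (~2.25 quadrillion)

# Even auditioning candidates at 25 frames/second, exhaustively playing
# the whole space would take millions of years of audio:
years = n_frames / 25 / (3600 * 24 * 365)
print(f"~{years:.2e} years")
```

So a brute-force search is out; any "voice mining" would need a recognizer to steer a directed search, as suggested above.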
Thanks,
Imran
--
Imran
On Tue, Apr 19, 2016, at 11:37 PM,
1. Re: voice banking + codec2 (Bruce Perens)
2. Re: voice banking + codec2, actual use (Alan Beard)
----------------------------------------------------------------------
Message: 1
Date: Tue, 19 Apr 2016 20:30:20 -0700
Subject: Re: [Freetel-codec2] voice banking + codec2
Content-Type: text/plain; charset="utf-8"
Imran,
Please look at festvox.org if you haven't done so. They provide a speech
synthesizer for which it is possible to record a human diphone database.
I don't know if this is smaller than the sample-based model you're
attempting to replace. A long time ago, I did meet a researcher at CMU
who succeeded in contributing a model of his own voice.
Thanks
Bruce
Post by David Rowe
Imran,
That's a really interesting project. David Rowe has a Ph.D. in voice
coding and will probably have more to say. Codec2 is interesting for some
aspects of your project and maybe not a perfect fit. This might help you
27 Jan 2012, Codec 2 talk at linux.conf.au 2012 (voted best talk of
conference!) Video and Slides. This talk has a really easy to understand
graphical description of Codec 2, a discussion on patent free codecs, and
the strong links between Ham Radio and the Open Source movement. More on
lca.conf.au 2012 in this blog post.
My feeling is that Codec2 as presently coded removes a lot of the
personality of the speaker. It's meant to convey "communications quality"
voice, with all of the meaning but without some of the information that we
use instinctively to identify the speaker.
However, understanding how the vocal tract model of Codec2 works might
indeed give someone hints on how to produce a model-based, rather than
sample-based, system to reproduce lost speech.
Thanks
Bruce
Post by David Rowe
Hello Imran,
The LSPs capture the short term spectrum of the speech signal; see
http://www.rowetel.com/blog/?p=2255
However, rather than LSPs you might be better off using the model
parameters directly, i.e. {Am}. c2sim is a useful tool for that.
I also have lots of videos on how Codec 2 works, e.g. from LCA 2012.
It would also be a good idea to use an up to date version of the codec
source.
Cheers,
David
End of Freetel-codec2 Digest, Vol 72, Issue 5
*********************************************