
Posted: Sun Apr 29, 2007 12:50 pm
by medway
Immanuel wrote:
medway wrote:Bit crushing doesn't utilize dither, so yes, of course it will incur distortion.
I see you describing loss of resolution there.


As I've said before, dither randomizes the distortion present when reducing bit depth so that it becomes uncorrelated with the signal; in other words, it becomes noise. The signal is still harmonically pure, but with a higher noise floor.

...

Alternatively, if you have Wavelab, you can set the master output to 8 bit, notice the distortion, then turn on dither and hear the distortion disappear, replaced by a higher noise floor.
But that 8-bit signal will not have the same resolution = the same level of detail and purity as the original 16- or 24-bit signal. Therefore the bit count is not only about noise level.
No, I am not describing resolution there. A lack of precision leading to distortion, yes. Bits are not resolution; they are depth.

Resolution in context with computers is defined as:

"the degree of sharpness of a computer-generated image as measured by the number of dots per linear inch in a hard-copy printout or the number of pixels across and down on a display screen."

This has nothing to do with how bit depth works in defining an audio property.

Concerning the 8-bit test: there is no loss of detail or purity. As I stated, the 8-bit signal is harmonically pure, which means NO distortion. Even the Sony link talks about this; maybe you didn't read it, so here it is in short:

"dither turns a quantised numerical signal conduit into the equivalent of a naturally continuous (un-quantised) system, which exhibits a finite signal to noise ratio with no practical limit to harmonic signal resolution. In other words the inescapable presence of quantisation in numerical systems does not forcibly lead to 'discontinuity' or 'resolution loss' in the signal. Misunderstandings of this fact underpin many of the most damaging misconceptions surrounding digital audio systems. It can also be deduced from the above plots that any undithered digital representation of an audio signal is effectively illegal."

That means you have for all intents and purposes an analog signal with its accompanying noise floor.

You must get the attachment of "detail" and "resolution" out of your head when you think of bits. Bits determine the amplitude and therefore the dynamic range. Sample rate handles the frequency. That's it. You are attributing things to the function of bit depth that simply aren't there.
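For anyone who wants to see this in numbers rather than in Wavelab, here is a rough numpy sketch of that 8-bit test (the -6 dBFS 1 kHz sine, the window and the "worst bin" measure are just illustrative choices, not anything Wavelab does internally):

Code:
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)                   # -6 dBFS, 1 kHz sine

lsb = 2.0 / (2 ** 8)                                      # one 8-bit step on a -1..+1 scale
tpdf = (np.random.rand(fs) - np.random.rand(fs)) * lsb    # TPDF dither, +/- 1 LSB

undithered = np.round(x / lsb) * lsb
dithered   = np.round((x + tpdf) / lsb) * lsb

for name, y in (("undithered", undithered), ("dithered  ", dithered)):
    err = y - x
    spec = np.abs(np.fft.rfft(err * np.hanning(fs)))
    print(name,
          "error RMS %.1f dBFS," % (20 * np.log10(np.sqrt(np.mean(err ** 2)))),
          "worst error bin %.1f dB above the mean" % (20 * np.log10(spec.max() / spec.mean())))

The undithered error spectrum shows spikes at harmonics of 1 kHz (distortion, correlated with the signal); the dithered error is flat, just a plain noise floor a few dB higher.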

It just takes some reprogramming to get past all the myths and misinformation out there, much of it contributed by manufacturers trying to sell you the newest product.

Posted: Sun Apr 29, 2007 12:51 pm
by medway
hubird wrote:(OT) are we going to quote that lengthy list of numbers with every new post again in this thread :o
besides the 'quote' button there's also a simple but nice 'reply' button... :-)
Yes you're right :) sorry

Posted: Sun Apr 29, 2007 1:59 pm
by husker
"resolution" and ultimately quality in the context of audio really relates to the bit-rate, which involves both sample rate and bit depth.

That's why 1-bit resolution can work... if you sample at 2.8MHz...

http://en.wikipedia.org/wiki/Direct_Stream_Digital
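Just to put the numbers from that link side by side (a back-of-envelope comparison only; standard DSD64 runs at 64 x 44.1 kHz, roughly 2.8 MHz):

Code:
cd_bitrate  = 44100 * 16        # bits per second per channel on CD
dsd_bitrate = 2822400 * 1       # bits per second per channel on DSD64
print(cd_bitrate, dsd_bitrate, dsd_bitrate / cd_bitrate)   # 705600 2822400 4.0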

Posted: Sun Apr 29, 2007 2:43 pm
by hubird
thanks Medway for the flexibility :-)

A nice trick to address the right person, if you expect someone else to post before you, is starting with "@ name", or quoting the relevant paragraph of course.
It makes reading any thread a lot more pleasant, especially if a very long quote is replied to with just 'indeed' or something :lol:
(The Elektron users forum is a sad example of the visual chaos you get when everybody always quotes everything).

on with the interesting thread now; Husker said something in between, by the looks of it, at least theoretically :-)

Posted: Mon Apr 30, 2007 12:03 am
by tgstgs
just want to add
-------
DSD is another technology!
Very good for the companies' cause of better copy protection!
Advantages and disadvantages are discussed by the pros; it's not clear what's best at the moment.
In short I would say better aliasing behaviour and better bandwidth, but more noise and more disc space (I have to add I never did an A/B test; that's just my simple summary of the experts' statements!)
--------------
bit depth = bits per sample, so the bits are the depth (amplitude);
more bits = more distance down to the noise;
--------------
as for the math:
it depends on the calculation you want to make and the source you have access to;
e.g. with an amplitude of 12000 decimal and wanting to add 1 dB, better stay simple and do it in integer;
but with a frequency of E2 at about 82.41 Hz, calculating 3 notes up (tempered) would need some acrobatic maths in integer to be as precise as in float (see the sketch below);
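A tiny sketch of that tempered-pitch example done the straightforward way in float (E2 = 82.41 Hz is the figure from the post; three semitones up lands on G2):

Code:
e2 = 82.41
g2 = e2 * 2 ** (3 / 12)     # one equal-tempered semitone is a factor of 2**(1/12)
print(round(g2, 2))         # ~98.0 Hz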
--------------
for the channel/master fader effect some have noticed:
at first look I would tend to say it doesn't matter;
at second look I would guess that each channel has its own noise;
the more channels you use, the more noise you add to the master with each channel;
so lowering the channels could indeed make a difference compared to lowering the master;
--------------
differences to native:
Scope is said to process sample by sample, so it's like hardware;
in native you process blocks of samples; for today's CPUs just as fast, but another concept of programming!
Imagine a Fourier transform without a block of samples...

---------------

good vibes from vienna

Posted: Mon Apr 30, 2007 1:18 am
by medway
husker wrote:"Resolution", and ultimately quality, in the context of audio really relates to the bit rate, which involves both sample rate and bit depth.

That's why 1-bit resolution can work... if you sample at 2.8MHz...

http://en.wikipedia.org/wiki/Direct_Stream_Digital
It's still not "resolution". Just drop the term permanently as it does not describe digital audio. It's one of the biggest reasons people have problems understanding digital audio as it's a very misleading term. It fuels the whole "more bits is better" and "moving faders loses resolution" debates, amongst others...

Posted: Mon Apr 30, 2007 1:47 am
by medway
tgstgs wrote:just want to add

for the channel/master fader effect some have noticed:
at first look I would tend to say it doesn't matter;
at second look I would guess that each channel has its own noise;
the more channels you use, the more noise you add to the master with each channel;
so lowering the channels could indeed make a difference compared to lowering the master;
--------------


good vibes from vienna
You do create distortion when moving faders, but the level is incredibly small. Even with multiple faders added you're still looking at levels down below -120 dB.

Psychological factors are immensely more influential on what we think we hear than levels at -120 dB.
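The rough arithmetic behind "still below -120 dB" (assuming each channel contributes an uncorrelated error on the order of one 24-bit LSB of noise, about -144 dBFS; uncorrelated noise powers add):

Code:
import math

per_channel_db = -144.0                        # assumed error per channel, ~24-bit LSB noise
channels = 64
summed_db = per_channel_db + 10 * math.log10(channels)
print(round(summed_db, 1))                     # about -126 dBFS for 64 channels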

Posted: Mon Apr 30, 2007 2:22 am
by tgstgs
I'm talking about noise from an audio recording
-------
if you add noise to a sine, the continuous wave looks like it goes up and down a little bit, sample by sample;
I couldn't translate it better;

in numbers, the sample values go up and down relative to the original;
when processing sample by sample, these additions and subtractions can theoretically modify the master more than modifying the master itself would, when using a lot of channels;

there is no effect when using only one channel, of course!

but I agree that it's minimal;

greetings

Posted: Mon Apr 30, 2007 4:52 am
by medway
tgstgs> Sorry, I can't completely follow here how the single channels would modify the master more than changing the master itself would. But if we both agree the distortion is so far below the audible range as not to be discernible, then let's just leave it at that.

Thanks for trying to clarify.

Posted: Mon Apr 30, 2007 11:08 am
by Immanuel
Resolution in context with computers is defined as:

"the degree of sharpness of a computer-generated image as measured by the number of dots per linear inch in a hard-copy printout or the number of pixels across and down on a display screen."
"the number of pixels across and down on a display screen."
If you want to use the display analogy, you could argue that going from 44.1kHz 16 bit to 44.1kHz 1 bit you go from a "display resolution" of 44100*65536 to a "display resolution" of 44100*2. Seen this way, it is indeed a loss of resolution.


It can also be deduced from the above plots that any undithered digital representation of an audio signal is effectively illegal."
Well, in my mind the dithered signal is "illegal" too, then. Saying anything else is like saying that dither cures 100%. But if dither cured 100%, there probably wouldn't be different dither shapes to choose from, as one would be enough. So unless you allow for "less illegal" (IMO it's an on/off word), all audio signals will be illegal representations of the original.


You must get the attachment of "detail" and "resolution" out of your head when you think of bits. Bits determine the amplitude and therefore the dynamic range. Sample rate handles the frequency. That's it. You are attributing things to the function of bit depth that simply aren't there.
I did a test. I played a CD through a decimator set to 1 (one) bit. I expected to lose all information below -6dB ... meaning almost everything. I didn't. The instruments were still there. They still played in tune too. This really was a surprise to me. And I must say that I am undecided now. I do think that it is logically right to talk about loss of resolution (re my "display analogy" from before). But on the other hand, I didn't meet the expected loss of detail. Sticking to the visual analogy, you could say that the picture was still taken on an ISO 100 film ... but on the same frame x number of randomized pictures of chaos were taken (the added noise). It is still an ISO 100 film. The level of detail is the same, but some details are now buried under so much noise that they are lost to perception. Is this a loss of resolution? I guess one could answer both yes and no - the choice ends up being a personal preference in the use of words.

I learned something from this experiment. I learned that a recording can reveal sounds outside the dynamic range of the bit depth. Still, I like the sound of the higher bit depths better. I cannot yet explain why. They just sound more natural, relaxed and easy to my ears. The 1-bit CD did not sound pretty. And even though the melody stayed the same, and the details were still there, the integrity of some sounds suffered more than others. The snare drum sounded very different. The legato singing was less affected. So to my ears, the higher bit depths sound more true. It is, however, arguable whether they are less detailed.
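For what it's worth, the experiment is easy to imitate in numpy (a crude sketch only: a two-tone test signal instead of a CD, and TPDF dither added before the 1-bit step, which the decimator in the test presumably did not do):

Code:
import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 0.4 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)

dither  = np.random.rand(fs) - np.random.rand(fs)        # TPDF noise
one_bit = np.where(x + dither >= 0, 1.0, -1.0)           # the whole signal in one bit

spec = 20 * np.log10(np.abs(np.fft.rfft(one_bit * np.hanning(fs))) + 1e-12)
for f in (440, 660):
    print(f, "Hz sits", round(float(spec[f] - np.median(spec)), 1),
          "dB above the median noise floor")

Both tones are still clearly there, just sitting on an enormous noise floor - which matches the "details are buried in noise rather than gone" description above.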

Posted: Mon Apr 30, 2007 1:17 pm
by medway
Immanuel>

First of all, that wasn't an analogy, it was a definition of what resolution is.

It's a term to describe how images are composed, not audio.

More bits can be seen as giving more 'precision', but not 'resolution'.

The longer you hold onto the term 'resolution' the harder it will be to actually understand digital audio for what it is.

The reason there are different dither shapes is to mask the noise so that it is less obtrusive, by placing it in areas of the audio band that we are less sensitive to. It is not about one being more effective than another. Dither works, for all intents and purposes, 100% and leaves the audio as if it were unquantized.
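A small sketch of what a "shape" adds on top of plain TPDF dither (a first-order error-feedback loop, shown only to illustrate the principle; this is not any particular product's curve, and numpy plus a 16-bit target are just assumptions for the example):

Code:
import numpy as np

fs, n = 44100, 44100
rng = np.random.default_rng(0)
x = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)
lsb = 2.0 / (2 ** 16)

def requantize(x, lsb, shaped):
    out, e = np.empty_like(x), 0.0
    for i, s in enumerate(x):
        v = s - e if shaped else s                     # first-order error feedback
        d = (rng.random() - rng.random()) * lsb        # TPDF dither, +/- 1 LSB
        q = np.round((v + d) / lsb) * lsb
        e = q - v                                      # error fed back to the next sample
        out[i] = q
    return out

for shaped in (False, True):
    err  = requantize(x, lsb, shaped) - x
    spec = np.abs(np.fft.rfft(err)) ** 2
    lo, hi = spec[:10000].sum(), spec[10000:].sum()    # error energy below / above ~10 kHz
    print("shaped" if shaped else "flat  ",
          "-> %.0f%% of the error energy above 10 kHz" % (100 * hi / (lo + hi)))

The flat version spreads the noise evenly; the shaped one pushes most of it up where the ear is least sensitive, which is all a "dither shape" really does.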

What is a decimator? Plugin? Try to find something that lets you reduce to 8 bits and add dither. You need the dither to make the test interesting.

You have to drop the visual analogies along with the term resolution. They go hand in hand. Neither parallels nor describes audio correctly.

I believe people get hung up on the visual analogy because it is much easier to get to grips with, as it's something we see in daily life without thinking much of it. Audio, on the other hand, is a less direct, "invisible" force.

So we rely on visual descriptions of what audio looks like in both the physical and digital realms. Couple that with the fact that what you see on your computer (stair-step waveforms) is not what comes out of the converters, and it can be confusing, to say the least, to get your head around.

As far as higher bit depths being "truer", they just have lower noise. But even with a dithered 16-bit output the noise is still generally far below what you would ever pick up in a typical listening environment (around -93 dB).

In fact, ironically, it's the analog gear that is the limitation here, not digital. 24-bit converters are not true 24 bit, as their specs are generally around -110 dB at best. That equates to about 18-19 bits.

For playback, 16 bit is more than adequate, especially considering the noise already present in the analog gear used to record and then play back the audio.
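The rule-of-thumb numbers behind those figures (the classic 6.02*N + 1.76 dB formula for an ideal N-bit quantizer, run forwards and backwards; real dithered noise floors land a few dB away from this):

Code:
def dyn_range_db(bits):
    return 6.02 * bits + 1.76          # ideal dynamic range of an N-bit quantizer

def effective_bits(noise_db):
    return (abs(noise_db) - 1.76) / 6.02

print(round(dyn_range_db(16), 1))      # ~98 dB theoretical for 16 bit
print(round(effective_bits(110), 1))   # a -110 dB converter is ~18 effective bits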

Posted: Tue May 01, 2007 4:57 am
by valis
Seems to me that all of the discussion so far has been about what happens when a single signal is 'bit reduced' compared to the original, especially when dither is mixed in after. The quantization error in that case is minimized and the noise floor is improved by adding noise. However, the quantization error isn't just 'noise'; it also has a masking effect.

So what happens when these masking effects are multiplied throughout a signal chain that does multiple stages of processing per channel, and then sums a large number of these channels? Certainly introducing dither as the 'last step' mitigates things to a certain degree, but don't errors accumulate enough for certain masking effects to still be present?

Posted: Tue May 01, 2007 7:59 am
by medway
valis wrote:Seems to me that all of the discussion so far has been about what happens when a single signal is 'bit reduced' compared to the original, especially when dither is mixed in after. The quantization error in that case is minimized and the noise floor is improved by adding noise. However, the quantization error isn't just 'noise'; it also has a masking effect.

So what happens when these masking effects are multiplied throughout a signal chain that does multiple stages of processing per channel, and then sums a large number of these channels? Certainly introducing dither as the 'last step' mitigates things to a certain degree, but don't errors accumulate enough for certain masking effects to still be present?
The quantization error isn't minimized, it's eliminated. There's no masking effect, just more noise, which in and of itself can be considered masking if you want to look at it that way.

Dither should be applied any time there is a reduction in bit depth. So if a plugin inserted on a channel converts up to 48-bit double precision and then back down to a lower target, it should be dithered. This is where you stick to TPDF-based dither, which is neutral.

If the final mix is then to be brought down to, say, 16 bits, you can add a colored dither to help keep it out of the audible range. You can't use this type of dither in the mixing process, as it can become noticeable after further processing.

Now to your question: yes, you could have a build-up of potential problems if the mixing environment is not correctly dithering where it should. I think this was mainly a problem with Pro Tools, especially TDM, which dropped to 24 bits across certain paths.
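As a minimal sketch of "dither every time the word length drops", a helper like this (hypothetical names, numpy assumed) would be applied after each stage that comes back down from the longer internal format, with a shaped dither reserved for the final 16-bit output only:

Code:
import numpy as np

rng = np.random.default_rng()

def to_bits(x, bits):
    """Requantize to `bits` with +/- 1 LSB TPDF dither."""
    lsb = 2.0 / (2 ** bits)
    d = (rng.random(x.shape) - rng.random(x.shape)) * lsb
    return np.round((x + d) / lsb) * lsb

# hypothetical usage:
# channel = to_bits(plugin_process(channel), 24)   # after an internal high-precision stage
# master  = to_bits(mix, 16)                       # final reduction (shaped dither here in practice)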

Posted: Tue May 01, 2007 9:28 am
by hifiboom
In my opinion, bits do represent "resolution"...

In the digital world there is no such thing as a "noise ratio"... only in the context where the digital signal is converted into an analog one via a D/A converter.

If you have a 4-bit sample and you find it noisy, that is because of quantization errors.

And that's something completely different: if you have no signal in a 4-bit sample ("0"), it's still perfect silence.

It's the same as when you take a picture with a digicam and try to represent it with fewer pixels...
The side effects could be called aliasing... (e.g. replace 5 pixels with 2.)
The colour white ("0") still stays a perfect white, and it doesn't matter whether you have 1280x1024 or 128x103 pixels... or one pixel.

Posted: Tue May 01, 2007 1:08 pm
by medway
hifiboom> I understand your post, but I do not see how it defends the use of "resolution". I also don't understand the insistence on holding onto that term.

Posted: Tue May 01, 2007 3:37 pm
by garyb
he means the ability to perceive detail.

Posted: Wed May 02, 2007 12:14 am
by astroman
yes, but what Medway is trying to explain is that 'detail' isn't defined by bit depth in the way marketing propagates it.
An increased noise level will of course hide details, but at levels far below those (usually) discussed in this context.

As a funny coincidence I got some hands-on exercise with Medway's description - I don't even pretend to understand the math background :P
Recently I couldn't resist an old Philips CD304 (a 10 kg monster with cast-iron mechanics) for 5 bucks, which is a so-called 14-bit player.
Repaired the 'standard' cold solder joints (for which it is known) and oops - it plays again :D
Compared it to my 'true' 16-bit CD650 and it's almost impossible to detect a difference in detail. The 'coloring' of the 650 may be a tiny bit 'sweeter', but then the players have fairly different output stages.

As the 304 is a classic, its circuitry is covered well in the literature... (excerpt) 16 signal bits plus 12 coefficient bits (from the filter) make a word length of 28 bits. Only the 14 most significant bits are output, reducing SNR (theoretically) by 12 dB compared to a 16-bit system. 6 dB are gained back by oversampling, another 8.4 dB by 'noise shaping' (an error-feedback loop)... etc etc. (SAA7030 and TDA1540 for reference - and don't nail me on that math stuff...)
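Taking those quoted figures at face value, the bookkeeping almost balances out:

Code:
lost     = 2 * 6.02        # dropping from 16 to 14 output bits, ~6 dB per bit
regained = 6.0 + 8.4       # oversampling + the error-feedback loop, as quoted above
print(round(regained - lost, 1))   # ~ +2.4 dB, i.e. on paper no worse than a plain 16-bit DAC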

Whatever they did - the audible result is convincing - and I'll never joke about '14-bit' CD players again... :oops:
Since I cannot deal with that kind of math at calculus level, I simply see the samples as part of an (obscure...) transfer function that reconstructs the audio, no longer the 'classical' sine graph...
Otherwise the (supposedly) well-known 'intersample overshoots' simply wouldn't exist ;)

cheers, Tom

Posted: Wed May 02, 2007 3:35 am
by Immanuel
valis wrote:the quantization error isn't just 'noise'; it also has a masking effect.
Yes, some details will still be there when doing bit reduction, but they will be masked by noise.

Posted: Wed May 02, 2007 3:39 am
by Immanuel
hifiboom wrote:In the digital world there is no such thing as a "noise ratio"... only in the context where the digital signal is converted into an analog one via a D/A converter.

If you have a 4-bit sample and you find it noisy, that is because of quantization errors.

And that's something completely different: if you have no signal in a 4-bit sample ("0"), it's still perfect silence.
It might be quantization errors, but it sure sounds like (ugly) noise.

Posted: Wed May 02, 2007 7:56 am
by hifiboom
astro, especially regarding the CD players, what matters more in that context is how well the D/A converter interpolates the stepped digital waveform back into a smooth, sine-based one...
A digital wave file in that context is always stepped, as every sample point represents a period of 1 second / 44100 ≈ 0.023 ms of time.
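Spelling out those figures (plain arithmetic, nothing more):

Code:
period_ms = 1000.0 / 44100         # one sample period in milliseconds
print(round(period_ms, 4))         # ~0.0227 ms per sample
print(round(44100 / 1000.0, 1))    # ~44.1 samples per cycle of a 1 kHz tone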

For some deeper information:
http://happybob.com/marc/digital_sucks/#
http://www.teamcombooks.com/mp3handbook/11.htm

Digital waveforms are not continuous functions like analog ones...

If you lower the bit depth to very low values, you get noise, because the sine waves more or less end up as square waves...

That's a completely different phenomenon from noise in the analog world...

If you have perfect silence in a 4-bit audio file, you could amplify it endlessly without getting noise; in the analog world you cannot.