acoustic mirror

Compare notes on how to get the most from Scope devices, etc.

Moderators: valis, garyb

maket
Posts: 314
Joined: Mon Jul 09, 2001 4:00 pm
Location: Ukraine
Contact:

Post by maket »

The one device I wish existed: Acoustic Mirror for Pulsar.

junklight
Posts: 101
Joined: Thu Nov 08, 2001 4:00 pm
Contact:

Post by junklight »

I was thinking this very thought the other day - it should work great in DSP. And it would add an extra option for high quality reverb on the platform.

In addition, convolving (for that is what Acoustic Mirror does) can be used to make some weird-as-f*ck sounds - I once made an excellent Tinkerbell sound for a theatre production by convolving someone speaking with the sounds of little bells.

Any of the developers out there fancy giving this a shot?
__________________________________________
junklight - dark experimental electronics
http://www.junklight.com
wayne
Posts: 2375
Joined: Sun Dec 23, 2001 4:00 pm
Location: Australia

Post by wayne »

what is this convolution, exactly? (excuse the ignorance)
junklight
Posts: 101
Joined: Thu Nov 08, 2001 4:00 pm
Contact:

Post by junklight »

The technical explanation is multiplying two signals together in the frequency domain.

Bet you understand it all now *grin*

Convolution is used in DSP to do all sorts of things - the application we are talking about here allows you to "sample" reverbs. For example, you can record what is called an impulse from a location or piece of equipment - say a great-sounding church, recording studio, or concert hall - and apply that reverb to your signal. The reverb sounds excellent - pretty much as if you had recorded in the place that was sampled. The downside, of course, is that you have no control over the parameters of the reverb.
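
To make that concrete, here is a rough offline sketch in Python/NumPy - nothing Scope- or Pulsar-specific, the file names are made up, and it assumes mono WAV files:

    import numpy as np
    import soundfile as sf  # assumed available for WAV reading/writing

    # Hypothetical files: a dry recording and a recorded room impulse
    dry, sr = sf.read("dry_vocal.wav")
    ir, _ = sf.read("church_impulse.wav")

    # Length of the full convolution result, rounded up to a power of two for the FFT
    n = len(dry) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()

    # "Multiplying two signals together in the frequency domain"
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft), nfft)[:n]

    # Keep the level sane and write the result
    wet /= np.max(np.abs(wet)) + 1e-12
    sf.write("wet_vocal.wav", wet, sr)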

Sony (amongst others) make a "sampling" reverb that uses this technique:

http://www.sospubs.co.uk/sos/dec99/arti ... res777.htm

Sound Forge comes with a plugin called Acoustic Mirror that also does this. Damn useful tool.

mark
__________________________________________
junklight - dark experimental electronics
http://www.junklight.com
Neutron
Posts: 2274
Joined: Sun Apr 29, 2001 4:00 pm
Location: Great white north eh
Contact:

Post by Neutron »

Judging by the size of that Sony unit, you would need a lot more DSP than what's on a Pulsar.

Any articles on how it is actually done? What does "multiplying in the frequency domain" mean? I would like to see a diagram.
subhuman
Posts: 2573
Joined: Thu Mar 29, 2001 4:00 pm
Location: Galaxy Inside

Post by subhuman »

Yes, after talking with my buddy about this subject, it seems it would take an enormous amount of system resources, and working with convolution in realtime is EVEN MORE intensive. If a plugin like this were to come, it would be DSP-intensive and expensive - and most Pulsar users would probably complain about the price :wink:
kensuguro
Posts: 4434
Joined: Sun Jul 08, 2001 4:00 pm
Location: BPM 60 to somewhere around 150
Contact:

Post by kensuguro »

To my understanding, I thought convolution was like this:
you get an impulse file, which is probably a series of (sample value, time) pairs, and it would result in something similar to what Masterverb has (like room1, philharmonic, etc.).
Then, you would play the wav file as specified by the impulse data, which would result in something like a reverb.

But if you use something other than a proper impulse file, you'd get something more like an effect.
Here's the CoolEdit help description, which is more accurate:
"Convolution is the effect of multiplying every sample in one wave or impulse by the samples that are contained within another waveform. In a sense, this feature uses one waveform to "model" the sound of another waveform. The result can be that of filtering, echoing, phase shifting, or any combination of these effects. That is, any filtered version of a waveform can be echoed at any delay, any number of times. For example, "convolving" someone saying "Hey" with a drum track (short full spectrum sounds such as snares work best) will result in the drums saying "Hey"."
junklight
Posts: 101
Joined: Thu Nov 08, 2001 4:00 pm
Contact:

Post by junklight »

The amount of DSP needed is NOT that bad - running Acoustic Mirror in real time on my PIII 600 laptop uses maybe 20% of the CPU.

The DSP process is very simple: FFT the signal, FFT the impulse, multiply them together, and add the result to the output stream. I have no idea what the Sony box is full of, but I guess it's basically a computer.
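
Something like this, block by block - a rough overlap-add sketch in Python/NumPy; the block size is arbitrary and it says nothing about how Acoustic Mirror or the Sony box actually do it:

    import numpy as np

    def fft_convolve_blocks(signal, impulse, block=1024):
        """Overlap-add: FFT the impulse once, then FFT each input block,
        multiply, inverse-FFT, and add the result into the output stream."""
        nfft = 1
        while nfft < block + len(impulse) - 1:
            nfft *= 2
        H = np.fft.rfft(impulse, nfft)                         # fft -> impulse (done once)
        out = np.zeros(len(signal) + len(impulse) - 1)
        for start in range(0, len(signal), block):
            x = signal[start:start + block]
            y = np.fft.irfft(np.fft.rfft(x, nfft) * H, nfft)   # fft -> signal, multiply
            end = min(start + nfft, len(out))
            out[start:end] += y[:end - start]                  # add to the out stream
        return out

    # Sanity check against direct convolution
    x = np.random.randn(48000)
    h = np.random.randn(2048) * np.exp(-np.arange(2048) / 400.0)
    assert np.allclose(fft_convolve_blocks(x, h), np.convolve(x, h))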

mark
__________________________________________
junklight - dark experimental electronics
http://www.junklight.com
kensuguro
Posts: 4434
Joined: Sun Jul 08, 2001 4:00 pm
Location: BPM 60 to somewhere around 150
Contact:

Post by kensuguro »

It'd be cool if there were a Pulsar version of it... seems possible to me, if Masterverb is doing something similar. Just add in the FFT part... err, or is FFT even possible on the Pulsar?
junklight
Posts: 101
Joined: Thu Nov 08, 2001 4:00 pm
Contact:

Post by junklight »

FFT=Fast Fourier Transform

This is how a lot of DSP things work: do an FFT, do some processing, do an inverse FFT. I doubt that Masterverb is doing this - the traditional way to build a reverb is with allpass filters, etc. That does give you a lot of control - if you are really keen to know what goes on inside a reverb, check out Freeverb (an excellent VST/DX plugin that is totally open source and free).
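
For anyone wondering what "allpass filters etc." means in practice, here is a toy Schroeder-style reverb in Python - a crude simplification of the Freeverb idea, not its actual code, with placeholder delay lengths and gains:

    import numpy as np

    def comb(x, delay, feedback):
        """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
        y = np.copy(x)
        for n in range(delay, len(y)):
            y[n] += feedback * y[n - delay]
        return y

    def allpass(x, delay, gain):
        """Schroeder allpass: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay]."""
        y = np.zeros_like(x)
        for n in range(len(x)):
            xd = x[n - delay] if n >= delay else 0.0
            yd = y[n - delay] if n >= delay else 0.0
            y[n] = -gain * x[n] + xd + gain * yd
        return y

    def toy_reverb(dry):
        # dry: 1-D float array. Four parallel combs with mutually prime
        # delays, then two allpasses in series, mixed back with the dry signal.
        wet = sum(comb(dry, d, 0.84) for d in (1557, 1617, 1491, 1422)) / 4.0
        for d in (225, 556):
            wet = allpass(wet, d, 0.5)
        return 0.7 * dry + 0.3 * wet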

The DSP part of this is NOT the hard part - the hard part is getting good impulses for use with the reverb - an expensive process, as all location recording is - especially since the quality of the recordings is VITAL for a good reverb sound. Perhaps they could be licensed from Sonic Foundry, or from the makers of Altiverb on the Mac.

mark
__________________________________________
junklight - dark experimental electronics
http://www.junklight.com
kensuguro
Posts: 4434
Joined: Sun Jul 08, 2001 4:00 pm
Location: BPM 60 to somewhere around 150
Contact:

Post by kensuguro »

Just in case you didn't know, I come from a heavy DSP background as well, so you needn't go into the details. (I studied FFT and phase vocoding in college.) Thanks for the Freeverb tip, though - I'll check it out sometime.

I was looking into how one would actually go about implementing it.. looked in some of my old csound tutorials and here's what I found:
http://www.duke.edu/~scott1/Studio/CSND ... ution.html
It seems that this is rather a sample-to-sample type of calculation, something Pulsar might not be capable of doing (just as there aren't any granular devices). Oh well, hopefully someone finds a way someday. Either way, we normal Pulsarians can only hope this will happen.

junklight
Posts: 101
Joined: Thu Nov 08, 2001 4:00 pm
Contact:

Post by junklight »

Sorry about preaching to the converted :smile:..

Don't see why the Pulsar shouldn't be able to do it - after all, it can play samples and it can process them. The way to make it quick is to pre-FFT the impulse - I reckon Acoustic Mirror does this.

Surely the reason that Pulsar doesn't have these kinds of things is that Creamware are aiming for a more mainstream audience - with the retro analog and DAW stuff. Experimenters are probably a very small target audience - most people would see a granulator, for example, as a nice little toy but not a reason to buy. These things would only come if Creamware opened development up to all and sundry.

mark
__________________________________________
junklight - dark experimental electronics
http://www.junklight.com
eliam
Posts: 1093
Joined: Sat Jan 05, 2002 4:00 pm
Location: Montreal, Quebec
Contact:

Post by eliam »

That's probably true, but the possibility of increasing the quality and diversity of the reverbs (and other effects) is quite a mainstream issue... And in fact, I think that offering such a tool would make the platform more appealing to studio owners/developers as well as music "professionals".
There may be many important issues other than that, but the reverb is the reverb... the most widely used of all effects... Now, if you can do a lot more with the plug-in, that's neat, but it would sell on the reverb alone, even if it's a bit expensive...

Great thread, btw, very interesting!
kensuguro
Posts: 4434
Joined: Sun Jul 08, 2001 4:00 pm
Location: BPM 60 to somewhere around 150
Contact:

Post by kensuguro »

> Sorry about preaching to the converted :smile:..
Great to see another academic on board! Cheers.
> Don't see why the Pulsar shouldn't be able to do it - after all, it can play samples and it can process them. The way to make it quick is to pre-FFT the impulse - I reckon Acoustic Mirror does this.
It seems to me that, so far, only the STS5000 does this... when you do the "formant shift" or whatever it's called, it's probably an FFT or STFT analysis, then put through a heterodyne resynthesis or an additive type of synthesis... That's probably the only kind of "stored analysed data" - a stored matrix, in more "programmer" vocab - on the Pulsar platform. I am guessing that Creamware is trying very hard to keep its matrix code within company gates, as can be seen with the "analysis" feature on most of their new devices.
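
Nobody outside Creamware knows what the STS5000 really does internally, but the generic "analyse, store the data, resynthesise" idea looks roughly like this - a minimal STFT round-trip sketch in Python/NumPy, nothing Scope-specific:

    import numpy as np

    def stft(x, win=1024, hop=256):
        """Analysis: one complex spectrum per hop (the stored 'matrix')."""
        w = np.hanning(win)
        starts = range(0, len(x) - win, hop)
        return np.array([np.fft.rfft(w * x[i:i + win]) for i in starts]), w

    def istft(spectra, w, hop=256):
        """Resynthesis: inverse-FFT each frame and overlap-add it back together."""
        win = len(w)
        out = np.zeros(hop * len(spectra) + win)
        norm = np.zeros_like(out)
        for k, frame in enumerate(spectra):
            i = k * hop
            out[i:i + win] += w * np.fft.irfft(frame, win)
            norm[i:i + win] += w * w
        return out / np.maximum(norm, 1e-12)

    # Anything done to S between these two calls (scaling magnitudes,
    # shifting bins, morphing two analyses) is a spectral/resynthesis effect.
    x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
    S, w = stft(x)
    y = istft(S, w)
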
> Surely the reason that Pulsar doesn't have these kinds of things is that Creamware are aiming for a more mainstream audience - with the retro analog and DAW stuff. Experimenters are probably a very small target audience - most people would see a granulator, for example, as a nice little toy but not a reason to buy. These things would only come if Creamware opened development up to all and sundry.
Sad, but this is so true. As you may already know, mod2 is quite a versatile platform... if all you do is make VA synths. As soon as you want to do anything from a mathematical or scientific point of view, you face hills after mountains of crazy unit conversions. It still seems to me that the fact that Pulsar cannot do matrix or "write to file" sorts of calls (at least in the public domain) limits users and third-party developers to making complete realtime devices. I still think a way to do synthesis based on analysis data - in other words, some sort of resynthesis - would open up many new horizons. (as in my post in http://www.planetz.com/forums/viewtopic ... forum=9&19)
I think lots of you guys know very well what I'm talking about. Spectral synthesis, spectral morphing, most anything that defies the time, timbre, and pitch domains. I still don't think Csound, Cmix, SuperCollider, Max/MSP, or PD are a good solution, as they are not fully realtime and require extreme amounts of programming training. What do y'all think?

garyb
Moderator
Posts: 23246
Joined: Sun Apr 15, 2001 4:00 pm
Location: ghetto by the sea

Post by garyb »

I think that if people like you, who have knowledge as developers, keep on this track, then people like you and me will have new tools, eventually.
eliam
Posts: 1093
Joined: Sat Jan 05, 2002 4:00 pm
Location: Montreal, Quebec
Contact:

Post by eliam »

Hey, Ken! I read your little discourse through the link you gave, and while it seems totally interesting, I think I lack some basic concepts to fully grasp all that you mean... Any hints on where and how I could update my brain in terms of sound synthesis so I can understand every detail of what is explained?

Thanks.
kensuguro
Posts: 4434
Joined: Sun Jul 08, 2001 4:00 pm
Location: BPM 60 to somewhere around 150
Contact:

Post by kensuguro »

Hehe... I was totally going mental from too much math when I made that post... That's why I sound like a "mad scientist getting excited over microscopic things".

I learned a lot from textbooks, which I sold. But still, the rest can be read at the IRCAM site, which is half English and half French, I think. Check them out at:
http://catalogue.ircam.fr/articles/index-e.html
I couldn't understand a word of French, but somehow I managed to get a couple of nice papers. This is juicy stuff.

Also, learning about Csound can be vital for pushing sound to its extremes. It's a programming language, sort of. I don't know how to program at all, and I hate it when I try, but mod2 can also be considered a programming language, so if you can handle that, it shouldn't be too hard to make a few function calls, etc. (it's based on C).
You can get it at:
http://mitpress2.mit.edu/e-books/csound/frontpage.html
Yep, it's MIT. Quite the techno place to be.

And here's a lot about spectral synthesis, part of what I was talking about on my "new ideas" post. This synth is also free, but I still haven't got it to work yet.
http://www.iua.upf.es/sms/
These guys are a blast. If the software works as they say it does, then this is something not yet seen on the big market.

Anyway, almost everything I was talking about in that post is FFT or STFT in some form. The rest is about how to fiddle with the data. The one about chopping up the audio is a form of concatenative synthesis. It's used in speech generation these days, and it really is literally chopping up vocal tones and pasting them together to make it speak. So I was saying that, put together with STFT, it would be cool. Oh well, reading the IRCAM papers would be better than me trying to explain these things.

Anyway, hope these things interest you. They're probably the way synthesis is going, and it's always good to know things beforehand.

Oh yeah, and last but not least, if the thought of programming makes you go haywire (it sure makes me), then you might want to look into PD, short for Pure Data, which is a Max/MSP clone for Windows. You can get it at:
http://iem.kug.ac.at/pd/
Unlike mod2 and other modular synths, PD's modules are very elemental: you start with an osc module, then connect a number module to give it frequency data, and that's when you get sound. But that's also why it's so versatile. It's a give-and-take deal: easy but with narrow versatility, or mega confusing but it does anything you tell it to do. And since PD's been around a while, there are premade modules (like comp, tb303emu, etc.) all over the net. It might be cool to take a look, because it's like going under the hood of mod2. There's lots to learn from PD.

eliam
Posts: 1093
Joined: Sat Jan 05, 2002 4:00 pm
Location: Montreal, Quebec
Contact:

Post by eliam »

Thanks a lot! I just had a quick look at the above links... Exactly the kind of info I was looking for... Hours of fun ahead!

I think this kind of knowledge will complement my studies in orchestration pretty well... These are two worlds that have to merge, without a doubt, even if they still remain entities of their own...

Thanks for your very useful advice; very appropriate, as usual...! :grin:
caleb
Posts: 356
Joined: Tue Feb 05, 2002 4:00 pm
Location: Melbourne, Australia

Post by caleb »

I still think adding comb filters to synths was pretty innovative though.

Maybe it's really blah and common to other people, but Amphetamine was the first synth I'd used that had a comb filter on it. I notice that a lot of Zarg's synths have them too.

But I come from VSTi land and I'd never seen it before.

Sounded pretty good too.
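
For the VSTi-land folks: a comb filter is basically a short delay mixed back in with the signal, which puts evenly spaced peaks and notches across the spectrum. A quick sketch of the feed-forward flavour in Python/NumPy (not Amphetamine's or Zarg's actual code - the delay and gain are just illustrative):

    import numpy as np

    def feedforward_comb(x, delay, gain):
        """y[n] = x[n] + gain * x[n - delay]: peaks/notches spaced sr/delay apart."""
        y = np.copy(x)
        y[delay:] += gain * x[:-delay]
        return y

    sr = 44100
    saw = 2.0 * ((np.arange(sr) * 110.0 / sr) % 1.0) - 1.0   # crude 110 Hz sawtooth
    combed = feedforward_comb(saw, delay=100, gain=0.9)      # notches spaced 441 Hz apart
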
Caleb

Happiness is hidden behind the obvious.
Neutron
Posts: 2274
Joined: Sun Apr 29, 2001 4:00 pm
Location: Great white north eh
Contact:

Post by Neutron »

Could the fact that the Scope FFT module is a DLL have something to do with it? There may be latency, as the DLL has to work in Windows rather than on the DSPs.

That's fine for displaying a graph, but it's not exactly a "synchronous device".