
Posted: Thu Aug 18, 2005 9:09 am
by keyznstuff
Hi,

thanks for all the tips and advice from everyone who contributed to my earlier post.

I have decided to hook myself up with a Scope Pro card. I'm very excited about setting it up, but unfortunately I won't be home until November, so that will have to wait!

I plan to use it mostly for the mixing environment and the 3rd party plugins available out there. I may eventually use VDAT for tracking.

I also have a UAD-1 card and would like to use the FX from there. I imagine that I'll have to route out of SFP to get to the UAD-1 effects and then route back in again. If anyone has any general tips and hints about this, that would be greatly appreciated. I assume I'll have to deal with the joys of latency at some stage.

kind regards, joel.

Posted: Thu Aug 18, 2005 9:13 am
by Mr Arkadin
Well done - although it's a long time till November; hope it goes quickly for you so you can get SCOPEing. Unless you use SCOPE exclusively, I'm afraid you will have to deal with the UAD's latency, as far as I know. I think there are some people here who use these cards, so hopefully they'll chime in and tell you I'm wrong. Of course, with SCOPE there's hardly any latency at all. Maybe you could do your main writing using SCOPE and your sequencer, then use the UAD when you want to add certain effects - otherwise that latency might drive you nuts!

Best regards,
Mr A.

Posted: Thu Aug 18, 2005 10:36 am
by Liquid Len
AFAIK there is no way to use the UAD-1 card directly from SFP - you need a VST host. I use both cards, but the UAD-1 is only for effects on something I've already recorded (using Cubase). You can't track with a UAD-1 card unless you really don't mind the latency. There are plenty of good effects you can track with in SFP.

Posted: Thu Aug 18, 2005 2:13 pm
by darkrezin
As the others said, forget using the UAD1 realtime. It's only useful once you've bounced something down to audio in your sequencer.

Posted: Thu Aug 18, 2005 2:32 pm
by R.D. Olivaw
I'm using the UAD1 in realtime with external MIDI sources very often. You just have to trigger the MIDI parts earlier to compensate for the UAD latency. With some VST hosts it's really quick and easy to set up.
But tracking a musician through the UAD1 is almost impossible, since each UAD plug adds a minimum of 2 times the audio buffer latency. And the UAD1 does not perform very well at "small" buffer values (most people use the UAD1 with a 1024 buffer setting to avoid click-and-pop problems).
Joel, I think you shouldn't send your audio from CW Scope to the VST host for UAD and then back to Scope: that way you won't benefit from the automatic delay compensation many VST hosts have. Just play your audio sources directly in the VST host where you're using the UAD1, then route the audio tracks/groups to CW Scope. The latency of the UAD plugs will be compensated for.
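For anyone who wants to see the numbers, here's a rough sketch of the arithmetic above. The sample rate, PPQ resolution, and the "2x buffer per plug" rule of thumb are assumptions for illustration, not anything official from UA:

```python
# Sketch of the latency arithmetic described above.
# Assumptions: 44.1 kHz sample rate, 1024-sample buffer (the
# setting most people use), and at least 2x the buffer of added
# latency per UAD plug, as mentioned in the post.

SAMPLE_RATE = 44_100   # Hz (assumption)
BUFFER_SIZE = 1024     # samples (assumption)

def buffer_latency_ms(buffer_size, sample_rate=SAMPLE_RATE):
    """Duration of one audio buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000.0

def uad_chain_latency_ms(num_plugins, buffer_size=BUFFER_SIZE):
    """Minimum added latency for a chain of UAD plugs, using the
    2x-buffer-per-plug rule of thumb from the post."""
    return num_plugins * 2 * buffer_latency_ms(buffer_size)

def midi_pretrigger_ticks(latency_ms, bpm, ppq=480):
    """How many MIDI ticks earlier to trigger a part in order to
    compensate for a given latency (480 PPQ is an assumption)."""
    ms_per_tick = 60_000.0 / (bpm * ppq)
    return round(latency_ms / ms_per_tick)

print(buffer_latency_ms(1024))          # ~23.2 ms per buffer
print(uad_chain_latency_ms(2))          # ~92.9 ms for two plugs
print(midi_pretrigger_ticks(92.9, 120)) # ticks to shift at 120 BPM
```

So even two plugs at the usual 1024 buffer already put you near 100 ms - which is why tracking through the UAD1 is a non-starter, while pre-triggered MIDI works fine.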

Posted: Thu Aug 18, 2005 4:34 pm
by darkrezin
At one time, UA announced a UAD card with ADAT I/O, presumably with some kind of application to route audio into the effects. I couldn't wait for the day it came out... unfortunately, that day never came.

I guess there was not enough demand for it to justify development. It's such a shame that VST is the de facto paradigm these days... although I've been hearing of problems with the Virus TI development, and IMHO the product that finally ships will disappoint a lot of people (due to the limitations of VST and USB).

Posted: Thu Aug 18, 2005 8:09 pm
by hubird
I hope so, I have a real one :grin:

Posted: Thu Aug 18, 2005 8:21 pm
by R.D. Olivaw
UA preferred to develop more plug-ins for the UAD1 instead of that exciting UAD2 with I/O. I guess it was the right strategy; the UAD1 is a best seller now.

I agree VST is a poor standard, and Steinberg is not what you could call an innovative team. A lot of people are waiting for VST3...

Posted: Fri Aug 19, 2005 2:09 am
by astroman
Bottom line from a <a href=http://www.soundonsound.com/sos/apr04/a ... an.htm>SoS article</a>:

...Ultimately, it will always remain next to impossible to automatically compensate for plug-in delays in a multitasking computer environment, while at the same time providing low-latency input monitoring, since the two approaches are mutually exclusive. However, one possible solution would be to try running a dedicated DSP card such as the PowerCore or UAD1 alongside a DSP-assisted soundcard such as the Mixtreme or original Pulsar. This would allow 'zero latency' monitoring with DSP effects on your live input signals, with the option of further high-quality delay-compensated plug-in insert and send effects — possibly the best of both worlds!
IMHO latency is often way overestimated - an acoustic signal needs about 10 ms to travel a 3.5 metre distance, and some instruments in a symphony orchestra are 5 (or more) times farther apart - yet the musicians get along with it because they are used to it (it's not all the conductor) :wink:
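The acoustic figures above are easy to check. A quick sketch, assuming the speed of sound in dry air at room temperature (~343 m/s):

```python
# Quick check of the acoustic-delay figures quoted above,
# assuming sound travels at ~343 m/s (dry air, ~20 C).

SPEED_OF_SOUND = 343.0  # m/s (assumption)

def acoustic_delay_ms(distance_m):
    """Time an acoustic signal needs to travel distance_m metres."""
    return distance_m / SPEED_OF_SOUND * 1000.0

print(acoustic_delay_ms(3.5))   # ~10.2 ms, the figure in the post
print(acoustic_delay_ms(17.5))  # ~51 ms across a large orchestra
```

That works out to roughly 3 ms per metre - the same order of magnitude as a typical audio buffer.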

cheers, Tom

Posted: Fri Aug 19, 2005 2:23 am
by darkrezin
I think it's a really big problem if you're playing an instrument and it responds late. It IS possible to get used to it to an extent but it's not at all ideal and is very limiting.

Apart from anything it's a step backwards. What's the point of all this cool technology if you have to change your playing style to take advantage of it?

Posted: Fri Aug 19, 2005 4:00 am
by alfonso
On 2005-08-19 03:09, astroman wrote:

IMHO latency is often way overestimated - an acoustic signal needs about 10 ms to travel a 3.5 metre distance, and some instruments in a symphony orchestra are 5 (or more) times farther apart - yet the musicians get along with it because they are used to it (it's not all the conductor) :wink:

cheers, Tom
The problem is not so hard for the sound that is received; a trained musician has no difficulty playing a little "ahead" or "behind", and some genres even require it as a stylistic trait. But the problem is very hard for the response to an action you perform directly, like the sound coming out when you play. Even a very small inconsistency there is extremely stressful for the brain: you might keep up with it, but your inspiration and expressivity are screwed. Also, in an orchestra the timing is given by a conductor, whose movements, being a visual stimulus, travel much faster, and everyone relates to that much more than to the sound around them.

Posted: Fri Aug 19, 2005 5:31 am
by astroman
On 2005-08-19 05:00, alfonso wrote:
...Also in an orchestra, the timing is given by a director, whose movements, being a visual stimulus, ...
That's why I wrote 'NOT ALL the conductor' above - of course the visual movements travel at the speed of light.
Yet their perception is still kind of 'stressed' by the timing difference, and still they manage to get it right (often at a very high technical level). Could it be because they never bothered? :wink:

Not that I'm against low latency, but I'm against the usual "...it cannot work for me because it's a few ms late" - ultraprecise drummers like Darkrezin excluded :wink:

Everyone may be different, but you probably did some of your best tunes 'live' when your brain (in the sense of technical consciousness) wasn't involved at all, Alfonso - didn't you? :wink:

cheers, Tom

Posted: Fri Aug 19, 2005 6:27 am
by Immanuel
On 2005-08-19 05:00, alfonso wrote:
On 2005-08-19 03:09, astroman wrote:

IMHO latency is often way overestimated - an acoustic signal needs about 10 ms to travel a 3.5 metre distance, and some instruments in a symphony orchestra are 5 (or more) times farther apart - yet the musicians get along with it because they are used to it (it's not all the conductor) :wink:

cheers, Tom
The problem is not so hard for the sound that is received; a trained musician has no difficulty playing a little "ahead" or "behind", and some genres even require it as a stylistic trait. But the problem is very hard for the response to an action you perform directly, like the sound coming out when you play. Even a very small inconsistency there is extremely stressful for the brain: you might keep up with it, but your inspiration and expressivity are screwed. Also, in an orchestra the timing is given by a conductor, whose movements, being a visual stimulus, travel much faster, and everyone relates to that much more than to the sound around them.

Putting on a bit of good-humoured bashing mode here:
I guess everybody working in anything digital is screwed then? AD/DA converters often cost in the region of 2-3 ms each way :wink:
So this will include people playing digital synthesizers (Scope versions included) and people running through digital effects. According to this, most guitar players simply suck (lacking expression and inspiration) - especially the old pros who never went gear-nerding but just played music. A lot of them have some Boss digital delay; if they are fancy, they might have some big TC digital thing. Hey, almost all guitarists are off, because they hear their instruments delayed by about 3 ms per metre of distance from the amp... goes for bass players and analog keyboard players too.

I guess I kind of lean to the Astroman side of this discussion. The effect of a small amount of latency is highly overrated. But then again, I once met a bass player with a good reputation, and he insisted that he couldn't play right unless he was right in front of his rig...

So who am I to tell? Maybe it matters more to some than to others. Maybe the 1 or 2 AD/DA conversions are OK in themselves, but when extra latency is added, it passes the critical point?

Posted: Fri Aug 19, 2005 6:27 am
by alfonso
On 2005-08-19 06:31, astroman wrote:
... but you probably did some of your best tunes 'live' when your brain (in a sense of technical consciousness) wasn't involved at all, Alfonso - didn't you :wink:

cheers, Tom
Well, that's true, but only because my "technical" mind was not bothered by issues... Anyway, we should all know here how the ultra-low latency of playing on SHARCs is better than any other solution... :wink:

cheers..

Posted: Fri Aug 19, 2005 7:22 am
by wolf
Hi,

interesting discussion :smile:

However, I think there are two issues here in the first place: latency and timing. Musicians can compensate for latency (sometimes up to one second or even more), but if they are good, you can still hear whether the timing is there or not.
Timing is a matter of microseconds, latency of milliseconds, so timing is much more critical in that regard.

Now back to latency... I believe it has a lot to do with environment and psychology. If the mind knows you are wearing headphones, it expects an immediate response. If your eyes see the player right beside you, but his sound comes about one second later because your monitors are out of order, you're screwed... but you carry on somehow, because there is no mental conflict like there is with the direct ear input of headphones.

.. well .. hope you got my point :wink:

Posted: Mon Aug 22, 2005 2:16 pm
by darkrezin
Interesting discussion indeed. There are many sides to the latency issue... perception plays a big part for sure.

I'm certainly more of a computer geek than a muso, but even my limited experience with playing instruments like guitar and drums has shown me that 'response' is extremely important. Whenever I record good musicians (which is a lot more common than me recording myself), it's very important to keep the vibe as natural as possible. There are enough distractions to the creative process already without latency being yet another factor messing things up.

It is a lot more noticeable with percussive elements, but I can see its effect when recording synths too - the timing of the groove simply doesn't exactly match what the musicians played. Whenever possible, I suggest using a hardware synth or an SFP device to monitor the sound, and putting the MIDI through whatever you like later on. This always solves the problem.

While it may be possible to play an instrument with 1 second latency, I don't think it would be the most creative and expressive session. Latency has been a problem with every musician I've encountered, and some of them are incredibly talented.

Immanuel - yes, there's definitely a point where it becomes unacceptable. Also, with digital, you cannot lump DSP and CPU-powered processing into the same bracket. DSPs process audio sample by sample. CPUs can only process audio in 64-sample blocks at a time *at best*, as far as I know. This is a very, very small amount of time (about 1.5 ms at 44.1 kHz), but consider that DSPs can calculate audio with 64x better timing resolution. The UAD/PowerCore etc. don't count as DSP devices in this regard, as they use the block-based VST technology to interface with audio.
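The block-size numbers above are easy to verify. A small sketch, assuming a 44.1 kHz sample rate:

```python
# Sketch of the block-size arithmetic in the paragraph above,
# assuming a 44.1 kHz sample rate.

SAMPLE_RATE = 44_100  # Hz (assumption)

def block_latency_ms(block_samples, sample_rate=SAMPLE_RATE):
    """Minimum scheduling granularity for block-based processing:
    events can only land on block boundaries."""
    return block_samples / sample_rate * 1000.0

print(block_latency_ms(64))  # ~1.45 ms per 64-sample block
print(block_latency_ms(1))   # ~0.023 ms for sample-by-sample DSP
print(block_latency_ms(64) / block_latency_ms(1))  # 64x coarser
```

So a 64-sample block really is about 1.5 ms, and per-sample DSP processing has exactly 64 times finer timing granularity, as stated above.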

With all things there are compromises. CPU-driven audio has 1 important advantage - sample accuracy on playback. It depends what is more important to you... if you program your parts by piano roll you won't even notice there's a problem. If you're more of a 'live' musician however, I think that you would prefer the feel of dedicated DSP hardware.

I personally see the block-based approach as a regressive step, but it's weird how people just want to run more instances of stuff now (my personal suspicion is that it's a substitute for really creative musical ideas) - they'll do anything to get it. I have seen Pro Tools TDM & HD users who would rather run 20 instances of an RTAS (native) version than 2 or 3 on their DSP hardware, yet the RTAS versions clearly sound inferior.
