Luna II + Firewire interface
gary; maybe I assumed it functioned like PowerCore or UAD cards? Is that an unreasonable assumption to make, given that their design is extremely similar? Oy. I'm not sure this card is really going to fit into my workflow at this rate. Perhaps I should sell it to someone who would get more use out of it.
Maybe this would be a lot easier without using XTC, so I could access plugins like PSY-Q and Optimaster. I don't want to use the card for my normal audio stuff - just XTC plugin mode.
You might use Scope together with e.g. Cool Edit or Audition (pre-2.0) alongside your other system.
I'm not sure, but this is what I think, at least.
What are you talking about, dude? I own a Powercore Element. It integrates flawlessly - you just open the plugins as VSTs in your sequencer and they render normally like any other VST plugin. I don't have a UAD-1 but even a cursory google search reveals plenty of users explaining how the instantiated plugins are used like any other VST. In fact, I've seen posts specifically recommending you *don't* use real time rendering for UAD cards, and that non-real time is preferable! Of course there seems to be debate on this topic, but there is no question that the UAD plugs CAN be rendered normally.
So again, it's really not an unreasonable assumption I made. And frankly, the inability of Creamware plugins to be properly integrated and rendered like normal VSTs is not something to be defended.
hmmm,
ok, it's possible that you are right. How about this: try a non-realtime render. If the plugin runs on the DSP and not the CPU, then offline rendering is not possible, because offline computations are done by the CPU. DSPs are realtime processors; they don't function the way a CPU does, and the processes that run on them are different. Rendering must happen in real time so the audio can stream through the card and back to the system. It's not a big issue, though - a 4-minute song only takes a mere 4 minutes.
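To make the timing argument concrete, here's a rough sketch in Python - purely illustrative, not how Scope actually works internally. The process_block and dsp_process callables are hypothetical stand-ins; the point is that an offline render loop runs as fast as the CPU allows, while a realtime pass through a DSP card is clocked by the sample rate:

import time

BLOCK = 512            # samples per buffer
RATE = 44100           # samples per second
SONG = 4 * 60 * RATE   # a 4-minute song, in samples

def render_offline(process_block):
    # CPU rendering: the loop runs as fast as the CPU can crunch,
    # so a 4-minute song may bounce in seconds.
    return [process_block(start) for start in range(0, SONG, BLOCK)]

def render_realtime(dsp_process):
    # DSP rendering: every block makes a round-trip through the card,
    # which is clocked at the sample rate - so the pass cannot run
    # faster than the song itself (4 minutes of audio = 4 minutes).
    out = []
    for start in range(0, SONG, BLOCK):
        out.append(dsp_process(start))
        time.sleep(BLOCK / RATE)   # wait for the hardware clock
    return out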
Alright - so really I should be looking at this card more like a piece of outboard gear with a 100% software interface, like a multi-effects box with built-in DSP. That makes a lot more sense.
The routing on the SCOPE platform is still confusing to me. Can you walk me through routing audio from one source through PSY-Q and OptiMaster, then recording it to Audacity? Let's say I already have the rendered WAV that is unmastered.
Just load the ASIO Source and Destination modules (set the number of ASIO channels in the Source module to your needs) and connect them to a mixer of your choice (Source to the inputs, mix-out or monitor-out to the Destination).
Load the inserts you need (PSY-Q and OptiMaster in your case).
Then launch your sequencer program.
Your sequencer should then 'see' the available ASIO channels in the appropriate location - check the 'audio' menu or something similar.
Route the output of each audio track to one of those ASIO-out channels.
Route the ASIO input (which is actually the Scope mixer's output) to a track to record to, and mute that track to prevent feedback.
Start rendering.
Pardon - start recording, in realtime.
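If you'd rather script the capture step instead of recording into Audacity, here's a minimal sketch using the python-sounddevice and soundfile libraries. It assumes the Scope ASIO driver shows up as a PortAudio device; the device name below is hypothetical, so check sd.query_devices() for the real one:

import sounddevice as sd
import soundfile as sf

RATE = 44100           # match your Scope project's sample rate
DURATION = 4 * 60      # realtime pass: a 4-minute song takes 4 minutes
DEVICE = "SCOPE ASIO"  # hypothetical name - check sd.query_devices()

# Record the Scope mixer's output (seen from the PC as an ASIO input).
recording = sd.rec(int(DURATION * RATE), samplerate=RATE,
                   channels=2, device=DEVICE)
sd.wait()              # block until the realtime pass is done
sf.write("mastered.wav", recording, RATE)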
Simple direct routing:
ASIO2 > PSY-Q > OptiMaster > Wave Dest & (e.g. Analog Out for monitoring)
Remember: one output can be connected to one, two, or more inputs.
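In code terms, that chain is just function composition with a fan-out at the end. A hypothetical Python sketch (psy_q and optimaster here are stand-in functions, not the real Scope modules):

def master_chain(buffer, psy_q, optimaster):
    # Serial routing: ASIO2 source -> PSY-Q -> OptiMaster
    shaped = psy_q(buffer)
    mastered = optimaster(shaped)
    # Fan-out: one output feeding two inputs at once
    wave_dest = mastered   # recorded to disk
    analog_out = mastered  # monitored on the speakers
    return wave_dest, analog_out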