Music Manic wrote: Wed Mar 03, 2021 5:09 pm
The 24 Bit file will only become 32FLT if it’s processed, Valis, otherwise it will stay 24 Bit.
Also, if you use a 16 bit ASIO driver, will the DAW truncate the signal at the master out (or the ASIO channels, if you use more than 2)? That’s important to know.
This wasn't universally true even then; many DAWs and soundcard mixers were tested for bit-transparency, meaning: if everything is left at nominal settings, does the signal pass through unaltered? RME's FPGA implementation was specifically built with this in mind, for instance. When I last checked, Logic introduced minor discontinuities. Now how important that is at ~-140 dBFS... well, go back and read the threads here, on Gearslutz and Hydrogenaudio.
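To make the bit-depth point above concrete, here's a small sketch (my own illustration, not code from any DAW) of why a 24-bit sample survives a round trip through 32-bit float unchanged, while a 16-bit path genuinely discards information. The function names and the sample value are hypothetical.

```python
# Sketch: 24-bit PCM <-> 32-bit float is lossless (float32 has a
# 24-bit significand), but truncating to 16 bits is not.
import struct

def to_float32(sample_24bit: int) -> float:
    """Scale a signed 24-bit integer sample to [-1.0, 1.0) and force
    it through an actual 32-bit float representation."""
    f = sample_24bit / 8388608.0  # 2**23
    # Round-trip through 4 bytes so we really have float32 precision,
    # not Python's native 64-bit float.
    return struct.unpack('<f', struct.pack('<f', f))[0]

def from_float32(f: float) -> int:
    return round(f * 8388608.0)

sample = 1234567  # arbitrary 24-bit sample value
assert from_float32(to_float32(sample)) == sample  # bit-transparent

truncated = (sample >> 8) << 8  # low 8 bits dropped, as a 16-bit path would
assert truncated != sample      # that information is gone for good
```

Dither complicates the picture at 16 bit, but the basic arithmetic is why "left nominal, passes unaltered" is even achievable in a float engine.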

I never concerned myself with 32-bit FLT ASIO personally, but it was a big discussion at the time and many people insisted it made huge differences in their output (as do dedicated 'audiophile quality' USB cables for some, it seems).
Music Manic wrote: Wed Mar 03, 2021 5:09 pm
Could you expand on the transfer from DAW to Scope with regards to the signal? Is it just a discrete sample transfer?
Could SSD improve buffer under runs or are they just problems with the O/S?
Each DAW differs in how things are implemented, but basically you have your audio buffer, which is typically either equal to the ASIO buffer size or some multiple thereof, and then a disk buffer. Let's call the ASIO buffer the 'process' buffer, to use Logic's terminology. Some time ago it was decided that OSX was better at managing the disk buffer than Logic, and similarly Windows' own disk subsystem is used on that platform. I use Logic as a comparison because it also has a 'mix' buffer, which is considerably larger. This increases efficiency because data can stay on-die and in-cache longer, rather than making frequent round trips to feed everything. Steinberg has since added the 'ASIO safety buffer' or whatever they call it, and Samplitude & Sequoia actually had the process/mix buffer split before Logic if I recall (very late 90's). I might be wrong on that, but I remember them touting it quite clearly in their marketing, while Emagic had a considerably more arcane set of preferences that basically amounted to the same thing.
Buffers are just memory buckets, as I'm sure you know: read I/O & disk data in... process... write to the output buffer. If you try to process too much and the output buffer can't be fed in time, an ASIO dropout occurs; if you can't read the data in time, same thing. A disk buffer underrun may or may not be handled as a critical event; there was a time when I could simply tell Logic in the settings to keep playing on a buffer underrun, but then you hear awful errors over your speakers.
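The buffer chain described above can be sketched as a toy model (mine, not any DAW's actual code; the buffer sizes are hypothetical): a disk thread keeps a larger disk buffer topped up, the audio callback drains one process-buffer's worth per cycle, and if the disk side stalls, the callback eventually detects an underrun.

```python
# Toy model of the process-buffer / disk-buffer relationship.
from collections import deque

PROCESS_BUFFER = 256              # samples per ASIO callback (illustrative)
DISK_BUFFER = 4 * PROCESS_BUFFER  # disk read-ahead, some multiple thereof

disk_buffer = deque()

def disk_thread_fill(samples_available: int) -> None:
    """Read ahead from 'disk' until the disk buffer is full."""
    while len(disk_buffer) < DISK_BUFFER and samples_available > 0:
        disk_buffer.append(0.0)   # pretend this sample came from the file
        samples_available -= 1

def audio_callback() -> bool:
    """Drain one process-buffer's worth; return False on underrun."""
    if len(disk_buffer) < PROCESS_BUFFER:
        return False              # disk couldn't feed us in time: dropout
    for _ in range(PROCESS_BUFFER):
        disk_buffer.popleft()
    return True

disk_thread_fill(samples_available=DISK_BUFFER)
assert audio_callback()                  # enough read-ahead: plays fine
disk_thread_fill(samples_available=0)    # simulate a stalled disk read
for _ in range(3):
    audio_callback()                     # read-ahead absorbs the stall...
assert not audio_callback()              # ...until it's exhausted: underrun
```

The read-ahead is exactly why a larger disk buffer buys tolerance for slow or bursty storage, at the cost of memory and start latency.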
As for your second question, let's deal with the 'problems with the O/S' part first. The OS tends to be the higher level of resource scheduling, so making sure all of our drivers and OS tuning are in order attends to that part of the equation, as much as possible at least. Disable background crap that might interrupt processing when resources are critically low, resolve drivers hogging CPU time, and so on. You know this from our discussions here and elsewhere, I'm sure.
But SSDs aren't always faster than spinning hard drives, especially SATA ones. The SATA bus doesn't have enough bandwidth to make it worthwhile for these companies to add additional controller lanes to those devices, and with limited lanes you have a limited number of chips that can be addressed effectively as well. This is somewhat improved by today's very dense NAND, but the chips that pack a lot on-die do so by moving well beyond MLC & TLC, to the point where QLC builds now commonly stack 64 and even 96 layers. These chips are slooooow. The approach of apportioning some of the NAND area as a 'faster' pseudo-SLC/MLC cache (fewer signalling levels, so data written there is not only more robust but also faster to write, because less precision is needed in the voltage levels) helps, but only until that area is exhausted to the point where it too needs to go through cleaning cycles before it can be written again. This all concerns writes, of course, which is where VDAT was said to shine by preallocating, and I'm not sure an SSD actually improves that (the area you're writing to may have nothing to do with the preallocated blocks on the SSD, as I'm not sure VDAT has been updated to consider any of this).
In any case, low-end SSDs also tend to eschew any RAM buffer, which further hurts performance under sustained conditions. It's not uncommon to find some SATA SSDs writing slower than a modern HD in sustained sequential writes, especially as queue depth increases: many tracks at once from a main DAW process that is running channels as concurrent threads.
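The cache-exhaustion effect above can be roughed out numerically. This is my own sketch with purely illustrative numbers, not measurements from any real drive: a fast pseudo-SLC area absorbs the first chunk of a write at the headline speed, and everything beyond it drops to the direct-to-QLC rate.

```python
# Rough model of sustained-write speed once the pseudo-SLC cache fills.
SLC_CACHE_GB = 40      # size of the fast cache area (hypothetical)
SLC_SPEED_MBS = 2000   # headline MB/s while the cache absorbs writes
QLC_SPEED_MBS = 150    # MB/s direct to QLC once the cache is full

def avg_write_speed_mbs(total_gb: float) -> float:
    """Average MB/s over a sustained sequential write of total_gb."""
    fast_gb = min(total_gb, SLC_CACHE_GB)
    slow_gb = max(0.0, total_gb - SLC_CACHE_GB)
    seconds = (fast_gb * 1024 / SLC_SPEED_MBS
               + slow_gb * 1024 / QLC_SPEED_MBS)
    return total_gb * 1024 / seconds

print(avg_write_speed_mbs(10))   # short burst: stays at cache speed
print(avg_write_speed_mbs(200))  # long write: far below the spec-sheet number
```

Benchmark queue-depth and the drive's background cleaning cycles push the sustained number down further still, which is why short synthetic benchmarks flatter these drives.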
NVMe drives that are large enough and use good controllers can help, but depending on the chipset you might find dedicated PCIe lanes a challenge. AMD has pushed Intel to relax this somewhat on their consumer chipsets, but both companies still try to push you to the HEDT platforms if you want to run multiple NVMe drives AND many other PCIe-lane-hogging devices (multiple GPUs and so on).
Anyway, NVMe certainly isn't needed for audio users, but it's at least enough of an improvement that I can't see building a new machine without using it for the operating system and associated program files etc. In my case a boot NVMe drive backed up by two SSDs works fine, and I use better models (though not necessarily Samsung "Pro" models) for my project folders. Similarly, I have one dedicated to my library files (Kontakt stuff and other large libraries that will be read on the fly). Things are still massively easier than the days when I was managing multiple partitions on SCSI drives to apportion high-performance areas versus long-term storage, so I think if you stick with the better models an SSD is just fine.