look at this...

Discuss the Creamware ASB and Klangbox hardware boxes

Moderators: valis, garyb

Liquid Len
Posts: 652
Joined: Tue Dec 09, 2003 4:00 pm
Location: Home By The Sea

Post by Liquid Len »

Both these points are low level stuff that isn't even provided with the SDK (there is no Sharc assembler included); the system just chains precompiled blocks (called atoms).
It's too bad there is no Sharc assembler - someone correct me if I'm wrong, but using the SDK is kind of equivalent to using Visual Basic instead of an assembler for Intel (or the next best thing to assembler, C and maybe C++). From a business point of view, it makes sense because if their code becomes easily accessible to the public, their copy protection is compromised. But it means that 3rd party developers operate at a disadvantage compared to Creamware's own coders, who can write in native mode and can now take advantage of things like 'optimizing inner loops'. Just like you can write applications quickly in Visual Basic but you are limited in how you can access the operating system (and how efficiently!).
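To make the analogy a bit more concrete, here's a rough C sketch of the difference (purely illustrative - atom_load/atom_connect are names I made up, not the real SDK API): a third-party developer can only wire opaque precompiled atoms together, while someone with the native tools can write and hand-tune the inner loop itself.

    /* Hypothetical illustration only - not the real Creamware SDK API. */

    /* What a 3rd-party SDK developer effectively does: chain opaque,
     * precompiled blocks ("atoms") whose internals stay hidden.        */
    typedef struct Atom Atom;                        /* opaque DSP block */
    extern Atom *atom_load(const char *name);
    extern void  atom_connect(Atom *src, Atom *dst); /* wire out -> in   */

    void build_filter_device(void)
    {
        Atom *in  = atom_load("input");
        Atom *eq  = atom_load("biquad");    /* can't touch its inner loop */
        Atom *out = atom_load("output");
        atom_connect(in, eq);
        atom_connect(eq, out);
    }

    /* What a native coder with a Sharc assembler could do instead:
     * write (and hand-optimize) the inner loop itself. Shown in C for
     * readability - on the DSP this would be assembly.                 */
    void biquad_inner_loop(const float *x, float *y, int n,
                           float b0, float b1, float b2, float a1, float a2)
    {
        float z1 = 0.0f, z2 = 0.0f;          /* filter state             */
        for (int i = 0; i < n; i++) {        /* the loop you'd optimize  */
            float v = x[i] - a1 * z1 - a2 * z2;
            y[i] = b0 * v + b1 * z1 + b2 * z2;
            z2 = z1;
            z1 = v;
        }
    }

The second function is the kind of thing the SDK never exposes - which is exactly the point about 3rd party developers working one level above Creamware's own coders.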
Liquid Len
Posts: 652
Joined: Tue Dec 09, 2003 4:00 pm
Location: Home By The Sea

Post by Liquid Len »

On 2006-05-12 01:56, alfonso wrote:
Like in all comic masterpieces, the source of the fun is when people use the same words ("runs") to mean different things and don't take immediate action to stop the misunderstanding... :lol:
I think the cause of all this hoopla, Alfonso, is that your answer to Astroman's statement was basically correct, but it wasn't really addressing his statement.

Astroman: the SDK isn't running on Sharcs - it produces code for them (from an Intel CPU)

Alfonso: SDK just loads its modules on dsp exactly as Scope. It only has a different way to manage them and different functionalities.

Now, in what way do these statements contradict each other? Together they sum up the development process. The SDK creates modules and then uses a device driver to establish communication with the soundcard - it has to 'talk' to the motherboard, which can 'talk' to the Scope card through the PCI bus.

The device driver is then able to load the devices it has created onto the Scope card's onboard RAM (it can't load them onto the DSP chips themselves; those chips are processors, not memory - and yes, I know a processor chip has a certain amount of memory called registers, but those aren't used to hold the program code, they keep track of where the processor is and what it's doing in the program code).

Possibly, after loading a program, the device driver is able to tell the card to 'Run program at address 0x003300' or 'Reinitialize the main program because the contents of memory have changed' - obviously the card needs to be told that something has changed, but there are a lot of ways you could design that. (Remember, there's a simple operating system running on the Sharc chips that handles running these modules and talking to whatever's on the other end of the PCI bus.)
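Purely to illustrate that flow (the device node, ioctl codes and struct below are invented - I have no idea what Creamware's real driver interface looks like), the host-side sequence might look roughly like this in C:

    /* Hypothetical sketch of the host-side load sequence. The device
     * node, ioctl codes and struct layout are made up for illustration,
     * not Creamware's actual driver interface. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define SCOPE_IOCTL_WRITE_RAM  0x5301   /* copy image into card RAM */
    #define SCOPE_IOCTL_RUN_AT     0x5302   /* "run program at address" */

    struct scope_load {
        uint32_t    card_addr;   /* destination in the card's onboard RAM */
        const void *image;       /* compiled module (chained atoms)       */
        uint32_t    size;        /* image size in bytes                   */
    };

    int load_and_run(const void *image, uint32_t size)
    {
        int fd = open("/dev/scope0", O_RDWR);        /* fictional node */
        if (fd < 0) { perror("open"); return -1; }

        struct scope_load req = { 0x003300, image, size };

        /* 1. Copy the compiled module across the PCI bus into the card's
         *    onboard RAM, where the Sharcs can get at it.               */
        if (ioctl(fd, SCOPE_IOCTL_WRITE_RAM, &req) < 0) { close(fd); return -1; }

        /* 2. Tell the card's little OS: "run program at address 0x003300"
         *    (or "reinitialize, the contents of memory have changed").   */
        if (ioctl(fd, SCOPE_IOCTL_RUN_AT, &req.card_addr) < 0) { close(fd); return -1; }

        close(fd);
        return 0;
    }

The point is just the two steps: push the compiled image over the PCI bus into the card's RAM, then tell the card's onboard OS where to start.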

If I've used the word 'run' in different contexts on this thread, I apologize for the inaccuracy. But maybe you could point out where.



[ This Message was edited by: Liquid Len on 2006-05-12 14:19 ]
alfonso
Posts: 2225
Joined: Sun Mar 25, 2001 4:00 pm
Location: Fregene.

Post by alfonso »

On 2006-05-12 03:50, astroman wrote:

.........
Both these points are low level stuff that isn't even provided with the SDK (there is no Sharc assembler included); the system just chains precompiled blocks (called atoms).......

cheers, tom
And where are they chained?
I think that they are chained where they are loaded.
If a device and/or a connection exist in the environment, they exist in the dsp's.
No Sharcs, no connection is possible. Isn't that so?
Now, we were talking about the SDK, not about an atom compiler that I never saw. All the operations I make in the SDK, except partly for those related to drivers and to dealing with disks, memory and other hardware, are performed on the dsp's. When I load an atom, I load it in the dsp's. If I connect it to another atom, this happens in the dsp's. I can say that almost all the work I do in the SDK is done in the sharcs.

:smile:

_________________
Get an Experience of Space

[ This Message was edited by: alfonso on 2006-05-12 14:19 ]
Liquid Len
Posts: 652
Joined: Tue Dec 09, 2003 4:00 pm
Location: Home By The Sea

Post by Liquid Len »

Oops, delete this post

[ This Message was edited by: Liquid Len on 2006-05-12 14:16 ]
astroman
Posts: 8452
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

On 2006-05-12 14:10, alfonso wrote:
...All the operations I make in SDK, ..., are just performed on dsp's. When I load an atom I load it in dsp's. If I connect it to another atom, this happens in dsp's. I can say that almost all the work I do in SDK I do it in sharcs...
As LiquidLen mentioned, it could be a language/definition problem :wink:
in fact whatever you intend with the SDK is finally executed (performed) on the Sharc chips if you succeed - which is beyond question considering your talent :smile:
Yet when you load or connect (chain) something in the SDK, that is NOT done on the DSPs.
These processes are done by the logic of the SDK application running on the Intel CPU.
It has a database of each module's requirements, connections etc. (dsp.idx), and it determines the logic by analysing the graphical path.

Then it organizes the execution blocks in a certain way (you know the 'optimizing' message from other contexts, too) and loads them onto the DSPs, which then start to perform your program.
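Just as a guess at what that organizing step could look like (the structs and the first-fit placement below are one plausible strategy I made up for illustration - not CWA's actual algorithm, which only they know), in C:

    /* Hypothetical sketch of the host-side "organize and load" step.
     * The dsp.idx-style database entries and the first-fit placement
     * are invented for illustration, not Creamware's real scheme. */
    #include <stdio.h>

    #define NUM_DSPS 4

    struct module_info {        /* what a dsp.idx-style entry might hold */
        const char *name;
        int cycles;             /* per-sample DSP cycles the atom needs  */
        int words;              /* program/data memory words it needs    */
    };

    struct dsp_budget {
        int cycles_left;
        int words_left;
    };

    /* Walk the modules in the order derived from the graphical signal
     * path and place each one on the first DSP that still has room.   */
    int place_modules(const struct module_info *mods, int n,
                      struct dsp_budget dsp[NUM_DSPS], int assignment[])
    {
        for (int i = 0; i < n; i++) {
            int placed = -1;
            for (int d = 0; d < NUM_DSPS && placed < 0; d++) {
                if (dsp[d].cycles_left >= mods[i].cycles &&
                    dsp[d].words_left  >= mods[i].words) {
                    dsp[d].cycles_left -= mods[i].cycles;
                    dsp[d].words_left  -= mods[i].words;
                    placed = d;
                }
            }
            if (placed < 0) {
                fprintf(stderr, "no room left for %s\n", mods[i].name);
                return -1;
            }
            assignment[i] = placed;  /* later handed to the loader/driver */
        }
        return 0;
    }

Something along these lines would also explain why a project can simply refuse to load once the graph no longer fits onto the available DSPs.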

In this context the system acts similar to an interpreter or a 'just in time' compiler - though this is an analogy.
As you correctly noticed, there is no 'true' compiler/assembler combo like in typical development systems at the machine code level.

On the other hand this makes perfect sense, as it would be impossible to integrate arbitrary code into this type of environment.

It's like Lego, where the blocks need a defined connection pattern - hence everything that's supposed to become a new atom first needs this 'wrapper'.
CWA keeps this tool under the hood, as once its mode of operation is known, anyone interested could reverse engineer the complete collection of atoms at the assembly level.

you're also correct that for the user of the SDK it doesn't make much difference how all this is accomplished, but imho it's easier to deal with the system if its mode of operation is clear.
Thanks to LiquidLen for explaining it with his words - it's a fairly complex thing anyway :smile:

cheers, Tom
Liquid Len
Posts: 652
Joined: Tue Dec 09, 2003 4:00 pm
Location: Home By The Sea

Post by Liquid Len »

astroman wrote:
...organizes the execution blocks in a certain way (you know the 'optimizing' message from other contexts, too) and loads them onto the DSPs, which then start to perform your program...
Tom
Just to be nitpicky - it loads them onto the onboard RAM; since the DSP chips are processors, they don't contain the memory the programs are loaded into, they just have access to it. And probably the device drivers are able to talk to the DSPs to tell them what to run, how to run it, etc.
Liquid Len
Posts: 652
Joined: Tue Dec 09, 2003 4:00 pm
Location: Home By The Sea

Post by Liquid Len »

Possibly the sharc chips could have built-in memory, but I've never heard of that kind of design. Processors would need *small* amounts of storage for things like registers, stack pointers, and data transfer buffers. But to include a lot of memory on a chip would detract from the design of a fast processor, I would think.

[ This Message was edited by: Liquid Len on 2006-05-12 16:05 ]
astroman
Posts: 8452
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

from the Analog PDF of the 21065L (http://www.analog.com/UploadedFiles/Dat ... pdf):
...On the ADSP-21065L, the memory can be configured as a
maximum of 16K words of 32-bit data, 34K words for 16-bit
data, 10K words of 48-bit instructions (and 40-bit data) or
combinations of different word sizes up to 544 Kbits. All the
memory can be accessed as 16-bit, 32-bit or 48-bit.
While each memory block can store combinations of code and
data, accesses are most efficient when one block stores data,
using the DM bus for transfers, and the other block stores
instructions and data, using the PM bus for transfers....
544 Kbits works out to about 68 kilobytes. That order of magnitude - call it 64 kilobytes - may not sound like terribly much in an M$-Office sense of code size, but it was (for example) enough to hold the complete QuickDraw graphics library of the original Mac - or all the other OS-related Toolbox routines (the ROM was 128 KB, split into these two parts).

the only thing that makes me doubt that the dsp code is executed (more or less) directly from main memory is that it could be hijacked rather easily, rendering the copy protection useless.
Remember the days of the C64 with those snapshot buttons that froze memory to disk :grin:

anyway I have no idea about the exact size of the code that is performing in the end, so I wouldn't mind if it's different...

cheers, Tom
symbiote
Posts: 781
Joined: Sat May 01, 2004 4:00 pm

Post by symbiote »

Loading and sequencing "pre-compiled atoms" still totally qualifies as programming, however you would like to interpret the word. Even just loading a pre-compiled program into memory totally qualifies as "programming" and is proper use.

On CISC CPUs, even low-level assembler is exactly like loading "pre-compiled atoms", as the instructions are all micro-coded sequences of even-lower-level instructions that are hidden from the application programmer. The even-lower-level instructions are wired, while the visible assembler instructions are sequences of the wired ones. This lets chip designers build a large number of instructions, some of them fairly complex, into their processors. On RISC CPUs, you manipulate the wired instructions directly.

Programming absolutely DOES NOT require any sort of compiling. Compilers are just a handy tool that lets people write programs in a more human-readable way. But you can totally skip that part, load a hex editor, and enter the instructions in binary directly. It's tedious, but entirely doable with the processor's technical documentation.
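As a tiny demonstration of that point (a sketch assuming x86-64 Linux and that the system allows an executable mapping - nothing to do with Scope, it's just the easiest place to show it): the 'program' here is six bytes entered by hand, and no compiler or assembler ever sees them.

    /* Programming without a compiler: raw machine-code bytes, copied
     * into executable memory and called. x86-64 Linux only, assumes an
     * RWX mapping is permitted. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* machine code for:  mov eax, 42 ; ret  */
        unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

        void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(mem, code, sizeof code);          /* "type it in" by hand  */

        int (*fn)(void) = (int (*)(void))mem;    /* jump to the raw bytes */
        printf("%d\n", fn());                    /* prints 42             */

        munmap(mem, 4096);
        return 0;
    }

The bytes were simply looked up in the processor documentation - the same thing you could in principle do with the Sharc instruction set reference, which is the whole point about compilers being a convenience rather than a requirement.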

With the SDK, when you load a module, it also pretty much instantly gets loaded on the DSPs the second it appears on your screen. When you connect 2 modules together graphically, it also gets done on the DSPs.

You can easily test this out for yourself (if you have the SDK =] if you don't, you likely shouldn't be arguing about it.) There's no need to optimize anything or press any button to transfer the DSP code to the DSPs; as far as any human is concerned, this is done instantly, and can be modified pretty much in real time.

The SDK/Scope graphical interface "runs" on the Intel CPU, while the "signal processing" stuff "runs" on the DSPs. So on a purely technical basis, it runs on both. Saying it runs on the DSPs IS proper, because part of it (the application-critical part of it, too) DOES run on the DSPs. Your OS/CPU can't do squat with the SHARC atoms.

This being said, there are also modules in the SDK that will run on the PC and not the DSPs. If you open the little Help/? window and hover your mouse over a module, it'll tell you whether the module runs on the PC or the DSPs.
astroman
Posts: 8452
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

On 2006-05-13 00:41, symbiote wrote:
Loading and sequencing "pre-compiled atoms" still totally qualifies as programming, however you would like to interpret the word...
never questioned that
...
Programming absolutely DOES NOT require any sort of compiling. Compilers are just a handy tool...
as is the SDK
...
With the SDK, when you load a module, it also pretty much instantly gets loaded on the DSPs the second it appears on your screen. When you connect 2 modules together graphically, it also gets done on the DSPs...

...or press any button to transfer the DSP code to the DSPs, as far as any human is concerned, this is done instantly, and can be modified pretty much real-time...
that's why I compared it with an interpreter or a JIT compiler.
... Saying it runs on the DSPs IS proper, because part of it (the application-critical part of it, too) DOES run on the DSPs. ...
another chicken-and-egg problem? :grin:
If you weigh the importance of the result higher than the process of constructing it, you could see it this way.
From the point of view of control flow, the Intel code is the layer that holds it all together. It always comes earlier, even if the loading process is 'pretty much realtime', as you call it.

my point was only to clarify that the DSP card isn't a computer within the computer, as it sometimes seems to be described - and imho my description of the design process is fairly accurate.

cheers, tom

[ This Message was edited by: astroman on 2006-05-13 02:28 ]
alfonso
Posts: 2225
Joined: Sun Mar 25, 2001 4:00 pm
Location: Fregene.

Post by alfonso »

On 2006-05-13 02:27, astroman wrote:


another hen-egg-problem ? :grin:
if you weigh the importance of the result higher than the process of constructing it, you could see it this way.
From the point of control flow the Intel code is the layer that keeps it together. It's always earlier, even if the loading process is 'pretty much realtime' as you call it.
That's not correct. The movement of the mouse on a knob doesn't mean anything by itself; you can do it in a graphic app of any kind and it won't generate any code. The "process" is what "runs" on the "processor", and the code gets real only in the dsp's.

The way you put it is the same as saying that the (analog) photo is in the subject and the lenses, while the photo exists only as the result of chemical reactions on the light-sensitive material it's impressed on.

The real object (dsp device, connection) appears in the sharcs, it runs in the sharcs, it's a state of the sharc processors. If the code is a math calculation, this calculation happens in the sharcs and nowhere else. You can draw an airplane project on paper, but if you sit on that paper it won't bring you anywhere. If you want to make those abstractions real and happening, you have to get the real airplane; that's the only way to "run" that airplane.

So real is this that you can say the devices "are" in the sharcs: you get phase issues because of code being displaced onto different sharcs, and FleXor took care of this issue because it's loaded with a "load on single chip" attribute.
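As an aside on that phase point, here's a minimal sketch of why it matters, assuming (my assumption, purely for illustration - not a documented figure) that hopping to another Sharc costs one extra sample of delay: mix the delayed branch back with the direct one and the parallel path turns into a comb filter.

    /* Why splitting a parallel path across DSPs can cause phase issues.
     * Assumption (mine, for illustration): the branch that crosses to
     * another DSP arrives one sample late. Summing it with the direct
     * branch gives the comb-filter response computed below. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double fs = 44100.0;                      /* sample rate */
        for (double f = 2756.25; f <= 22050.0; f *= 2.0) {
            /* |0.5 * (1 + e^{-j*2*pi*f/fs})| = 0.5 * sqrt(2 + 2*cos(w)) */
            double w = 2.0 * M_PI * f / fs;
            double gain = 0.5 * sqrt(2.0 + 2.0 * cos(w));
            printf("%8.2f Hz -> gain %.3f\n", f, gain);
        }
        return 0;
    }

By 22050 Hz the two branches cancel completely - which is presumably why a "load on single chip" attribute is worth having for something as modular as FleXor.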

If "running" is logically connected with "processing" and this is out of doubt, the largest part (critical has been said) of SDK and Scope codes "runs" on sharcs. Graphics are just processing the user interface and not the devices. The Cpu and the system manage the connection between the interface, the card and the other apps, but nothing that is dsp code "runs" even remotely on the cpu.

:smile:
Shroomz~>
Posts: 5669
Joined: Wed Feb 23, 2005 4:00 pm
Location: The Blue Shadows

Post by Shroomz~> »

[ This Message was edited by: Shroomz on 2006-08-24 06:37 ]
astroman
Posts: 8452
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

On 2006-05-13 05:29, alfonso wrote:
...The real object (dsp device, connection) appears in the sharcs, it runs in the sharcs, it's a state of the sharc processors. If the code is a math calculation, this calculation happens in the sharcs and nowhere else. ...
yes it does - and there's not a single line of doubt by me about this fact.
You may have forgotten, amid all this what-runs-on-what, that the original question was about the SDK as a development tool - about its main purpose, which is to stick something together that's usable in a certain context.
That the SDK executes this stuff immediately doesn't change the sequence in which things are run.

There is no part in SDK that builds logic or dataflow based on Sharc code execution, let alone graphically interactive.

Of course you could use the SDK just as a mega routing window, but its purpose is the development and testing of devices.
And I've been writing only about this process.

I really don't see why it's so difficult to understand - it's rather trivial and in no way intended to devalue anything.
The Intel code puts the sh*t together and the Sharcs execute it a few microseconds (or whatever) later.
If it were different, the SDK would also run on Macs, because it wouldn't need the host CPU. Even the GUI lib wouldn't stand in the way, as it's cross-platform :wink:

sorry Shroomz, I'm not a gambler - though I would have liked to increase CWA's business.
Btw you should improve your reading - I addressed YOU with the free plugin originally, and that was absolutely serious :grin:

cheers, Tom
Shroomz~>
Posts: 5669
Joined: Wed Feb 23, 2005 4:00 pm
Location: The Blue Shadows

Post by Shroomz~> »

[ This Message was edited by: Shroomz on 2006-08-24 06:38 ]
astroman
Posts: 8452
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

Liquid Len described the SDK's mode of operation
Alfonso quoted an ambiguous statement from CWA's page
you seemed to be very interested in clarifying this information '...just for anyone who doesn't'
so what ?

this isn't about someone being right or wrong, but about a documented technical process
btw I've collected all the information I could get about DP years before I bought my first Pulsar, because I'm highly interested in this type of tool

cheers, Tom
On 2006-05-10 16:36, Liquid Len wrote:
... From what I can tell, the SDK runs on an Intel, 'compiles' modules for the Sharc chips (takes the graphic routing information and all the properties of the modules and distill them into some format that can be executed on the sharc chips) and once they're compiled, load them onto the card's onboard memory (I would guess) where the Sharc chips can then 'get at them' to run them. In other words - the SDK is a program that runs on an Intel that creates programs for another processor (Sharc). It doesn't run on the Sharcs, the programs it creates DO...
On 2006-05-11 06:15, Shroomz wrote:
On 2006-05-10 15:51, alfonso wrote:
Sorry, but SDK just loads it's modules on dsp exactly as Scope. It only has a different way to manage them and different functionalities. Or do I miss something?
...Alfonso, can you confirm or prove this? I believe you know what you're talking about, but just for anyone who doesn't, any chance you can prove this? ...
On 2006-05-11 17:15, alfonso wrote:
...CWA site SDK page, scroll down a little:
http://www.cwaudio.de/page.php?seite=sdk40pr&lang=en
...
symbiote
Posts: 781
Joined: Sat May 01, 2004 4:00 pm

Post by symbiote »

enculage de mouche (hair-splitting) ftw!
marcuspocus
Posts: 2310
Joined: Sun Mar 25, 2001 4:00 pm
Location: Canada/France

Post by marcuspocus »

zobi la mouche....


rofl! :grin:
astroman
Posts: 8452
Joined: Fri Feb 08, 2002 4:00 pm
Location: Germany

Post by astroman »

especially your contribution, Symbiote :wink:

thanks a lot, Tom
alfonso
Posts: 2225
Joined: Sun Mar 25, 2001 4:00 pm
Location: Fregene.

Post by alfonso »

On 2006-05-13 13:23, astroman wrote:


That the SDK executes this stuff immediately doesn't change the sequence in which things are run.
......
The Intel code puts the sh*t together and the Sharcs execute it a few microseconds (or whatever) later.


cheers, Tom
As I said before, we just used two different meanings for the word "run". I thought of it as the processing needed to execute the code; you used it to indicate the setting up of the instructions to be executed.
It's only a matter of which part of the work is to be considered more important... well, the truth is that, even if the processing work done by the sharcs is infinitely heavier and predominant in the sense of power involved, both components are essential to the process. But this was clear to me from the first moment.

:smile: