On 2003-10-05 06:24, King of Snake wrote:
...If there really was no limit then there would be no difference because I'm sure there are programmers in the native world that are on par with CW's programmers.
Yes, that's certainly true - but programmers with that amount of talent, who are able to write a low-level (native) math library of that quality, are an extremely rare species.
While CW can start with the building blocks from Analog Devices, those dudes would have to start from square one.
That's why I called it theoretical. It's a totally unrealistic assumption, considering that a real product has a deadline and, of course, limited financial resources.
In the earlier years, when CPUs had a much simpler structure, you frequently found programmers (still a minority, though) able to code at the machine language level from scratch.
Remember all those famous C64 games: 8 bit, 1 MHz, but with astonishing performance.
Later, with more complex CPUs, one usually let a compiler do the dirty work and generate the object code only as a first stage.
This output was disassembled again, the critical routines were extracted and 'hand optimized', then assembled again to replace the original machine-generated output.
One could expect a 5 to 10 times better performance from this procedure.
In other words: you could spend that headroom on either faster or more sophisticated processing, whichever applied more.
Or in today's measures: instead of a 1 GHz machine you effectively get a 5 to 10 GHz machine.
I'm actually not in that kind of business, but judging from the compiler output (the object code) of current apps like the Reactor family of products or even simple office applications, you can guess that not a single bit is hand optimized.
1 kilobyte of object code results in roughly 10-20 KB of assembler source - now go check your native apps for size.
cheers, Tom