
Translation of the DX9.1 article?


mapel110
2004-01-24, 04:47:36
Has anyone thought about this yet?
If not, I'll sit down and do it this weekend.

P.S.: What I just had to read at AnandTech again nearly brought tears to my eyes.

"I've read the latest rumors on FP32 support in R420, but I don't see how that's any major news. As far as I know, the DirectX 9.1 spec (with pixel shader 3.0 support) calls for full precision to be 32-bit floating point, so if ATI wants to be DX9.1 compliant they will have fp32 support."

It was also linked on rage3d.com, after all.

Leonidas
2004-01-24, 15:46:10
That's exactly why we should translate it. All the more so since several bigger articles are coming up soon.

KillerCookie
2004-01-24, 17:05:01
Hello,
yes, the article is urgently needed, since there are a lot of people who believe that DX9.1 is really coming...

So, here's the thing:
@mapel110
Would you do the article on your own? Everyone else can help with the masks article for now... (the 9.1 article is only one page, after all).

So, if mapel doesn't want to: who would like to?

Regards, Maik

mapel110
2004-01-24, 20:27:48
Ehh, I'm pretty much already getting started :D
I'll slog through it tonight.

mapel110
2004-01-25, 01:41:30
When will DirectX 9.1 arrive?

To mention it from the start: DirectX9.1 will most likely never be released. but there is still everywhere the rumour, that DirectX 9.1 will boost Nvidia cards performance up to 60 %. This rumour we will now analyze.

At first the question: Would it be possible? Clear answer: No. The pixelshader of the FX series has quite a lot more features than the hardware of the actual Radeon-hardware, but it does less operations per clock cycle. If the shader-program is coded in a radeon-optimized format, the Cine-FX-Architecture will additionally be slowed down. The way instructions are coded on the Radeons causes a compulsory break on the Nvidia-hardware. The Radeon are not so susceptible for that, admittedly it's because of the simpler shader-technics.

To make the Cine-FX-Functionality powerful at all, Nvidia had to make some compromises. If you keep strait to the optimizing rules for FX-shader, you earn in some cases more performance than on Radeon-hardware. Outside of special cases, the Radeon stays on top of overall performance, Nvidia can only compensate it with higher clockspeeds.

A shader-program has its specific code. A DirectX 9.1 can't do anything about that. Shader-programs are nowadays likely written in a "high level language" (HLSL). In the DirectX-Development-Tools a compiler translates the program into the "pixelshader-language". Meanwhile there is an shader-profile called 2_A (which is optimized for the 2_X shaders, and thus for geforce FX) and of course further on the profile for 2_0 shader.

That means the additional work for developers keep within a limit. The same source code has to be compiled twice and the game has to detect which graphics card is in the computer to pick the best shader code. The actual DirectX-Development-Tools deliver functions to proof which profile is the best for the running hardware.

The developers normally don't deliver the pure HLSL code to the DirectX-runtime. If they did so, a new DirectX would be able to consider and to accelerate new hardware. Unfortunately it is not that way. Developers don't like to show their work. So DirectX has to deal with the compilation.

Nvidia is knowing the difficulty that almost all older shaders are written for Radeon-hardware. That was not an evil intention of the developers. ATI was just the first, delivering a DirectX9-compliant Hardware-accelerator (Radeon 9700 pro, long before Geforce 5800 ultra) and also delivering optimization recommendations for their chip.

To bear two things in mind: Performance is better on Radeons. The Geforce-chips have less raw-power and are also susceptible for efficient collapses when Cine-FX-Recommendations are violated. Radeons are quite egged because of the very simple optimization recommendations (and the fact, that some things are common with the pixelshader 1.4). If the Geforce-FX-shader now become 60% faster they would get ahead of the Radeons. But that can not be truth because the raw power of the Geforce is less.

To explain where the 60 % came from: Nvidia delivers a document for the "Unified Shader Compiler":

"The NVIDIA unified compiler technology efficiently translates these operands into the order that maximizes execution on NVIDIA GPUs—texture, texture, math, math. This one compiler feature can deliver a 60 percent performance improvement for DirectX 9 applications, and points out how a minor programming difference can result in significant performance impact on programmable GPUs."

This is a new driver feature in the detonator 50 and above (now called forceware). Radeons catalyst also optimizes shader programs. For Geforce-FX-cards it is quite more important. And indeed the new driver delivers about 10 to 20 percent more performance in DirectX9 games without reducing the image quality. In synthetic clean shader benchmarks you can achieve 100 percent more performance (depends on optimization of the raw material). 100 % more performance compared to the old driver, not compared to the Radeons. Also keep in mind, that this driver doesn't need DirectX9.1. It's fully functional with DirectX9.0.

Pixelshader 2.0 and 3.0 are defined in DirectX9.0. Shader 4.0 will be provided in DirectX next (DirectX 10). So there is no need for an "interim version". Extended features will be supported by caps and not by new shader-versions. Of course the development tools of DirectX get frequently updates. So for developers there could be another build of DirectX, but for the end consumer it changes nothing even if there will be a DirectX9.1.

We fear, that a lot of manufacturers will come up with the idea to promote the shader 3.0 support as directx9.1 compliant, like also seen before in quite a lot roadmaps. Somebody associated directx9.1 with 60 % performance gains and spreaded this message. Unfortunately lots of online and offline magazines wrote and write this stuff unproved and without any background knowledge. There are quite some information about DirectX next, but Microsoft never said a word of releasing a DirectX9.1 runtime.

Geforce-FX-user have to bury the hope for 60 % more performance, because the performance has to be available in hardware. There is a compiler profile for 2_X and additionally the recompiler in the forceware 50 drivers do what it can. In our opinion, Geforce and Radeon are egged in case of shader performance.

Unless Nvidia improves the recompiler and uses FX12 for directx-pixelshader. FX12 calculating power is available in raw masses. Admittedly it's not easy to use FX12 when FP24 precision is recommended. directx does not permit it also.

Albeit, you can assume that big performance gains of the actual Radeon and Geforce architecture will not come through drivers or even out of an new DirectX-Release. Both manufacturers support single games and developers (quite time and work intensive) because it's the only way to get more performance.

Thanks for demirug for giving us background information about this stuff.


DONE :bounce: :smokin:

GloomY
2004-01-25, 11:46:33
Here are a few suggestions for improvement:
Originally posted by mapel110
When will DirectX 9.1 arrive?

To mention it from the start: DirectX9.1 will most likely never be released. But there ist the persistent rumour (no comma) that DirectX 9.1 will boost performance for nVidia cards up to 60% (no space). We will now analyse this rumour.

First of all: Would this be possible? Clear answer: No. Though the pixelshader of the FX series has quite a lot more functionality ("features" is more like "equipment"; "functionality" fits better here imho) than the hardware of the current Radeons (not "hardware" twice), but it executes (or: is only capable of executing) less operations per clock cycle. If the shader program (two words) was coded in a radeon-optimized format, the CineFX architecture will additionally be slowed down. The way instructions are coded in favour of the Radeons causes execution interruptions on NVidia hardware. The Radeons are not so vulnerable, admittedly it's because of their simpler shader implementation. And on we go: ;)
Original geschrieben von mapel110
In order to make the CineFX functionality powerful at all, NVidia had to make trade-offs for the design. Complying with the complex optimization rules for FX shaders, you may in some cases earn more performance than on Radeon hardware. Apart from special cases, the Radeon stays on top regarding overall performance. NVidia can only compensate (no "it") with higher clockspeeds.

A shader program to be executed by graphic cards exists the way it is. DirectX 9.1 can't do anything about that. Shader programs are nowadays likely to be written in a "high level shading language" (HLSL) (if you mention it at all, then spell the abbreviation out correctly). With help of the DirectX Development Kit a compiler translates the program into the "pixelshader language" (no hyphen). Meanwhile, there is a shader profile called 2_A available (which is optimized for the 2_X shaders and thus for Geforce FX) and of course there is the 2_0 shader profile.

That means that the additional work for developers supporting two architectures is kept within a limit. The same source code has to be compiled twice and the game has to detect which graphics card is in the computer in order to select the best shader code. The current DirectX Development Environment (Development Kit?) provides functions to determine which profile is the best for the running hardware. Phew, :flöt: taking a break first :)

-|NKA|- Bibo1
2004-01-25, 14:49:21
I took the liberty of doing a copy-and-paste first:
When will DirectX 9.1 arrive?

First: DirectX9.1 will most likely never be released. but still the rumour persists that DirectX 9.1 will boost Nvidia cards performance by up to 60 %. We will now try to analyze whether there is any truth in this assumption.

First: Is this possible? Clear answer: No. The pixelshader of the FX series has quite a lot more features than the hardware of the actual Radeon, but it does less operations per clock cycle. If the shader-program is coded in a radeon-optimized format, the Cine-FX-Architecture will additionally be slowed down. The way instructions are coded on the Radeons causes a compulsory break on the Nvidia-hardware. The Radeons are not so susceptible to this, because of the simpler shader-techniques implemented. :???: I don't quite understand the intended meaning.

To make the Cine-FX-Functionality deliver an acceptable performance at all, Nvidia had to make some compromises. If you stick to the optimizing rules for the FX-shader, in some cases you gain more performance than on Radeon-hardware. However, apart from these special cases, the Radeon stays on top of overall performance; Nvidia can only compensate by using a higher clockspeed.

A shader-program has its specific code. A DirectX 9.1 can't do anything about that. Shader-programs are nowadays likely written in a "high level language" (HLSL). In the DirectX development tools a compiler translates the program into the "pixelshader-language". Meanwhile there is a shader-profile called 2_A (which is optimized for the 2_X shaders, and thus for geforce FX) and of course further on the profile for 2_0 shader.

That means the additional work for developers is kept within limits. The same source code has to be compiled twice and the game has to detect which graphics card is in the computer to pick the best shader code. The actual DirectX development tools deliver functions to check which profile is the best for the hardware.

The developers normally don't deliver the pure HLSL code to the DirectX-runtime. If they did so, a new DirectX would be able to consider and to accelerate new hardware. Unfortunately it doesn't work that way. Developers don't like to show their work. So DirectX has to deal with the compilation.

Nvidia knows about the problem that almost all older shaders are written for Radeon hardware. That was not an evil intention of the game-developers. ATI was just the first, delivering a DirectX9-compliant Hardware-accelerator (Radeon 9700 pro, long before Geforce 5800 ultra) and also delivering optimization recommendations for their chip.

To bear two things in mind: Performance is better on Radeons. The Geforce-chips have less raw-power and are also susceptible for efficiency collapses when Cine-FX-Recommendations are violated. Radeons are quite egged :???: what was meant? because of the very simple optimization recommendations (and the fact, that some things are common with pixelshader 1.4). If the Geforce-FX-shader now becomes 60% faster they would get ahead of the Radeons. But that can not be truth because the raw power of the Geforce is less.

To explain where the 60 % came from: Nvidia delivers a document for the "Unified Shader Compiler":

"The NVIDIA unified compiler technology efficiently translates these operands into the order that maximizes execution on NVIDIA GPUs—texture, texture, math, math. This one compiler feature can deliver a 60 percent performance improvement for DirectX 9 applications, and points out how a minor programming difference can result in significant performance impact on programmable GPUs."

This is a new driver feature in the detonator 50 and above (now called forceware). Radeons catalyst also optimizes shader programs. For Geforce-FX-cards it is quite more important. And indeed, the new driver delivers about 10 to 20 percent more performance in DirectX9 games without reducing the image quality. In synthetic clean shader benchmarks you can achieve 100 percent more performance (dependent on optimization of the raw material). 100 % more performance compared to the old driver, not compared to the Radeons. Also keep in mind, that this driver doesn't need DirectX9.1. It's fully functional with DirectX9.0.

Pixelshader 2.0 and 3.0 are defined in DirectX9.0. Shader 4.0 will be provided in DirectX next (DirectX 10). So there is no need for an "interim version". Extended features will be supported by caps and not by new shader-versions. Of course the development tools of DirectX get frequent updates. So, for developers there could be another build of DirectX, but for the end consumer it changes nothing even if there will be an DirectX9.1.

We fear, that a lot of manufacturers will come up with the idea to promote the shader 3.0 support as directx9.1 compliant, like also seen before in quite a lot roadmaps. Somebody associated directx9.1 with 60 % performance gains and spreaded the message. Unfortunately, lots of online and offline magazines wrote and write this stuff unproved and without any background knowledge. There are quite some information about DirectX next, but Microsoft never said a word of releasing a DirectX9.1 runtime.

GeforceFX-users have to bury the hope of 60 % more performance, because the performance has to be available in hardware. There is a compiler profile for 2_X and additionally the recompiler in the forceware 50 drivers do what it can. In our opinion, Geforce and Radeon are egged :-/in case of shader performance.

Unless Nvidia improves the recompiler and uses FX12 for directx-pixelshader. FX12 calculating power is available in raw masses. Admittedly it's not easy to use FX12 when FP24 precision is recommended. directx does not permit it also.

Altogether, you can assume that big performance gains of the actual Radeon and Geforce architectures will not come through drivers or even out of a new DirectX-Release. Both manufacturers spend a lot of time, effort, manpower and subsequently money supporting a lot of games and developers because it's the only way to get more performance.

Thanks for demirug for giving us background information about this stuff.

So, I think that's it. Wherever there's a question smiley, the intention of the original (German) article wasn't quite clear to me. I don't have it in my head either. Please clarify.
Regards - Bibo1

GloomY
2004-01-25, 15:28:26
Originally posted by -|NKA|- Bibo1
I don't quite understand the intended meaning. Have a look at my suggestion.
Originally posted by -|NKA|- Bibo1
What was meant? The original says: "Die Radeons sind beinahe ausgereizt" ("the Radeons are nearly maxed out"). I would translate it as "maxed out".

I'll just keep going, since imho there are a few more things to fix.

Originally posted by Mapel110
Developers (no "the") normally don't deliver pure HLSL code to the DirectX-runtime. (With indefinite things like code in general, i.e. when no specific piece of code is meant, the definite article is dropped.) If they did so, an updated DirectX would be able to consider and to accelerate new hardware. Unfortunately, that is not the case. Developers don't like unveiling their work to others. So DirectX has to deal with the finished compilation binaries.

Nvidia is aware of the difficulty that virtually all older shaders are optimized (mind the original!) for Radeon hardware. That was not an evil intention of the developers. ATI was just the first delivering a DirectX9 compliant hardware accelerator (Radeon 9700 /Pro) and also delivering optimization recommendations for their chips (long before Geforce FX 5800 Ultra). Mapel, you're changing the meaning here. The "long before GF5800 Ultra" clearly refers to the release of the optimization recommendations, not to the appearance of the first DX9-compliant hardware.
Read the original again!
Original geschrieben von Mapel110
Let's bear two things in mind: Performance is better on Radeons. The Geforce chips have less raw-power and are also vulnerable to efficiency drops when CineFX recommendations are violated. The point is: Radeons are nearly maxed out because of the very conceptional optimization recommendations (and the fact that some things which were common practice with pixelshader 1.4 can also be found at 2.0). If the Geforce FX shaders now became 60% faster they would get ahead of the Radeons. But that cannot be because the base performance of the Geforce is lower.

To explain where the 60 % came from: Nvidia delivers a document for the "Unified Shader Compiler":

"The NVIDIA unified compiler technology efficiently translates these operands into the order that maximizes execution on NVIDIA GPUs—texture, texture, math, math. This one compiler feature can deliver a 60 percent performance improvement for DirectX 9 applications, and points out how a minor programming difference can result in significant performance impact on programmable GPUs."

This is about a new driver feature in the Detonator 50 (now called ForceWare) and above. Radeon's catalyst driver also optimizes shader programs. For GeforceFX cards it is quite more important. And indeed the new driver delivers about 10 to 20 percent more performance in DirectX9 games without influencing image quality. In clean synthetic shader benchmarks you can achieve 100 percent more performance (depends on optimization of the "raw material"). 100 % more performance compared to the old driver, not compared to the Radeons. Also keep in mind, that this driver doesn't need DirectX9.1. It's fully functional with DirectX9.0.

Pixelshader 2.0 and 3.0 are defined within DirectX9.0. Shader 4.0 will be provided in "DirectX Next" (DirectX 10). There is simply no need for an "in-between version". Extended features will be supported by caps and not by new shader versions. Of course the development tools of DirectX are frequently updated. So for developers there could be another build of DirectX, but for the end consumer it changes nothing even if there will be a DirectX9.1.

mapel110
2004-01-26, 01:11:38
Originally posted by GloomY
Mapel, you're changing the meaning here. The "long before GF5800 Ultra" clearly refers to the release of the optimization recommendations, not to the appearance of the first DX9-compliant hardware.
Read the original again!

"Nicht aus böser Absicht, sondern schlicht weil ATI mit der Radeon 9700 /Pro seinerzeit die allererste DirectX9-Hardware bot"
:???:



"und seinerzeit (und lange vor der GeForceFX 5800 /Ultra) entsprechend auf Radeon zugeschnittene Optimierungs-Empfehlungen herausgab."

But truthfully it applies to both: hardware and optimization recommendations. If you take it very strictly, then you're right, but my statement isn't wrong. :)

GloomY
2004-01-26, 10:55:03
Originally posted by mapel110
"Not out of malice, but simply because ATI, with the Radeon 9700 /Pro, offered the very first DirectX9 hardware at the time"
:???:



"und seinerzeit (und lange vor der GeForceFX 5800 /Ultra) entsprechend auf Radeon zugeschnittene Optimierungs-Empfehlungen herausgab."

But truthfully it applies to both: hardware and optimization recommendations. That's beside the point. The author wrote only the former. It's not about writing what the translator imagines, but what is actually there.
Originally posted by mapel110
If you take it very strictly, then you're right, but my statement isn't wrong. :) It is wrong; as you correctly showed above, there are two parts. The first says that ATi had the first DX9 hardware. That's reason number one. Reason number two comes in the second part (both parts clearly separated by "and"): the release of optimization recommendations. And that is the context of the parenthesis which says that this happened long before the Geforce 5800U. That parenthesis has nothing to do with the availability of DX9 hardware, which is what your translation suggests.

Even if Leonidas had actually meant it differently from what he wrote, we as translators must not simply change it. The text is to be translated, not interpreted. The meaning must not be altered, even if the change might be correct anyway (imho ATi did bring out the first DX9 graphics cards before nV).

Originally posted by mapel110
We fear, that a lot of manufacturers will come up with the idea to promote the shader 3.0 support as DirectX9.1 compliant, as already seen in the near past on quite a lot of graphic chip roadmaps. Somebody associated DirectX9.1 with 60% performance gain and spread (to spread, spread, spread; not "spreaded") this mazy message. Unfortunately, lots of online and print magazines copied and still copy this message unexamined. There is already some solid information about DirectX Next available, but Microsoft never said a word about releasing a DirectX9.1 runtime.

GeforceFX user have to bury the hope for 60% more performance, because the performance has to be available in hardware first. There is a compiler profile for 2_X in the DirectX9 development tools and additionally the recompiler in the Forceware 50 driver does whatever it can. In our opinion Geforce and Radeons can be regarded as virtually maxed out concerning shader performance.

Unless Nvidia improves the recompiler considerably and dares using FX12 hardware for DirectX pixelshader. FX12 calculating power is available in raw masses. Admittedly it is not easy to use FX12 when FP24 precision is required ("recommended" would be "empfohlen"). It is also not permitted by DirectX.

aths
2004-01-27, 21:34:38
Originally posted by GloomY
It is wrong; as you correctly showed above, there are two parts. The first says that ATi had the first DX9 hardware. That's reason number one. Reason number two comes in the second part (both parts clearly separated by "and"): the release of optimization recommendations. What's meant is: the first DX9 hardware came from ATI. Implicitly: when a developer optimizes for performance, he tailors the shaders to the Radeon.

Secondly, concrete optimization recommendations came first from ATI, tailored to the Radeon. Thirdly, Nvidia had no chance here, since they had no hardware to offer. A developer would not have had to wait just two weeks for the FX but quite a few months; in that time he got used to the Radeon's peculiarities. The reader is meant to conclude: the first shader benchmarks are, in this respect, completely pointless.
Originally posted by GloomY
"Even if Leonidas" aths :kicher:

GloomY
2004-01-29, 17:41:22
And the rest:
Originally posted by mapel110
Albeit, you can assume that big performance gains of the current Radeon and Geforce architectures are not possible neither by drivers nor by new DirectX releases. That's why both manufacturers support more and more individual game projects and game developers because more performance on the current hardware can only be achieved by optimizing individual games (quite time and work intensive).

Thanks to Demirug for providing us with background information about this subject.
Originally posted by aths
What's meant is: the first DX9 hardware came from ATI. Implicitly: when a developer optimizes for performance, he tailors the shaders to the Radeon.

Secondly, concrete optimization recommendations came first from ATI, tailored to the Radeon. Content-wise, that much is clear to me.
Originally posted by aths
Thirdly, Nvidia had no chance here, since they had no hardware to offer. It's just that the text doesn't say that, which is why I "refuse" to translate it that way ;)

As already mentioned above, the parenthesis with the statement "and long before the GeForceFX 5800 /Ultra" clearly refers to the release of the optimization recommendations. I don't dispute at all that this is of course also connected to the fact that nV had no hardware to offer. It just isn't what the original says.
Originally posted by aths
A developer would not have had to wait just two weeks for the FX but quite a few months; in that time he got used to the Radeon's peculiarities. The reader is meant to conclude: the first shader benchmarks are, in this respect, completely pointless. Sure.
Originally posted by aths
aths :kicher: Hmm, I see ;)

GloomY
2004-01-29, 18:10:52
Now the whole thing at a glance. Suggestions are still welcome:

When will DirectX 9.1 arrive?

To make it clear right from the start, DirectX9.1 will most likely never be released. But there is the persistent rumour that DirectX 9.1 will boost performance for Nvidia cards by up to 60%. We will now analyse this rumour.

First of all: Would this be possible? Clear answer: No. Though the pixelshader of the FX series offers quite a lot more functionality than that of the current Radeons, it is capable of executing fewer operations per clock cycle. If the shader program is coded in a Radeon-optimized format, the CineFX architecture is slowed down even further. The way instructions are ordered in favour of the Radeons causes execution interruptions on Nvidia hardware. The Radeons are not as vulnerable to this, though admittedly because of their simpler shader implementation.

In order to make the CineFX functionality perform at all, Nvidia had to make design trade-offs. By complying with the complex optimization rules for FX shaders, you may in some cases gain more performance than on Radeon hardware. Apart from such special cases, the Radeon stays on top regarding overall performance; Nvidia can only compensate with higher clock speeds.

A shader program to be executed by the graphics card exists the way it is; DirectX 9.1 can't do anything about that. Shader programs are nowadays typically written in a "high level shading language" (HLSL). With the help of the DirectX Development Kit, a compiler translates the program into the "pixelshader language". Meanwhile, there is a shader profile called 2_A available (which is optimized for 2_X shaders and thus for the Geforce FX) and of course there is the 2_0 shader profile.

That means that the additional work for developers supporting two architectures is kept within limits. The same source code has to be compiled twice, and the game has to detect which graphics card is in the computer in order to select the best shader code. The current DirectX development environment provides functions to determine which profile is best for the hardware at hand.
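As a minimal sketch of that detection idea (assuming a working IDirect3DDevice9 pointer; the helper name and the "main" entry point are made up), the two D3DX functions from the DirectX 9 SDK could be used like this:

```cpp
#include <d3dx9.h>   // D3DXGetPixelShaderProfile, D3DXCompileShader (DirectX 9 SDK)
#include <string.h>

// Hypothetical helper: compile one HLSL source with the profile
// that D3DX considers best for the installed card.
LPD3DXBUFFER CompileForThisCard(IDirect3DDevice9* pDevice, const char* pszHlsl)
{
    // Returns e.g. "ps_2_0" on a Radeon 9700 and "ps_2_a"
    // (the GeforceFX-friendly profile) on an FX card.
    const char* pszProfile = D3DXGetPixelShaderProfile(pDevice);

    LPD3DXBUFFER pShader = NULL;
    LPD3DXBUFFER pErrors = NULL;
    HRESULT hr = D3DXCompileShader(pszHlsl, (UINT)strlen(pszHlsl),
                                   NULL, NULL,          // no macros, no include handler
                                   "main", pszProfile,  // same source, per-card profile
                                   0, &pShader, &pErrors, NULL);
    if (pErrors != NULL)
        pErrors->Release();                 // compiler messages, not needed here
    return SUCCEEDED(hr) ? pShader : NULL;  // token stream for CreatePixelShader()
}
```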

Developers usually don't deliver pure HLSL code to the DirectX runtime. If they did so, an updated DirectX would be able to consider and to accelerate new hardware. Unfortunately, that is not the case. Developers don't like unveiling their work to others. So DirectX has to deal with the finished compilation binaries.

Nvidia is aware of the difficulty that virtually all older shaders are optimized for Radeon hardware. That was not an evil intention of the developers. ATI was simply the first to deliver a DirectX9-compliant hardware accelerator (Radeon 9700 /Pro) and also the first to issue optimization recommendations for their chips (long before the Geforce FX 5800 Ultra).

Let's bear two things in mind: Performance is better on Radeons. The Geforce chips have less raw power and are also vulnerable to efficiency drops when CineFX recommendations are violated. The point is: Radeons are nearly maxed out because of the very conceptional optimization recommendations (and the fact that some things which were common practice with pixelshader 1.4 can also be found in 2.0). If the Geforce FX shaders now became 60% faster, they would get ahead of the Radeons. But that cannot be, because the base performance of the Geforce is lower.

To explain where the 60% figure comes from: Nvidia provides a document on the "Unified Shader Compiler":

"The NVIDIA unified compiler technology efficiently translates these operands into the order that maximizes execution on NVIDIA GPUs—texture, texture, math, math. This one compiler feature can deliver a 60 percent performance improvement for DirectX 9 applications, and points out how a minor programming difference can result in significant performance impact on programmable GPUs."

This is about a new driver feature in the Detonator 50 (now called ForceWare) and above. The Radeons' Catalyst driver also optimizes shader programs; for GeforceFX cards it is just far more important. And indeed the new driver delivers about 10 to 20 percent more performance in DirectX9 games without affecting image quality. In clean synthetic shader benchmarks you can achieve up to 100 percent more performance (depending on how well optimized the "raw material" is): 100% more compared to the old driver, not compared to the Radeons. Also keep in mind that this driver doesn't need DirectX9.1. It's fully functional with DirectX9.0.

Pixelshader 2.0 and 3.0 are defined within DirectX9.0. Shader 4.0 will be provided in "DirectX Next" (DirectX 10). There is simply no need for an "in-between version". Extended features will be supported by caps and not by new shader versions. Of course the development tools of DirectX are frequently updated, so for developers there could be another build of DirectX, but for the end consumer nothing changes even if a "DirectX9.1" were to appear.
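A small sketch of what "supported by caps" means in practice (the helper name is made up): shader 3.0 support is announced through the device caps of the existing DirectX 9.0 runtime, so detecting it needs no new runtime version.

```cpp
#include <d3d9.h>

// Checks via the DirectX 9.0 caps whether the installed card
// exposes pixel shader 3.0; no new runtime version is involved.
bool SupportsPixelShader30(IDirect3DDevice9* pDevice)
{
    D3DCAPS9 caps;
    if (FAILED(pDevice->GetDeviceCaps(&caps)))
        return false;
    return caps.PixelShaderVersion >= D3DPS_VERSION(3, 0);
}
```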

We fear that a lot of manufacturers will come up with the idea of promoting shader 3.0 support as DirectX9.1 compliance, as already seen in the recent past on quite a few graphics chip roadmaps. Somebody associated DirectX9.1 with a 60% performance gain and spread this muddled message. Unfortunately, lots of online and print magazines copied and still copy this message unexamined. There is already some solid information about DirectX Next available, but Microsoft has never said a word about releasing a DirectX9.1 runtime.

GeforceFX users have to bury the hope of 60% more performance, because the performance has to be available in hardware first. There is a compiler profile for 2_X in the DirectX9 development tools, and in addition the recompiler in the ForceWare 50 driver does whatever it can. In our opinion, Geforce and Radeon can be regarded as virtually maxed out concerning shader performance.

Unless Nvidia improves the recompiler considerably and dares to use the FX12 hardware for DirectX pixelshaders: raw FX12 computing power is available in abundance. Admittedly, it is not easy to use FX12 where FP24 precision is required, and DirectX does not permit it either.
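For illustration (a sketch with a made-up helper name and entry point): the lowest precision a developer can legally request in DirectX 9 is FP16 "partial precision", for example via the real D3DXSHADER_PARTIALPRECISION compile flag; FX12 lies below even that floor, which is why it cannot be requested at all.

```cpp
#include <d3dx9.h>
#include <string.h>

// Hypothetical helper: compile with the partial-precision hint, which
// allows the driver to use FP16 (or anything more precise) instead of
// full precision. There is no comparable switch for FX12; DirectX 9
// simply offers none.
LPD3DXBUFFER CompileWithPartialPrecision(const char* pszHlsl)
{
    LPD3DXBUFFER pShader = NULL;
    HRESULT hr = D3DXCompileShader(pszHlsl, (UINT)strlen(pszHlsl),
                                   NULL, NULL, "main", "ps_2_0",
                                   D3DXSHADER_PARTIALPRECISION,
                                   &pShader, NULL, NULL);
    return SUCCEEDED(hr) ? pShader : NULL;
}
```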

All in all, you can assume that big performance gains on the current Radeon and Geforce architectures are possible neither through drivers nor through a new DirectX release. That's why both manufacturers support more and more individual game projects and game developers: more performance on the current hardware can only be achieved by optimizing individual games (quite time- and work-intensive).

Thanks to Demirug for providing us with background information about this subject.

mapel110
2004-01-29, 18:23:13
Originally posted by GloomY
When will DirectX 9.1 arrive?

To mention it from the start: DirectX9.1 will most likely never be released. But there is the persistent rumour that DirectX 9.1 will boost performance for nVidia cards up to 60%. We will now analyse this rumour.


:)

Otherwise I quite like it.

GloomY
2004-01-29, 18:42:31
Originally posted by mapel110
:)

Otherwise I quite like it. Hmm,

there is / there are = "es gibt"; "es existiert" (i.e. "there exists")

That fits, doesn't it? :)

aths
2004-01-29, 19:33:24
The English text doesn't say "is" or "are", it says "ist".

GloomY
2004-01-29, 20:27:17
Originally posted by aths
The English text doesn't say "is" or "are", it says "ist". Ah, I'd just overlooked that :bonk: :D

I'll fix it. :)

VoodooJack
2004-01-29, 23:24:43
Gloomy says suggestions are still welcome.

Instead of "To mention it from the start" I would suggest "First off, ...". Somehow sounds more elegant.

GloomY
2004-02-05, 22:21:38
Originally posted by VoodooJack
Gloomy says suggestions are still welcome.

Instead of "To mention it from the start" I would suggest "First off, ...". Somehow sounds more elegant. I have nothing against that expression (because I don't know it ;) ), but one paragraph further down it says "First of all:". The two together don't sound so good.

Maybe someone else will make another suggestion...

VoodooJack
2004-02-06, 02:31:02
Originally posted by GloomY
I have nothing against that expression (because I don't know it ;) ), but one paragraph further down it says "First of all:". The two together don't sound so good.

Maybe someone else will make another suggestion...

True. In such quick succession it doesn't sound good either.

What do you say to this?

To make it clear right from the start, DirectX9.1 will ...


And one more thing: in the 2nd paragraph, 2nd line, the "but" should go.

GloomY
2004-02-06, 03:44:46
Originally posted by VoodooJack
True. In such quick succession it doesn't sound good either.

What do you say to this?

To make it clear right from the start, DirectX9.1 will ... Sounds very good =)
Originally posted by VoodooJack
And one more thing: in the 2nd paragraph, 2nd line, the "but" should go. Yes, that's right, I'll edit it out.

Leonidas
2004-02-19, 21:21:27
Thx!
http://www.3dcenter.org/artikel/2004/01-10_english.php