Jakal said:This chip looks like an excellent addition to aid performance. If it works well, it'll tremendously reduce CPU usage for physical interactions within the game. That'll mean higher framerates and better graphics. I love it!
DarkSynergy said:Anode,
This, I hope, just won't be the case with the PhysX PPU. The approach that nVidia and ATi are taking at the moment is completely different from the direction Ageia is leaning. They want to offload the physics work onto one of the GPUs, but as stated earlier, I spoke with some game designers at the GDC, and they were not too keen on the idea of giving up GPU performance to code for physics; they rather liked the way Ageia is going with a standalone card. I, as a gamer and end user, don't really want to sacrifice any of my SLI performance to incorporate physics. Now, I can understand how nVidia and ATi are justifying this to themselves: not everyone will be able to afford a PPU when they first ship, so the idea of getting a lot of the performance without dropping another $300 on the PhysX card will be appealing. That just isn't the case for me, and it sounds like it isn't for you either. We are tech heads and want what's new while it's still new...
Thanks for your input and feel free to stick around.
Unregistered said:Actually, if I read the article right, Ageia is giving developers the license for free and making the money off the cards!
Yes, dual processors are becoming popular, but games are getting much, much more complex.
I have been jumping around the game forums, and a lot of people are complaining that they want better AI, bigger maps, and more players per map.
And if you go to the Age 3 web forums, there is a thread called "Ask Sandy" (a developer) where this was asked:
16) Will there be much much much larger maps in the expansion pack?
A – only if we find out suddenly that everyone’s computers are much much much more powerful. Unlike my answers, the map sizes were not chosen arbitrarily
http://forum.agecommunity.com/ibb/posts.aspx?postID=179228&postRepeater1-p=3
I will take a separate card!!
Buck-O said:Good grief, do you work for Ageia... or an advertising company that is contracted out by Ageia? I'm sorry, but I see very little relevance to this post, and even less actual information that refutes any of the claims I've made. All I see is happy spin on a product that doesn't live up to the hype its marketing is building. As I said in my initial post, 90% of the unregistered posts in this thread sound fishy, way too positive, and I think they are highly suspect as plants. And I think Rob should do everything in his ability to investigate those posts, because as far as I'm concerned, they are bogus.
But I'll certainly take on every point you've attempted to make.
Great, it's fantastic that they are handing over their physics engine for free. It's about the only way any developer would be stupid enough to code for it. But even then, getting handed an SDK for free probably comes with a lot of marketing dollars behind it. Because, again, the PhysX card does not make sense.
Either way, by virtue of handing out the SDK for free and making such a large public spectacle of it, if a company were to produce a PROPER SMP-based physics engine that could fully utilize a dual core and make better use of resources than a PhysX card could (which, again, wouldn't be difficult given the speed of the PCI bus), Ageia would have a lawsuit all over their asses for patent infringement and copying their intellectual property. So really, in the end, all Ageia will turn out to be is a marketing company with a few big patents for a technology that is outdated by current hardware standards and could easily be coded around.
The best bit was that you debunked your last point with the very answer you gave for why it was valid.
Adding a PhysX card to your computer won't add any benefit that allows for larger levels, more players, or more expansive maps. His answer: "only if we find out suddenly that everyone's computers are much much much more powerful".
Simply plopping in a PhysX card DOES NOTHING TO REMEDY THIS! All it could do is take advantage of some poorly coded eye candy that your video card may or may not be able to render fully on screen at a reasonable frame rate.
Because, as he said, the limiting factor is the power of the computer. Not the video card, not the PhysX card... the WHOLE COMPUTER. If you've only got 512 MB of RAM, a PhysX card won't make Battlefield 2 run any better on your system.
If you have an old Athlon XP 1800+, Quake 4 will not run any faster or look any better with a PhysX card.
In fact, the only performance segment where a PhysX card would be worth its while is in HIGH-END systems. And most lower-tier high-end systems employ dual-core CPUs. At which point, proper coding would make the PhysX card completely worthless, because the data could be calculated faster, sent to the video card quicker, and run with less overall system overhead (no crappy legacy PCI bus constraint) by using SMP to dedicate the second core to nothing but physics calculation.
But even then, the game has to be coded around the lowest common denominator, with eye candy thrown on top for those with greater horsepower behind them. This has been, and always will be, the modus operandi of developers around the world. And the PhysX card will never change that, regardless of how much their marketing might try to make you believe otherwise.
Not that it makes any difference, though, considering all of this is just falling on deaf ears anyway.
DarkSynergy said:Yes!
As will I. There will have to be a great game, or two or three, for the launch to take off, but I feel that Ageia's separate-slot approach is heading down the right path.
And also, you have read correctly. The SDK is being given away for all to design games with, and the money will be made on the cards themselves. They are somewhat at the mercy of how well games are designed to take advantage of the PPU, but they have some heavy hitters on board with them, so I am not too worried about that.
Give me my dedicated card and keep the GPU clock cycles to the games!
You can argue that dual core will be the ultimate coup de grâce of the PPU but I feel that the more game programmers learn to code with dual cores, the more they will take advantage of the fact that they can either code the game to use one core for gaming and one for physics or both for the game and none for physics. You can argue that you only need one core for in-game physics, but I can argue that with a dedicated PPU, the entire processing power of the CPU can be used to the full advantage of the gaming end user. We could both be wrong, but only time will tell. Stay tuned as I have a PPU on the way for review and I will let you all know what is found out. If it sucks in real time, then I will let you know but if it rules, you will know that as well.
Buck-O said:Well at least this is a post from a respected user...
But again, I fail to see the merit.
"I feel that Ageia's separate slot approach is heading down the right path."
How do you figure? Like I said, it's like adding something extra that doesn't need to be there. And it's a technology that is two years late and $200 short of being impressive.
"Give me my dedicated card and keep the GPU clock cycles to the games!"
I'm assuming this is a Freudian slip and you actually mean CPU. The GPU does nothing in terms of physics processing for the game; that's all handled by and loaded onto the CPU. However, the dirty little secret here is that any and all data processed by the PhysX card still has to be rendered by the GPU. If your GPU can't hack it, you still won't see any marked improvement in gameplay. All you will be is a frustrated consumer who just wasted $200 on something he can't get the full benefit of.
"You can argue that dual core will be the ultimate coup de grâce of the PPU but I feel that the more game programmers learn to code with dual cores, the more they will take advantage of the fact that they can either code the game to use one core for gaming and one for physics or both for the game and none for physics."
Again, they have to program to the lowest common denominator, making the effectiveness of coding for dual core AND PhysX very dubious at best. And as it stands right now, dual-core implementation in games has been very poor, because dedicated offloading to an individual CPU has been very limited by how well the game's kernel can support multiple threads. Considering that PhysX is essentially a separate entity, offloading it to a secondary CPU would be very simple to implement, even at a user level, within Windows. In fact, I wouldn't be at all surprised if, in the coming months as Ageia-supported games are released, we see hacks that allow the PhysX computations to be offloaded directly to a second core, in its own thread.
"Stay tuned as I have a PPU on the way for review and I will let you all know what is found out. If it sucks in real time, then I will let you know but if it rules, you will know that as well"
Believe me, I'm ripe with anticipation. However, I want to see results on real games, with real advantages, and no special demos to ooh and aah at. Because right now, the marketing is about the only thing I see.