Discuss the future of game graphics



CGI quality stuff in realtime, and yet I still found that a bit underwhelming. Also, even with all that horsepower we're still very much in the uncanny valley. Wonder what it will take to get to the other side.


Quote

CGI quality stuff in realtime, and yet I still found that a bit underwhelming. Also, even with all that horsepower we're still very much in the uncanny valley. Wonder what it will take to get to the other side.

6 Titan X's.

Square Enix have a FFXV DX12 realtime demo. Jump to 1:30 for the good stuff to start.

4 Titan X's.... (increase the performance of a PS4 by an order of magnitude then double it :lol:)

The talk about the hair made me do the Alan Partridge shrug in real life. I am sure on a technical level it is amazing; as a consumer I could not care less.




I'm interested in graphics from a somewhat uncommon (here, certainly) PoV: in PvP MMORPGs with hundreds of players on screen, even the most optimised engines (that is, with graphics dynamically scaled back to the previous century) will struggle not to crush the server. I wonder whether I will live to see the graphical ceiling being hit in those circumstances, having maybe 40 or 50 years left. I'm optimistic though.



GDC is happening, and various leading companies have given tech-related presentations. Nvidia and AMD co-hosted a session about DirectX 12, which brings all the fun of console programming to PC (so the average small-time indie will be sticking with DX11, which isn't being dropped, for a while longer):


 

Quote

 

DirectX 12 is for those who want to achieve maximum GPU and CPU performance, but there’s a significant requirement in engineering time, as it demands that developers write code at a driver level that DirectX 11 takes care of automatically. For that reason, it’s not for everyone.

 

The use of root signature tables is where optimization between AMD and Nvidia diverges the most, and developers will need brand-specific settings in order to get the best benefits on both vendors’ cards.

 

It’s important for developers to keep in mind the bandwidth limitations of different versions of PCI Express (the interface between motherboard and video card), as PCIe 2.0 is still common and grants half the bandwidth of PCIe 3.0.

 

 

http://www.dualshockers.com/2016/03/14/directx12-requires-different-optimization-on-nvidia-and-amd-cards-lots-of-details-shared/
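The point about root signatures being the main area of vendor divergence is easier to picture with a bit of code. Below is a minimal sketch (mine, not from the session or the article) of building a D3D12 root signature with one descriptor table plus a few root constants. The API names are standard D3D12, but the particular layout is purely illustrative; how much you put directly in the root versus behind descriptor tables is exactly the sort of knob you might end up tuning differently per vendor.

```cpp
// Minimal, illustrative D3D12 root signature creation (link against d3d12.lib).
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12RootSignature> CreateExampleRootSignature(ID3D12Device* device)
{
    // One descriptor table of SRVs (e.g. material textures)...
    D3D12_DESCRIPTOR_RANGE srvRange = {};
    srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    srvRange.NumDescriptors = 4;
    srvRange.BaseShaderRegister = 0; // t0..t3

    D3D12_ROOT_PARAMETER params[2] = {};
    params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[0].DescriptorTable.NumDescriptorRanges = 1;
    params[0].DescriptorTable.pDescriptorRanges = &srvRange;
    params[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;

    // ...and a handful of root constants for per-draw data. Whether data like
    // this lives in the root or in a table is the vendor-specific tuning point.
    params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS;
    params[1].Constants.ShaderRegister = 0; // b0
    params[1].Constants.Num32BitValues = 4;
    params[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 2;
    desc.pParameters = params;
    desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    // Serialize and create the root signature object.
    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}
```

None of this boilerplate exists in DX11, which is the "engineering time" cost the quote is talking about.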

 

Also Remedy have a presentation about how they ported their game engine to DX12:

 

http://wili.cc/research/northlight_dx12/GDC16_Timonen_Northlight_DX12.pptx



A technological inevitability really. Film directors can already shoot scenes using a "virtual camera", but because the scene is too computationally expensive, they look at a pre-viz environment rather than the finished shot. In the case of a game, the scene is computationally cheap enough to run in real time, so they can use that for the virtual camera. Kojima did something similar for the Death Stranding trailer, and I expect most game directors who lean on performance capture will move over to it.



I've just ordered a Sony BVM, and to go against the grain I won't be playing 240p stuff right away (no SNES or AES yet, and I'm not sold on Wii emu), but rather things like F-Zero GX and the Prime Trilogy in spanking 480p. So I've just had an interesting thought... Where would we be with graphics if we were still stuck on SD displays? Imagine what an XB1X or a GTX 1080 Ti could do if forced to peak at 640x480.



It's finally happened.  I really don't give a shit about graphics any more. 

 

Like, that looks amazing. But man. The industry has been chasing the graphics dragon for a long time, and all it's really meant is that consumers spend on hardware more frequently, as opposed to giving us much in the way of truly new experiences / ideas.


Nvidia and Microsoft have announced their Next Big Thing: real-time raytracing for DirectX 12, called DXR. Some fairly well-known devs have been playing with it for a while, and it'll only really work well on Nvidia's next generation of GPUs:

 

https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/

 

https://www.anandtech.com/show/12547/expanding-directx-12-microsoft-announces-directx-raytracing

 

Quote

For today’s reveal, NVIDIA is simultaneously announcing that they will support hardware acceleration of DXR through their new RTX Technology. RTX in turn combines previously-unannounced Volta architecture ray tracing features with optimized software routines to provide a complete DXR backend, while pre-Volta cards will use the DXR shader-based fallback option. Meanwhile AMD has also announced that they’re collaborating with Microsoft and that they’ll be releasing a driver in the near future that supports DXR acceleration. The tone of AMD’s announcement makes me think that they will have very limited hardware acceleration relative to NVIDIA, but we’ll have to wait and see just what AMD unveils once their drivers are available.

 

Quote

 

While practical considerations mean that rasterization has – and will continue to be – the dominant real-time rendering technique for many years to come, the holy grail of real-time graphics is still ray tracing, or at least the quality it can provide. As a result, there’s been an increasing amount of focus on merging ray tracing with rasterization in order to combine the strengths of both rendering techniques. This means pairing rasterization’s efficiency and existing development pipeline with the accuracy of ray tracing.

 

While just how to best do that is going to be up to developers on a game-by-game basis, the most straightforward method is to rasterize a scene and then use ray tracing to light it, following that up with another round of pixel shaders to better integrate the two and add any final effects. This leverages ray tracing’s greatest strengths with lighting and shadowing, allowing for very accurate lighting solutions that properly simulate light reflections, diffusion, scattering, ambient occlusion, and shadows. Or to put this another way: faking realistic lighting in rasterization is getting to be so expensive that it may just as well be easier to do it the right way to begin with.
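Purely to illustrate that hybrid structure (rasterize, ray trace the lighting, then composite), here's a rough frame-loop sketch. Only SetPipelineState1 and DispatchRays are real D3D12/DXR calls; BuildGBuffer and Composite are hypothetical placeholders standing in for the raster and final pixel-shader passes, not anything from an actual engine.

```cpp
// Sketch of a hybrid raster + ray-traced frame, per the approach described above.
#include <d3d12.h>

// Hypothetical helpers: a normal DX12 raster pass and a full-screen blend pass.
void BuildGBuffer(ID3D12GraphicsCommandList4* cmd);
void Composite(ID3D12GraphicsCommandList4* cmd);

void RenderHybridFrame(ID3D12GraphicsCommandList4* cmd,
                       ID3D12StateObject* rtPipeline,
                       const D3D12_DISPATCH_RAYS_DESC& raysDesc)
{
    // 1) Rasterize geometry into a G-buffer (albedo, normals, depth) using the
    //    existing graphics pipeline -- the cheap, well-trodden path.
    BuildGBuffer(cmd);

    // 2) Ray trace the lighting against that G-buffer: shadows, reflections,
    //    ambient occlusion. This is the DXR part; rays are launched per pixel.
    cmd->SetPipelineState1(rtPipeline);
    cmd->DispatchRays(&raysDesc);

    // 3) A final round of pixel shaders blends the ray-traced lighting with the
    //    rasterized buffers and applies any remaining effects.
    Composite(cmd);
}
```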

 

 


So how does the gap close between the computational expense of using ray-tracing, and the programmer effort you have to put in when you fake it?

 

1) Start using ray-tracing for situations that are difficult to fake convincingly (water, glass, fire)

 

2) ???

 

3) Cheaper to ray-trace everything than use special techniques to fake each individual case

 

2 would be something like the expectation for the quality of the fake going up, or the number of things that have to be faked going up, something like that?
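A toy way to picture that crossover (numbers entirely made up): each faked effect adds its own runtime cost and authoring burden, while ray tracing pays one bigger, roughly flat cost that covers all of them, so past some effect count the unified approach wins.

```cpp
// Made-up cost model illustrating the fake-vs-trace crossover; not real profiling data.
#include <cstdio>

int main()
{
    const double costPerFakedEffect = 1.0;  // hypothetical cost of one raster-era trick
    const double rayTraceEverything = 6.0;  // hypothetical flat cost of tracing it all

    for (int effects = 1; effects <= 10; ++effects) {
        double fakedTotal = effects * costPerFakedEffect;
        std::printf("%2d effects: faking = %4.1f, ray tracing = %4.1f -> %s\n",
                    effects, fakedTotal, rayTraceEverything,
                    fakedTotal < rayTraceEverything ? "fake it" : "trace it");
    }
    return 0;
}
```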

