
PlayStation 4 Console Thread


mushashi


It's likely to be publisher-led. So some titles will be plagued with always-online/DRM and some won't.

I think this is likely. Sony are going even more wild west this gen with self-publishing and the like.

Also, nicked from the XBone thread,

We asked Avalanche’s chief technical officer Linus Blomberg how the two consoles compare. “It’s difficult to say, as it’s still early days when it comes to drivers,” he told us. “With each new driver release, performance increases dramatically in some areas.

The PlayStation 4 environment is definitely more mature currently, so Microsoft has some catching up to do. But I’m not too concerned about that as they traditionally have been very good in that area. The specs on paper would favour the PS4 over the Xbox One in terms of raw power, but there are many other factors involved so we’ll just have to wait and see a bit longer before making that judgment.”


At least if things are publisher led we can give them shit about it. With Xbox One that's just how the system works*, no changing it. When such features are elective the onus is then on the publisher to justify why things are as they are, rather than just deferring to the platform holder.

Personally I'm expecting the PS4 to have some system-level feature for publishers to get a cut of second-hand games.

*Probably. Not even Microsoft seems to know.


Personally I'm expecting the PS4 to have some system-level feature for publishers to get a cut of second-hand games.

They can make money from second-hand games by selling DLC, or hats and pants, just like they have this gen; I'm pretty sure that's how PSN+ justifies itself to publishers who offer up their games, and hopefully this is how it works with PS4.


Hey, I didn't say it was a good idea; trying to scupper second-hand seems pointless to me, as it's a market that will inevitably disappear in the future. Companies should be looking to the carrot, not the stick.

Fact is Sony is quite the fan of online passes themselves, so I wouldn't be surprised to see base level functionality that makes such systems more seamless for the original purchaser; no faffing about with codes, basically.


Hey, I didn't say it was a good idea; trying to scupper second-hand seems pointless to me, as it's a market that will inevitably disappear in the future. Companies should be looking to the carrot, not the stick.

(I didn't say you did (did I? Didn't mean it if I did!))

Exactly, they should be 'nudging' people into paying for second hand games in other ways, rather than NO PAY NO PLAY.

PS4 = 8 Atari Jaguars duct-taped together. Confirmed.

Tempest 2000 BC confirmed!


The PS4 isn't twice as powerful as the Xbox One; it's at most 1.5 times more powerful, in some measures, with some caveats. So to talk about 30 FPS vs 60 FPS is rather optimistic.

I could see the PS4's greater performance leading to higher and more stable frame rates, however, the same as (generally) favoured the 360 this generation. It's where leftover performance will go if the devs don't do anything specific to use it, after all.

Funnily enough, one of the people who had sauce on NeoGAF seems to have been travelling on the road to Damascus recently:

Originally Posted by Proelite:
More ROPs, more bandwidth, more CUs, no hoops to jump through, no VM overhead results in twice the framerate for third parties. Sounds reasonable.

It's weird that you woke up and realized what you were smoking earlier;

Originally Posted by Proelite:
Having 33% more flops is only a slight difference in comparison to past console wars. The Xbox had nearly a 200% advantage over the GameCube. The PS3 had, on paper, a 67% advantage over the 360.

The ram advantage for PS4 is far more worrying than a slight perceived GPU advantage.

Originally Posted by artist:
It's weird that you woke up and realized what you were smoking earlier;

Well, if both consoles had the same OS/subsystem, the difference would be minimal.

A 3GB OS, VMs, and the OS using up CPU/GPU resources at all times can be really cumbersome.


If the method of using DDR3 plus eSRAM is just as quick as GDDR5, then why aren't all the graphics card manufacturers doing it?

In addition to my earlier reply: PC graphics cards don't need ultra-low-latency cache; that's what the CPU likes, and PC CPUs have great big globs of the stuff. That's why I think Sony could have issues around latency, as the entire system is GDDR5 with only a very small cache for the CPU.


In addition to my earlier reply: PC graphics cards don't need ultra-low-latency cache; that's what the CPU likes, and PC CPUs have great big globs of the stuff. That's why I think Sony could have issues around latency, as the entire system is GDDR5 with only a very small cache for the CPU.

AMD's PC variants of the HSA idea use bog-standard DDR3 for their video RAM, and it shows in the performance (worse than what you could get from a dedicated card). As the testing shows, faster but higher-latency RAM gives better overall performance in games than lower-latency but slower RAM:

[Image: benchmark chart of APU gaming performance at different RAM speeds]

Is the extra latency likely to be a big problem? Who knows; if you know where the bottlenecks are, you can develop with them in mind to hide any problems they might cause.


Then where do you believe these latency problems are going to occur?

When the CPU needs to access memory.

If the CPU needs to read data from memory and it isn't in the tiny cache, then you have a lot of CPU cycles of nothing while it waits for that data to be read. I saw a decent analogy the other day: the difference between SRAM cache and GDDR5 memory is like getting something from the cupboard in your kitchen versus having to go down the supermarket.


When the CPU needs to access memory.

If the CPU needs to read data from memory and it isn't in the tiny cache, then you have a lot of CPU cycles of nothing while it waits for that data to be read. I saw a decent analogy the other day: the difference between SRAM cache and GDDR5 memory is like getting something from the cupboard in your kitchen versus having to go down the supermarket.

That's in a PC, though; don't you think any of the changes Sony and Mark Cerny have made (and lots of traditional CPU workload will be done on the GPU*) are going to mitigate that?

One barrier to this in a traditional PC hardware environment, he said, is communication between the CPU, GPU, and RAM. The PS4 architecture is designed to address that problem.

"A typical PC GPU has two buses," said Cerny. "There’s a bus the GPU uses to access VRAM, and there is a second bus that goes over the PCI Express that the GPU uses to access system memory. But whichever bus is used, the internal caches of the GPU become a significant barrier to CPU/GPU communication -- any time the GPU wants to read information the CPU wrote, or the GPU wants to write information so that the CPU can see it, time-consuming flushes of the GPU internal caches are required."

The three "major modifications" Sony did to the architecture to support this vision are as follows, in Cerny's words:

"First, we added another bus to the GPU that allows it to read directly from system memory or write directly to system memory, bypassing its own L1 and L2 caches. As a result, if the data that's being passed back and forth between CPU and GPU is small, you don't have issues with synchronization between them anymore. And by small, I just mean small in next-gen terms. We can pass almost 20 gigabytes a second down that bus. That's not very small in today’s terms -- it’s larger than the PCIe on most PCs!

"Next, to support the case where you want to use the GPU L2 cache simultaneously for both graphics processing and asynchronous compute, we have added a bit in the tags of the cache lines, we call it the 'volatile' bit. You can then selectively mark all accesses by compute as 'volatile,' and when it's time for compute to read from system memory, it can invalidate, selectively, the lines it uses in the L2. When it comes time to write back the results, it can write back selectively the lines that it uses. This innovation allows compute to use the GPU L2 cache and perform the required operations without significantly impacting the graphics operations going on at the same time -- in other words, it radically reduces the overhead of running compute and graphics together on the GPU."

Thirdly, said Cerny, "The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands -- the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that's in the system."

And unless they've made a terrible mistake engineering the PS4, where do you believe the latency problems will occur? 20GB/s is a lot of data for the PS4 CPU, is it not? So in what instance can you see it sitting around waiting for the delivery van to fill its cupboard?

I know very little about this, so don't think I'm having a go - I'm just generally interested in how this works!

Most people I saw on the B3D forums agree that the supposed high-latency issue with GDDR is rather insignificant.

Any links that a layman can understand?

* Cerny envisions "a dozen programs running simultaneously on that GPU" -- using it to "perform physics computations, to perform collision calculations, to do ray tracing for audio."


When the CPU needs to access memory.

If the CPU needs to read data from memory and it isn't in the tiny cache, then you have a lot of CPU cycles of nothing while it waits for that data to be read. I saw a decent analogy the other day: the difference between SRAM cache and GDDR5 memory is like getting something from the cupboard in your kitchen versus having to go down the supermarket.

I think "a lot of CPU cycles" is an exaggeration. AFAIK you'd need to be slinging extremely large amounts of data back and forth from the CPU pretty much constantly, across all your threads, to wind up with your game bound by the CPU transfer limit.

Cerny is talking there about the ability of the GPU and CPU to access the same piece of memory, which will help with GPGPU performance. As for bandwidth, yes they have a huge pipe, but the tap takes a while to be opened. This time around I think MS will have better CPU performance, but nowhere near enough for it to make up for the huge GPU advantage the PS4 has.

I still think Sony have the better solution, I just don't think what MS have is as terrible as the Internet seems to think.

