CPU Multithreading please.


zanzark
10-01-2011, 03:57 PM
I'm playing CoD right now, at an uncomfortable 22-30 FPS.

My GPU is a 6950@920MHz 2GB, BUT the game is using only 66% of its capacity (as reported by GPU-Z and AtiTrayTools).
I had a 6850 1GB before and saw no FPS improvement when I got the new card.

My CPU is a Phenom 1090T X6, but the game uses only 1.5 cores at most.

So I assume the sim is bottlenecked at the CPU, as it's incapable of using more than 1.5 cores.

I put the plane in autopilot over the sea and logged the FPS:
CPU@3200MHz = 29FPS
CPU@3700MHz(+15%) = 33FPS (+14%)
CPU@4100MHz(+28%) = 37FPS (+27%)

So, the big problem is the CPU and the fact that the main thread runs on only 1 core.
If they could multithread it, everybody using a 2+ core CPU would have a massive FPS improvement.
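
FPS tracking the CPU clock almost 1:1 like this is the classic signature of a single-thread CPU bottleneck; a GPU-bound game would barely move. A quick sketch of the arithmetic with my numbers above hard-coded (Python, purely illustrative):

```
# CPU clock vs. measured FPS from the autopilot test above.
baseline_mhz, baseline_fps = 3200, 29
for mhz, fps in [(3700, 33), (4100, 37)]:
    clock_gain = mhz / baseline_mhz - 1
    fps_gain = fps / baseline_fps - 1
    # fps_gain tracking clock_gain this closely means the main thread
    # on one core is the bottleneck, not the GPU.
    print(f"{mhz} MHz: clock {clock_gain:+.0%}, fps {fps_gain:+.0%}")
```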

LoBiSoMeM
10-01-2011, 06:24 PM
So, the big problem is the CPU and the fact that the main thread runs on only 1 core. If they could multithread it, everybody using a 2+ core CPU would have a massive FPS improvement.

+1000000. The devs need to work in this direction. It's hard work, I know, but the time is now...

335th_GRAthos
10-01-2011, 06:41 PM
and your setting of ProcessAffinityMask= is?????

zanzark
10-01-2011, 09:08 PM
and your setting of ProcessAffinityMask= is?????

What difference does it make?
AffinityMask is only to choose which core you want to use, not to use more than one.

335th_GRAthos
10-01-2011, 11:10 PM
What difference does it make?
AffinityMask is only to choose which core you want to use, not to use more than one.

Says who!? Totally wrong.

Process Affinity =
=1 - core 0
=2 - core 1
=3 - cores 0+1
=4 - core 2
=5 - cores 0+2
=6 - cores 1+2
=7 - cores 0+1+2
=8 - core 3
=9 - cores 0+3
=10 - cores 1+3
=11 - cores 0+1+3
=12 - cores 2+3
=13 - cores 0+2+3
=14 - cores 1+2+3
=15 - cores 0+1+2+3


PS: make sure that you remove the ; sign in front of the ProcessAffinityMask= line in your conf.ini, since the ; comments the line out (just in case)
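
For reference, the value is just a bitmask: bit N set means core N is allowed, which is also how Windows itself interprets a process affinity mask. A small illustrative sketch (Python, nothing from the game files) that decodes any value:

```
def cores_from_mask(mask: int) -> list[int]:
    """Decode an affinity mask value: bit N set => core N allowed."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

assert cores_from_mask(3) == [0, 1]         # 0b0011
assert cores_from_mask(11) == [0, 1, 3]     # 0b1011
assert cores_from_mask(15) == [0, 1, 2, 3]  # 0b1111
```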


~S~

CWMV
10-01-2011, 11:50 PM
Now the question is: does that actually work?
I know that in IL2, regardless of the above assignment, it would only use 1 core.
Has multicore support been built into CoD?
I guess I'll fire it up and check out my core usage. We shall see...

LoBiSoMeM
10-02-2011, 12:07 AM
Here in my default conf.ini I don't have this setting anymore.

All settings I use now give the same result: Launcher.exe spread over my 4 cores.

zanzark
10-02-2011, 03:01 AM
Says who!? Totally wrong.
[...affinity table snipped...]


Sorry, I assumed that because that's how it usually works with Linux daemons.

My conf.ini does not have that option...

Well, I tried using it, and the difference I noticed is that instead of one fully used core and half of another, I now get the load spread evenly across 4 cores at 30% each.

So, as I imagined, the game itself runs on only 1 thread; there is no way to make it use more right now.

I think there are additional threads for terrain loading, but that doesn't improve your normal FPS while no terrain is being loaded.
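
If you want to check this properly rather than eyeballing Task Manager, something like the following shows it. This is just an illustrative sketch using the third-party psutil package (Launcher.exe is the game process, per the posts above):

```
import psutil  # third-party: pip install psutil

# Find the game process by name.
proc = next(p for p in psutil.process_iter(["name"])
            if p.info["name"] == "Launcher.exe")

print("allowed cores:", proc.cpu_affinity())  # the affinity mask, decoded
print("thread count :", proc.num_threads())
# One thread accumulating nearly all of the busy time means a single hot
# main thread, even when the scheduler spreads it around so that every
# core shows a moderate load.
for t in sorted(proc.threads(), key=lambda t: -t.user_time)[:5]:
    print(f"thread {t.id}: user={t.user_time:.1f}s sys={t.system_time:.1f}s")
```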

NedLynch
10-02-2011, 04:46 AM
It is my understanding that process affinity was originally included in the config but was removed quite a while ago by subsequent patches.

At some point I added the line back in with a value of 15, and performance was not only no better, it was worse on my comp.

I do agree that the game benefits from increased CPU performance/overclocking; I have the same experience here, with a much better FPS rate when I run the game with my CPU overclocked (right now up to 4.1GHz). At the same time, game performance IMO is also influenced by how the RAM is configured when the CPU is overclocked.
I think I get my best playability so far with a mild CPU overclock (3.8GHz) via the FSB alone, while increasing the RAM speed at the same time.

Tvrdi
10-03-2011, 10:21 AM
I guess the code is not finished yet...

TonyD
10-03-2011, 01:20 PM
...
I think I get my best playability so far with a mild CPU overclock (3.8GHz) via the FSB alone, while increasing the RAM speed at the same time.

+1. I could not do this on my previous rig as I was using DDR-II 1066 memory, which couldn't run any faster than the default 533MHz. With the CPU at 4GHz, I had a nominal 5% increase in frame rate compared to the default 3.4GHz. My current memory manages 750MHz without any voltage mod, achieved by setting the bus speed to 225MHz (CPU at 3.825GHz). This 12.5% increase in the base clock results in an almost identical increase in frame rates.

This implies that an increase in CPU speed alone will not noticeably improve the frame rates in CloD, but an increase in data throughput capability certainly does, at least on my system. Perhaps what this sim really needs is an Intel hexa-core with tri-channel memory, or even an Opteron FX with quad-channel memory...

And the 'process affinity mask' setting is no longer in the config file; not that fiddling with it yielded any gain in my case either.
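
For clarity, all of those numbers hang off the 225MHz base clock; a tiny sketch of the relationships (Python, with the x17 CPU multiplier inferred from 3825/225):

```
base_mhz = 225                # overclocked bus/base clock (stock: 200)
cpu_mult = 17                 # inferred from 3825 MHz / 225 MHz
print("CPU clock:", base_mhz * cpu_mult, "MHz")      # 3825 MHz
print(f"base clock gain: {base_mhz / 200 - 1:.1%}")  # 12.5%
# Frame rates rose by roughly this same 12.5%, which points at data
# throughput rather than raw CPU clock as the limiter here.
```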

NedLynch
10-03-2011, 11:52 PM
Got my FSB currently at 224, so pretty much the same.

Tony, how is that video card working for you? Just asking since I am considering maybe switching to AMD, even though the vast majority of gamers use Nvidia. But I am kind of planning to stay with AMD CPU-wise.

On the other hand, I think the new chipsets, like the one you have, now officially support both CrossFire and SLI. Ahhh, decisions, decisions.....:grin:

TonyD
10-05-2011, 11:18 AM
Hi Ned

I’ve used Ati cards for my own machines since the 8500, and have always been satisfied with them. I have had a fair amount of experience with nVidia cards as well in various builds for others, and to be honest, there’s not a lot of difference between the two. I’ve always preferred Ati because of their more efficient approach (which typically results in better pricing), but few users would be aware or even care about the architecture of whatever card they may have, as long as it works OK. In the end it boils down to personal preference, and it’s usually easier to decide on a product that’s familiar. Being an AMD fan as well makes the choice somewhat easier for me since they acquired Ati, but you are right about the 990 chipset – my board supports Sli too, so you are no longer forced to buy Intel if you want to go this route.

As far as the 6970 goes, its performance generally falls between the GeForce 570 and 580, and locally it's priced a fair bit cheaper than the 570, so a no-brainer in my case. My son has an MSI HD5870 Lightning I've used for comparison; the 6970 is slightly faster than his card in DX9 games (which most modern games are), but also allows higher detail settings in CloD due to the additional memory, which is what I was interested in. It is a fair bit longer than the 5870 (and my previous 5850), which required me to mod my case for it to fit, but at least the power requirement isn't any greater.

In my opinion, you won't really experience a huge increase in frame rates over your 470 in CloD, although you will probably be able to turn some detail up without suffering any loss (probably not what you wanted to hear if you were intending to buy a new card :-P) I replaced my 5850 because my daughter needed a new card (her 512MB 3870 was getting a bit long in the tooth), rather than because I wanted more performance. If I had done it solely for more speed I'd have been disappointed, but that is always the case when replacing a high-end card with a newer model.

I hope this has been somewhat helpful.

NedLynch
10-06-2011, 12:34 AM
Very helpful, thanks :grin:.

The 6970 looks really interesting right about now, bang for the buck.
An increase in frame rates is one thing (maybe I won't be getting much over my 470), but the big thing is that more and more games are quite VRAM-hungry, so the main reason to buy a new card would be more available VRAM.

Two other things that concern me are:

Would switching from Nvidia to AMD pose any problems: driver issues, necessary uninstalls, or anything operating-system-wise?

There is a new PCIe x16 3.0 standard coming; so far I have only seen Intel mobos with future support for it (when Ivy Bridge comes out). Have you seen anything from AMD regarding this?

TonyD
10-06-2011, 10:32 AM
Switching to a different graphics chipset? I would use DriverCleaner to completely remove any trace of the previous drivers before installing the new ones. I had to do this when changing from my 5850 to the 6970, due to Win7 intelligently (!) deciding to keep the previous clock profiles.

PCIe 3.0? It is apparently due on mainboards towards the end of the year, and the AMD 7xxx cards (which should be out around December this year) are rumoured to be compliant, as is Intel's Ivy Bridge (rumoured for Q2 2012). This will allow greater bandwidth than is currently available, but is this a huge issue? It's not as if the current version 2.0 has reached saturation. And the new standard is apparently backward compatible with version 2.0, so there shouldn't be any hardware conflicts. The trouble is, if you decide to wait for the next rumoured hardware standard you will never purchase, as there is always something bigger-better-faster-more on the way.

NedLynch
10-09-2011, 06:13 PM
I know, you wait, and as soon as you think you've made up your mind about a purchase, something else pops up.

With PCIe x16 3.0 I meant more whether you had come across any info about AMD mobos. Eventually they will have to implement it, but as of right now I can only find Intel mobos with future 3.0 compatibility.

I think, though, that at this point, since I can run things very nicely with my current comp, waiting may be an option, with a lot of new things coming out. At least waiting until the end of the year.

rfxcasey
11-03-2011, 06:14 AM
This implies that an increase in CPU speed alone will not noticeably improve the frame rates in CloD, but an increase in data throughput capability certainly does, at least on my system...

I'd have to say it is more the fact that it is DDR3; you are presumably keeping the same timings and increasing the FSB. The higher the clock speed your memory runs at, the looser the timings you'll typically have to use. If you can keep the same timings and increase the frequency, you will notice some improvement. If you have a very good set of memory that allows overclocking while keeping the same timings, you should tighten the timings rather than overclock.

As shown in numerous benchmarks, there is virtually no difference between running, for instance, DDR2 at 400MHz with 5-5-5-15 timings or at 533MHz with 5-6-6-18. The main advantage of DDR3 is the increased data path of 240 pins (not all of which are used for data) as opposed to 184-pin DDR2. Couple that with the increased bus width going from, say, AM2 to AM3 (or a comparable last-generation-to-current Intel comparison) and the performance increase is noticeable. But don't be fooled by the marketing hype that higher clock frequencies automatically translate into big performance gains, or any at all.

The main thing to look at is the quality of the memory and how tight you can get the timings at a given frequency. If you spare your memory the overclock and instead tighten the timings as much as you can while maintaining stability, benchmarks have shown there is virtually no difference in gaming performance or otherwise. The overclock will only generate more heat and electrical stress for no gain compared to the tighter timings.
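
To put rough numbers on that: a timing in nanoseconds is just cycles divided by clock, so a higher clock with proportionally looser timings nets out about the same. A back-of-the-envelope sketch using the example above (Python, illustrative only):

```
def timing_ns(clock_mhz: float, cycles: int) -> float:
    """Convert a memory timing from clock cycles to nanoseconds."""
    return cycles / clock_mhz * 1e3

# DDR2 at 400 MHz 5-5-5-15 vs. 533 MHz 5-6-6-18 (the example above):
for clock, timings in [(400, (5, 5, 5, 15)), (533, (5, 6, 6, 18))]:
    ns = [round(timing_ns(clock, c), 1) for c in timings]
    print(f"{clock} MHz {timings} -> {ns} ns")
# The absolute latencies land within a few nanoseconds of each other,
# which is why the two configurations benchmark so close together.
```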


I've always preferred Ati because of their more efficient approach (which typically results in better pricing)...

The Nvidia GTX 560 Ti outperforms the 6950 while using less power: http://www.hwcompare.com/8889/geforce-gtx-560-ti-vs-radeon-hd-6950/

NedLynch
11-03-2011, 09:40 PM
Funny thing about the timings of my memory.

It is 1600MHz memory; the motherboard of course defaults it to 1333. But in the memory's SPD, the timings for 1600 are tighter than for 1333, at least according to CPUID/CPU-Z.

So along with the CPU FSB overclock I also tightened the memory timings to the ones indicated by the 1600MHz SPD, though the memory is not running at 1600. It's running at 1500-ish, due to the FSB frequency of 224 and starting out from the default 1333.

Somewhere I read that AMD likes tighter timings anyway, while Intel seems to do better with looser timings.
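
The 1500-ish figure is just the 1333 setting scaled by the FSB; a one-line sanity check (Python):

```
stock_fsb, oc_fsb = 200, 224   # MHz base clock, stock vs. overclocked
ram_setting = 1333             # DDR3 speed selected at the stock base clock
print(ram_setting * oc_fsb / stock_fsb)  # ~1493 -> the "1500-ish" figure
```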

desigabri
11-12-2011, 09:47 PM
Hi,
here are some tests I've done to understand what influences IL-2 CoD performance:

These are the common options for all tests:
http://img828.imageshack.us/img828/3295/commonsettings.jpg
notes:
- The HD6950 is a 2GB version tweaked up to HD6970 speeds
- The CPU is air-cooled with a Corsair A70 (it barely fit into the case :( )
- DDR3 16GB Vengeance 1600MHz
- no Windows paging file
- Game loaded onto a 6GB ramdisk to cut down the game and mission loading times (and in-game object loading during flight)
- Tests made by playing the "Black Death" track and using the FPS SHOW START internal option
- Radio voice communication OFF in the audio menu settings

- Common game settings:
http://img851.imageshack.us/img851/8093/maxlow.jpg

overclock 1:
http://img210.imageshack.us/img210/5139/cpucloktest.jpg
this test seems to show that CPU overclocking benefits in-game FPS

overclock 2:
http://img507.imageshack.us/img507/6103/cpucloktest2.jpg
this test seems to confirm the previous one and shows how heavy the shadows option is

GPU overclock:
http://img824.imageshack.us/img824/6847/gpusettingstest.jpg
this one seems to show that GPU overclocking doesn't help (???!)

affinity:
http://img194.imageshack.us/img194/9850/cpuaffinitytest.jpg
assigning the process to one, two, three, four and six cores...
- Note that from 2 cores up, launcher.exe performance doesn't change.
- The best setup seems to be a dual-core processor, which gets the best minimum FPS.
- A single-core CPU can't keep up with the others.

other settings:
http://img819.imageshack.us/img819/2116/othersettingstest.jpg
no performance changes are visible here from minor settings or from process priority assignments

power boost:
http://img832.imageshack.us/img832/6060/driverboost.jpg
same conclusion for the Catalyst boost settings

very low:
http://img209.imageshack.us/img209/8427/allsettingsonverylow.jpg
very low settings help a lot with FPS results; anyway, you can see that "effects" is very heavy here for minimum FPS, even when set to MEDIUM

repeatable:
http://img580.imageshack.us/img580/7848/attendibilityrepeatibil.jpg
the last comparison is made only to show that tests run at different times with the same options give the same results.

Before these tests I believed the GPU overclock mattered more than the CPU overclock (that was my conclusion for ArmA2). For IL-2 CoD it seems to be the opposite.

diveplane
11-13-2011, 03:01 PM
The code still needs a lot of attention for performance.

I'm sure they are hard at work on a big frame-rate performance boost.