# Core 17 'advanced' WU



## cam51037

Well, I'm hearing reports of around 50k PPD on 7970s with this core, so GPU folders, get cracking!

Finishing a unit now; I'll report back on how it goes if it picks up a Core 17 unit.


----------



## aas88keyz

Just saw this core myself, and I wasn't supposed to do any folding today. HFM isn't reporting stats for it: no TPF, no credits, no PPD. Anything I need to add to get this to work? And did we confirm Nvidia needs one CPU core? My project is P7661.

Keep on foldin!


----------



## cam51037

Quote:


> Originally Posted by *aas88keyz*
> 
> Just saw this core myself, and I wasn't supposed to do any folding today. HFM isn't reporting stats for it: no TPF, no credits, no PPD. Anything I need to add to get this to work? And did we confirm Nvidia needs one CPU core? My project is P7661.
> 
> Keep on foldin!


Are you downloading the beta WU stats from Stanford for HFM?


----------



## aas88keyz

How? I tried downloading through HFM's tools and it downloaded an 8018, but nothing else.


----------



## cam51037

Quote:


> Originally Posted by *aas88keyz*
> 
> How? I tried downloading through HFM's tools and it downloaded an 8018, but nothing else.


Go to Edit>Preferences>Web Settings and change the project download URL to this: http://fah-web.stanford.edu/psummaryC.html

It should download it then.


----------



## mmonnin

Some more info here on proteneer's blog:

http://proteneer.com/blog/?p=1767


----------



## aas88keyz

I can confirm one CPU core for an Nvidia card. I even tried two, but the GPU only gained a couple of percentage points of utilization.


----------



## mmonnin

It WILL use one core. It's OpenCL.


----------



## Evil Penguin

Quote:


> Originally Posted by *mmonnin*
> 
> It WILL use one core. It's OpenCL.


The OpenCL implementation by nVidia.








AMD is doing very well in this regard.


----------



## george_orm

So AMD is back in the game for folding?


----------



## Evil Penguin

Quote:


> Originally Posted by *george_orm*
> 
> So AMD is back in the game for folding?


Damn right AMD is back in the game!








My 7970 is getting ~47k PPD.


----------



## aas88keyz

Can confirm one CPU core for Nvidia. I even tried two, but the FahCore only utilized another 2 or 3% more, so I'm going with one. Thanks for the HFM update link. Did that, and it now shows me base points for the WU, but still no TPF or PPD. I will keep refreshing it.

Sorry, this was basically a double post because the PC BSOD'd on me. Was Core 17 the cause? Beta will be beta, I guess.

Update: P7661 may be earning me 28k PPD according to FAHControl on a GTX 560 Ti 448, OC'd to an 850 MHz GPU clock and 2000 MHz memory clock. Here's hoping the next WU does better, but it's still a jump from what I was averaging this past weekend.

Keep on foldin'!


----------



## mmonnin

HFM won't show TPF. HFM will need an update.


----------



## aas88keyz

Quote:


> Originally Posted by *mmonnin*
> 
> HFM won't show TPF. HFM will need an update.


Thanks for that.


----------



## Gooberman

Getting 30,703 PPD with my 7950. Oh, it's nice, lol.


----------



## mmonnin

Heck yeah! 5.5mil PPD!!



Haha ok, not really.


----------



## ASSSETS

Oh yeah, but not so much on my 5870, just 12K.


----------



## mmonnin

What drivers are you using? And has that PPD in fahcontrol settled?


----------



## Kevdog

This is my unlocked 6950 @ 880mhz core using 12.11 beta7, it gets almost a steady 99% usage



EDIT:

Just changed drivers to 13.1

LOOK at the PPD now.... lol



EDIT:2

It went back down to the same as the first smaller pic, but the usage goes between 97-98%


----------



## ASSSETS

I upgraded to 7.3.6 and cannot understand how it works. In Task Manager, FahCore_17 has been running at 16% CPU since startup, but neither FAHControl nor the client connects to anything, and I cannot pause it.


----------



## ASSSETS

Installed AMD 13.1 and 7.2.9... waiting for PPD. CPU usage is 0-1%, GPU stable at 98% with a dip after each %.
I'll wait a few frames and report back.


----------



## aas88keyz

I am not a professional at this, but I will tell you what I know. The only way I can use FAHControl is by first starting the "Folding@home" Web Control that runs in your internet browser; look for it under the Start menu. Once I have started that, I close the browser and go back to configuring manually through FAHControl. From there you will also have a slider bar for how many resources you want to give the client; I don't know any better, so I set it to full. Also, your pause, finish, and continue options can be accessed by right-clicking the folding slot. Got to go to bed now, but I hope that helped.

Keep on foldin'!


----------



## ASSSETS

Quote:


> Originally Posted by *aas88keyz*
> 
> I am not a professional at this, but I will tell you what I know. The only way I can use FAHControl is by first starting the "Folding@home" Web Control that runs in your internet browser; look for it under the Start menu. Once I have started that, I close the browser and go back to configuring manually through FAHControl. From there you will also have a slider bar for how many resources you want to give the client; I don't know any better, so I set it to full. Also, your pause, finish, and continue options can be accessed by right-clicking the folding slot. Got to go to bed now, but I hope that helped.
> 
> Keep on foldin'!


It is a known problem: the 7.3.6 control tries to connect to the client but cannot, so there's no way to use the slider or any options. Web Control isn't connecting either and keeps asking to run the client.
Just went back to 7.2.9.

For now, on a stock 5870 it shows a 2:50 TPF and 34K PPD.


----------



## ASSSETS

TPF 6:17 and 10K PPD


----------



## anubis1127

TPF of 2:40 good for ~40k PPD on my 7950 @ 1100mhz core.


----------



## gboeds

is there a HUGE difference between WUs on this project, or what?

I have two of these running on GTX460s and one running on a GTX480.

The 460s (900 MHz, 875 MHz) HATE them, estimating 14k PPD compared to 23-24k on 762*.

The 480 (800 MHz) loves it, estimating 36k PPD compared to 32k on 762*.


----------



## anubis1127

The WUs are probably pretty sensitive to time, being QRB units. It would seem the 460s can't quite finish them quickly enough for good bonus points.

TPF stabilized a bit on my 7950 after letting it go a bit, down to 2:31 good for 44k PPD.


----------



## Evil Penguin

Keep in mind these Core 17 projects have ~22k atoms vs. the usual fewer than 2,000 atoms.
Most of what you guys have folded before were implicit-solvent projects, not SMP-level explicit-solvent projects like these.


----------



## mmonnin

The equivalent benchmark seems to be higher than the 4xx-series cards, at least a 460. Since QRB is not linear, PPD will be lower for some 4xx cards than with the previous core. Maybe the number of atoms plays a part in that as well, since it's explicit (water molecules in the WU), as Evil P mentioned. Lower-end GPUs seem to do better with the smaller units.

No one's forcing people to run beta; 4xx cards can always run other WUs.
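Since the QRB keeps coming up: the bonus is usually described as a square-root multiplier on base points, which is why doubling your speed doesn't double your PPD. A rough Python sketch of that commonly cited formula (the function name and all constants here are illustrative, not official per-project values):

```python
import math

def estimate_credit(base_points, k_factor, timeout_days, elapsed_days):
    """Commonly cited quick-return-bonus formula:
    credit = base * max(1, sqrt(k * timeout / elapsed)).
    All values here are illustrative, not official project constants."""
    bonus = math.sqrt(k_factor * timeout_days / elapsed_days)
    return round(base_points * max(1.0, bonus))

# Returning a WU 4x faster only doubles the credit -- hence "not linear":
estimate_credit(1000, 0.75, 3, 1.0)   # 1500
estimate_credit(1000, 0.75, 3, 0.25)  # 3000
```

So a slower card that just squeaks past the deadline earns close to base points, while a fast card gets a disproportionate (but sub-linear) bonus.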


----------



## Bal3Wolf

Was gonna test these on my overclocked 7970s but cant seem to get them at all.


----------



## ASSSETS

Got right away, once beta flag added.


----------






## mmonnin

Bal, 7.3 or 7.2?


----------



## Bal3Wolf

Tried both, unless I'm doing the flags wrong or something, lol. I'm not a big-time folder on GPUs anymore.

Got it to work by uninstalling Folding, putting 7.2.9 back on, and removing the GPUs and adding them back.

lol, these units give some major screen lag. Both cards are staying at 97-98% using my modded 13.2 beta 7 drivers.


----------



## cam51037

Darn, now I really feel like I should have a 7950 over a 670, lol.

Whatever, you win some, you lose some.


----------



## mmonnin

If anyone would like to enter their PPD numbers, there is a google doc form here:
https://docs.google.com/spreadsheet/embeddedform?formkey=dDZfZHI5SGpIYVFYbG1EUVpZTm5oOUE6MQ

http://foldingforum.org/viewtopic.php?f=66&t=23845&view=unread#p238531


----------



## PR-Imagery

Lol. 6670 on p7661 = TPF 23m38s / 1544ppd / 2535 credits
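Those three numbers are consistent with each other, by the way: PPD is just the per-WU credit scaled by how many WUs per day that TPF implies, assuming the usual 100 frames per WU. A quick sketch:

```python
def ppd_from_tpf(tpf_seconds, wu_credit, frames_per_wu=100):
    """Points per day implied by a time-per-frame and a per-WU credit.
    Assumes frames_per_wu frames per work unit (100 is typical)."""
    seconds_per_wu = tpf_seconds * frames_per_wu
    return wu_credit * 86400 / seconds_per_wu

# The 6670 numbers above: TPF of 23m38s at 2535 credits per WU
round(ppd_from_tpf(23 * 60 + 38, 2535))  # 1545, matching the ~1544 PPD reported
```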


----------



## cam51037

Yeah, I agree. My PPD has been all over the place with this one.

I added a 7850 into my system to fold on, and now with this new core I'm getting fewer points than with just SMP-2 and a 670.

Now I have SMP-2, a 7850, and a 670.


----------



## mmonnin

All over the place?


----------



## cam51037

Quote:


> Originally Posted by *mmonnin*
> 
> All over the place?


Yeah, sometimes I'm getting 25k PPD, then up to 49k PPD, but always less than 60k PPD, which is what I was getting with the i5-3570K (2 cores) + 670.


----------



## bfromcolo

Well, I could use a little help with configuring this to run the beta WUs. Here is my config file:

edit to remove pass code

I am running 7.2.9, what exactly do I add?

Thanks

Edit - apparently trying to paste an XML file as text doesn't show up, so let's try it as a GIF...


----------



## anubis1127

Add this: < extra-core-args >-gpu-vendor=ati< /extra-core-args >

and this: <client-type v='beta'/>


----------



## bfromcolo

Quote:


> Originally Posted by *anubis1127*
> 
> Add this: < extra-core-args >-gpu-vendor=ati< /extra-core-args >
> 
> and this:


I saw that in the OP, but where in the file?


----------



## proteneer

under slot1
add:
< extra-core-args >-gpu-vendor=ati< /extra-core-args >
X

where X is the deviceIndex
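Putting those pieces together, a GPU slot section in the v7 config.xml might look something like this. The slot id and index values are just examples for a single-GPU machine, and `gpu-index` is my reading of the deviceIndex line above, so treat this as a sketch rather than a definitive config:

```xml
<config>
  <!-- beta flag so the client is offered Core 17 WUs -->
  <client-type v='beta'/>

  <slot id='1' type='GPU'>
    <extra-core-args>-gpu-vendor=ati</extra-core-args>
    <gpu-index v='0'/> <!-- example device index for the first GPU -->
  </slot>
</config>
```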


----------



## Kevdog

I was using SMP 3 to allow my 6950 to use one of the cores and it was working fine, but I decided to try SMP 4 to see if my PPD would start to fade, and it didn't; my SMP PPD actually went up.

So it seems I don't need to dedicate a CPU core to the GPU as I had done with the x16 core.


----------



## bfromcolo

OK, not trying to be dense, but this failed. I assume I added the two lines in the wrong place?

edit to remove pass code

Thanks


----------



## mmonnin

Try the extra core args with no spaces in between the < and >. Position is fine.

Code:


<extra-core-args>-gpu-vendor=ati</extra-core-args>

Guessing anubis got it from my original post, which had spaces; my bad there. I edited it with code tags and no spaces, and added a screenshot to the OP showing how it should look in FAHControl.


----------



## bfromcolo

OK thanks, it looks like this now:



It started without error, but since I had a Core 16 WU in progress it picked up where it left off. I guess I'll find out in the morning whether it rolls over to a Core 17 WU.

Note: I got a PM telling me I should not post my passkey in these posts, so I removed it.


----------



## mmonnin

Yep that should work.


----------



## aas88keyz

Quote:


> Originally Posted by *bfromcolo*
> 
> OK thanks, it looks like this now:
> 
> 
> 
> It started without error, but since I had a Core 16 WU in progress it picked up where it left off. I guess I'll find out in the morning whether it rolls over to a Core 17 WU.
> 
> Note: I got a PM telling me I should not post my passkey in these posts, so I removed it.


Volunteers that wanna fold under my passkey, I say "Go for it!" The more folders under my key, the better, I'd say. LOL

Keep on foldin'!


----------



## mmonnin

Someone could ruin his QRB bonus by sending back bad WUs. That's the reason for removing it.


----------



## bfromcolo

Thanks for everyone's help getting this configured. It is up and folding a 7661 now. My 7850 has a TPF of 5:25 and a PPD of 14K. It is using a full core for FAHCore_17, and I am seeing 97-99% GPU usage. I'm running 13.1 WHQL with the 2.7 SDK, so I'll update that at some point and see if anything changes.


----------



## Evil Penguin

Quote:


> Originally Posted by *bfromcolo*
> 
> Thanks for everyone's help getting this configured. It is up and folding a 7661 now. My 7850 has a TPF 5:25 and PPD of 14K. It is using a full core for FAHCore_17, and I am seeing 97 - 99% GPU usage. I'm running 13.1 WHQL with the 2.7 SDK, so I'll update that at some point and see if anything changes.


APP SDK 2.7 is the problem.








That's why I recommended the unmodified Cat. 13.1.
Catalyst 13.2 (again, not modified) performs even better for the 7000 series.


----------



## mmonnin

Yep, that's in the OP. Guessing that's a remnant of the old core. It'd be great if AMD had one unmodified driver that worked for both, for when these WUs run out.


----------



## Krusher33

The latest drivers work for this? This is exciting!


----------



## anubis1127

Quote:


> Originally Posted by *Krusher33*
> 
> The latest drivers works for this? This is exciting!


That they do, I'm using 13.2 beta 7, and got full GPU usage on my 7950. On your 6950, I think you want to use 13.1 though.


----------



## Krusher33

Yeah, downloading now...


----------



## aas88keyz

Obviously my fellow Nvidians and I are not as impressed this time; none of us have very much to say. It looks as though I get a couple thousand more points added to my PPD, so I can be happy with that, but I am not getting the really big numbers like the AMD cards. I will continue with my contribution and be happy to do it.


----------



## Krusher33

OH MY GOD, I FREAKING LOVE IT!!! Just 3% CPU usage! I can now fold SMP with more cores.


----------



## ZDngrfld

They need to hurry up and figure out the QRB. I'm chomping at the bit for a 4P setup... But if I'd end up netting more PPD using video cards, I might as well just *TRY* to be patient and run 7 or so video cards per system...


----------



## ASSSETS

Quote:


> Originally Posted by *Krusher33*
> 
> OH MY GOD I FREAKING LOVE IT!!! Just 3 % CPU usage! I can now fold SMP's with more cores.


What is your TPF? and clock?


----------



## Krusher33

TPF 4:45.

1065 mhz.


----------



## ASSSETS

Quote:


> Originally Posted by *ASSSETS*
> 
> I'm running the new beta, but stats show only the base 1600 points. It should be about 4500 as per FAHControl with the credit. How does it work?


Quote:


> Originally Posted by *mmonnin*
> 
> Should ask the core 17 Qs in the core 17 thread. And first post answers your Q.
> http://www.overclock.net/t/1367557/core-17-beta-wu/0_30


Is that an answer to my question?
Quote:


> stats-cred = 1600


----------



## mmonnin

3% of a CPU will still kill SMP, just like the 807x units do on Nvidia.


----------



## Krusher33

Yeah, but for me on the old core it was 30% usage; I had to reserve two cores for the GPU. Now I can fold SMP on 7 cores instead of just 6.


----------



## ASSSETS

Had 17% on mine, now 0-1%


----------



## Bal3Wolf

I can also confirm 13.2 beta 7 for my 7970: works great with Core 17. I'm only seeing 1-6% CPU usage, and it's staying at 1-2% most of the time. Both my GPUs are at 98-99%, and that's better than my modded drivers, lol. Time to retire from doing them.


----------



## Krusher33

Quote:


> Originally Posted by *Bal3Wolf*
> 
> I can also confirm 13.2 beta 7 for my 7970: works great with Core 17. I'm only seeing 1-6% CPU usage, and it's staying at 1-2% most of the time. Both my GPUs are at 98-99%, and that's better than my modded drivers, lol. Time to retire from doing them.


Hey dude, thank you SO MUCH for doing that though. You really helped out a lot.

I'd say I will send you a cake but it'd be a lie.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> Hey dude, thank you SO MUCH for doing that though. You really helped out a lot.
> 
> I'd say I will send you a cake but it'd be a lie.


That's OK, I don't eat cake, haha. You can donate me a 5870 cooler though.







My poor 5870's fan header blew, so I have two 120s wire-tied to the stock heatsink, lol. Ugly and hot.
Here are my stats so far; looks pretty promising, the most I've ever seen AMD get on folding, lol.


----------



## nova4005

Quote:


> Originally Posted by *Bal3Wolf*
> 
> That's OK, I don't eat cake, haha. You can donate me a 5870 cooler though.
> 
> 
> 
> 
> 
> 
> 
> My poor 5870's fan header blew, so I have two 120s wire-tied to the stock heatsink, lol. Ugly and hot.
> Here are my stats so far; looks pretty promising, the most I've ever seen AMD get on folding, lol.


Those are some nice numbers Bal3Wolf!


----------



## Bal3Wolf

Lol, yeah, 104k a day ain't bad. Too bad I mainly BOINC, but I'm cutting back on everything with high electric bills. Looking to hit 500mil on BOINC and 15mil on folding, then cut way back for a while. These work units run really cool though; gotta love that with the low CPU usage.


----------



## Krusher33

Quote:


> Originally Posted by *Bal3Wolf*
> 
> ... these work units run really cool tho gota love that with the low cpu usage.
> 
> 
> Spoiler: Warning: Spoiler!


That's my favorite part. My excitement about that has not worn off yet.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> That's my favorite part. My excitement about that has not worn off yet.


Yeah, what kind of PPD do you get out of your 6970? My poor 5870, even on 13.1, still only gets about 11-13k it seems, but 13.1 did drop the CPU usage.


----------



## Krusher33

Mine is about 16.5k last I checked.


----------



## ASSSETS

Can anyone explain to me WHY I GET ONLY 1600 credit, with a total of 3200 for a day, when it should be more than 10K?


----------



## Krusher33

What's your TPF? Are you pausing it during the day? It sounds to me like each unit is taking too long to complete.


----------



## Finrond

Quote:


> Originally Posted by *ASSSETS*
> 
> Can anyone explain to me WHY I GET ONLY 1600 credit, with a total of 3200 for a day, when it should be more than 10K?


Using a passkey?


----------



## Krusher33

Quote:


> Originally Posted by *Finrond*
> 
> Quote:
> 
> 
> 
> Originally Posted by *ASSSETS*
> 
> Can anyone explain to me WHY I GET ONLY 1600 credit, with a total of 3200 for a day, when it should be more than 10K?
> 
> 
> 
> Using a passkey?
Click to expand...

Oh hey... yeah. Make sure you're using a passkey. And if it's a new one, you have to wait until you've submitted 10 WUs on that passkey.


----------



## mmonnin

It's the QRB... you need a passkey for QRB.


----------



## ASSSETS

I have everything. I fold for TC on this slot.


----------



## Krusher33

Looking at your updates, there are a lot of zeros in between each drop. There's a deadline you must meet for the units to get the bonus points.

http://tc.folding.net/index.php?p=team&team=Big+Bang+Theorists&interval=updates&year=&month=&history=

Edit: Scratch that, I just looked at mine and mine has a bunch too...


----------



## ASSSETS

But kakaostats and extremefolding also show only 1600 per unit.


----------



## mmonnin

The point value in the log assumes a passkey/QRB; it doesn't mean you have one.

Restart the client. Exit all the way to make sure it loads the passkey. You might just want to delete the passkey and re-enter it to make sure it's correct, then exit and restart.


----------



## ASSSETS

It sounds strange, but I did.


----------



## giganews35

What's the fastest we've got these things going for NVIDIA? So far my best TPF on these units is 2:13, at about 53.4k PPD.


----------



## cam51037

Quote:


> Originally Posted by *giganews35*
> 
> What's the fastest we got these things going at for NVIDIA? So far my best TPF on these units is at 2:13 and about 53.4k ppd.


On a 580? That's awesome. The best my 670 has done is around 23k PPD, compared to 39k PPD on Core 15. :/


----------



## Bal3Wolf

Quote:


> Originally Posted by *giganews35*
> 
> What's the fastest we got these things going at for NVIDIA? So far my best TPF on these units is at 2:13 and about 53.4k ppd.


I managed to get 2:13 on my 7970 at 1150/1650; haven't tried it at 1200 MHz yet though.


----------



## giganews35

Quote:


> Originally Posted by *Bal3Wolf*
> 
> I managed to get 2:13 on my 7970 at 1150/1650; haven't tried it at 1200 MHz yet though.


That's pretty good! If these stick around and more units like them come along, having AMD folders will be crucial to winning in TC! Speaking of TC, are you on a team?? lol


----------



## Eagle07

Quote:


> Originally Posted by *Bal3Wolf*
> 
> I managed to get 2:13 on my 7970 at 1150/1650; haven't tried it at 1200 MHz yet though.


What are you running voltage wise?
Temps?

And is it constant at 2:13, or did you just see a low frame?

I was running 2:20 at 1150/1500 on my card.


----------



## Bal3Wolf

Quote:


> Originally Posted by *giganews35*
> 
> That's pretty good! If these stick around and more units like these come around, having AMD folders will be crucial to winning in TC! Speaking of TC are you on a team?? lol


No, I mainly BOINC. I'm about to give both up for a few months, but I wanna hit 15mil on folding and 500mil on BOINC before I do.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Eagle07*
> 
> What are you running voltage wise?
> Temps?
> 
> And is it constant at 2:13 or did you just see a low frame...
> 
> I was running 2:20 at 1150/1500 on my card.


That seems like the normal work unit I'm doing now. Both of my cards are at 2:13, with over 8300 points on each card for this unit.


----------



## tictoc

I wonder what CPU resources the core_17 WU is using?

I have been running BOINC on my 7970 in the same machine as my 6870 that I fold on for the TC. With the core_16 I was able to get 100% performance on both cards, but with the new WU BOINC kills the TPF on my 6870.

TPF of my 6870, while running BOINC on my 7970 = 7:53
TPF of my 6870, without running BOINC on my 7970 = 4:43


----------



## ASSSETS

Am I the only one with credit reporting problem?


----------



## mmonnin

You're getting credit.


----------



## Krusher33

Quote:


> Originally Posted by *tictoc*
> 
> I wonder what CPU resources the core_17 WU is using?
> 
> I have been running BOINC on my 7970 in the same machine as my 6870 that I fold on for the TC. With the core_16 I was able to get 100% performance on both cards, but with the new WU BOINC kills the TPF on my 6870.
> 
> TPF of my 6870, while running BOINC on my 7970 = 7:53
> TPF of my 6870, without running BOINC on my 7970 = 4:43


Boy that is weird. Are you using 13.1 drivers? And you did the GPU Vendor flag?


----------



## tictoc

Quote:


> Originally Posted by *Krusher33*
> 
> Boy that is weird. Are you using 13.1 drivers? And you did the GPU Vendor flag?


Drivers are 13.1, and the GPU vendor flag is on. I am also seeing some strange fluctuations in GPU usage. The fluctuations are totally random and don't seem to be tied to any other apps running on my machine.



After letting it run for an hour with nothing else running on my computer, it appears that every time my usage drops to zero my TPF changes drastically, and it doesn't change again until I have another spike in usage. After the latest GPU usage drop, my TPF is 6:52.

**Edit** to show the latest TPF jump. This would be awesome if only it were true.



I am going to reload my drivers and see if that changes anything. So far I would say the project looks very promising, but it is definitely still a beta and will need some tweaks.

I don't know if Stanford planned it like this, but QRB on the beta WUs is a great idea, as it gets more people to run the projects. With more people running the projects, the developers get a much better sample of results to learn from.


----------



## mmonnin

My OP said the fluctuations are normal.

And the 7-minute TPF is closer to normal.

The only problem here is relying on FAHClient for PPD/TPF, as it doesn't use enough frames.


----------



## tictoc

Quote:


> Originally Posted by *mmonnin*
> 
> My OP said the fluctuations are normal.
> 
> And the 7-minute TPF is closer to normal.
> 
> The only problem here is relying on FAHClient for PPD/TPF, as it doesn't use enough frames.


I am glad it is running correctly. I missed the "Expect dips in GPU utilization around frames" in the OP.









It looks like I will have to start folding my 7970 in TC, because the 6870, even with the QRB, is going to get smoked with these WUs.


----------



## mmonnin

7:28 TPF with my 6870 at 1 GHz. Nothing even compares to the 7xxx series with the new core.


----------



## labnjab

Just finished the BGB, so I just fired up my 570s (@ 875 MHz) in FAH to see how they do with the new core.

I'm currently showing 37k PPD per GPU.

Definitely a lot more CPU usage. I fold SMP-6 on my 3770K, and without the GPUs folding it shows 75% usage (obviously). With the new core I'm now at 100% CPU usage with SMP-6. GPUs are now averaging 98-99%, with an occasional quick spike to 0, then back up to 99%.


----------



## Gungnir

Oh yes.

7950 @ 1000/1575, Cat 13.2 b7


----------



## Krusher33

AMD should be happy. I'm seeing several of us now trying to do whatever to get a new 7900 card.









I'm about to put a lot of stuff up for sale, lol. Probably can only get a used one though.

Though I think there are going to be some rule changes, because 7900s getting 40-50k and 6900s/5800s only getting 15k just doesn't seem fair. I can't remember what someone said their 7800 was getting...


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> AMD should be happy. I'm seeing several of us now trying to do whatever to get a new 7900 card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm about to put a lot of stuff up for sale, lol. Probably can only get a used one though.
> 
> Though I think there's going to be some rule changes. Because 7900's getting 40-50k and 6900/5800's only getting 15k just doesn't seem fair. I can't remember what someone said their 7800 is getting...


lol, I'd hope so, and AMD might pull a little more out if they tweak GCN some more.


----------



## 1337LutZ

5xxx series obviously suck on this unit D:


----------



## mmonnin

5k better than a 6xxx.


----------



## 1337LutZ

Quote:


> Originally Posted by *mmonnin*
> 
> 5k better than a 6xxx.


Well my card does have a massive overclock


----------



## 47 Knucklehead

Quote:


> Originally Posted by *mmonnin*
> 
> 5k better than a 6xxx.


Hopefully my 6950 will get that as well. Not to mention my 2 580's and 3 560Ti's.


----------



## cam51037

Quote:


> Originally Posted by *Krusher33*
> 
> AMD should be happy. I'm seeing several of us now trying to do whatever to get a new 7900 card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm about to put a lot of stuff up for sale, lol. Probably can only get a used one though.
> 
> Though I think there's going to be some rule changes. Because 7900's getting 40-50k and 6900/5800's only getting 15k just doesn't seem fair. I can't remember what someone said their 7800 is getting...


I have a 7850 and it gets around 23k PPD.


----------



## ASSSETS

I do not get the freaking credit.
I checked TC people: Krusher33 is getting 5K and 1337LutZ 5K in one update.


----------



## bfromcolo

Quote:


> Originally Posted by *cam51037*
> 
> I have a 7850 and it gets around 23k PPD.


What is your 7850 clocked at? Mine's at stock 900/1200, and I'm seeing a little over 16k PPD.

Moving to 13.2b7 from 13.1 with the 2.7 SDK dropped CPU usage to 2-3% and increased PPD from 14K to 16K.


----------



## cam51037

Quote:


> Originally Posted by *bfromcolo*
> 
> What is your 7850 clocked at? Mine's at stock 900/1200, I'm seeing a little over 16k PPD.
> 
> Moving to 13.2b7 from 13.1 with the 2.7 SDK dropped CPU usage to 2-3% and increased PPD from 14K to 16K.


I've got mine right up at 1250/1450, or something around there.


----------



## Krusher33

Quote:


> Originally Posted by *mmonnin*
> 
> 5k better than a 6xxx.


Huh? You mean 5k better than a 68xx? Because I'm getting about 15k PPD when I actually let it run. I keep having to shut down for a few hours a day, and last night I shut down overnight. Hopefully I can keep it going for a good 24 hours to get an actual PPD reading.
Quote:


> Originally Posted by *47 Knucklehead*
> 
> Quote:
> 
> 
> 
> Originally Posted by *mmonnin*
> 
> 5k better than a 6xxx.
> 
> 
> 
> Hopefully my 6950 will get that as well. Not to mention my 2 580's and 3 560Ti's.
Click to expand...

The 6900s will. As for Nvidia, I'm seeing a lot of Nvidia guys going back to normal flags rather than beta; they're apparently not getting as good points with these units.
Quote:


> Originally Posted by *cam51037*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> AMD should be happy. I'm seeing several of us now trying to do whatever to get a new 7900 card.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm about to put a lot of stuff up for sale, lol. Probably can only get a used one though.
> 
> Though I think there's going to be some rule changes. Because 7900's getting 40-50k and 6900/5800's only getting 15k just doesn't seem fair. I can't remember what someone said their 7800 is getting...
> 
> 
> 
> I have a 7850 and it gets around 23k PPD.
Click to expand...

That's good news. But it's still unfair if one team has a 7900 and others can't afford one, because a 7900 doubles that.
Quote:


> Originally Posted by *ASSSETS*
> 
> I do not get freaking credit.
> I checked TC ppl. Krusher33 getting 5K and 1337LutZ 5K on one update.


Dude, I don't know. Either you're not getting the passkey bonuses yet, or... I think maybe your card is unstable and turning in bad results. Maybe try stock clocks for a full WU and see what happens. Otherwise I'm out of ideas.


----------



## Krusher33

Question: what floating-point precision is this unit using?


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Krusher33*
> 
> The 6900's will. As for Nvidia, I'm seeing a lot of Nvidia guys going back to normal flags rather than Beta. They're apparently not getting as good of points with these units.


Good to know, thanks.


----------



## mmonnin

Quote:


> Originally Posted by *ASSSETS*
> 
> I do not get the freaking credit.
> I checked TC people: Krusher33 is getting 5K and 1337LutZ 5K in one update.


Everyone else is.
Quote:


> Originally Posted by *bfromcolo*
> 
> What is your 7850 clocked at? Mine's at stock 900/1200, I'm seeing a little over 16k PPD.
> 
> Moving to 13.2b7 from 13.1 with the 2.7 SDK dropped CPU usage to 2-3% and increased PPD from 14K to 16K.


SDK 2.7...

Quote:


> Originally Posted by *Krusher33*
> 
> Huh? You meant 5k better than a 68xx? Because I'm getting about 15k PPD if I'd let it for once. I keep having to shut down for a few hours a day. And last night I shut down over night. Hopefully I will keep it going for a good 24 hrs to get an actual PPD result.
> The 6900's will. As for Nvidia, I'm seeing a lot of Nvidia guys going back to normal flags rather than Beta. They're apparently not getting as good of points with these units.
> That's good news. But still... unfair if a team has a 7900 and others can't afford it. Because they double that.
> Dude, I don't know. Either you're not getting the passkey bonuses yet or ... I think maybe you're unstable and turning in bad results. Maybe try stock clocks for a full WU and see what happens. Otherwise I'm out of ideas.


Yes a 6870 is like 8.4k.
Quote:


> Originally Posted by *Krusher33*
> 
> Question: What floating point is this unit using?


Prob FP32.


----------



## giganews35

Not sure if anything changed with the QRB. I'm now only getting a 2:16 TPF and ~46k PPD, 6-7k less PPD.

Edit: I noticed this WU is now pulling 16% CPU cycles.


----------



## Bal3Wolf

Quote:


> Originally Posted by *giganews35*
> 
> Not sure if anything changed with the QRB. I'm now only getting a 2:16 TPF and ~46k PPD, 6-7k less PPD.
> 
> Edit: I noticed this WU is now pulling 16% CPU cycles.


Nothing changed for my 7970s: one at 2:15 for 52,455 PPD and the other at 2:16 for 51,893 PPD, still with low CPU usage.


----------



## anubis1127

Nothing changed for me on my 7950 either, still ~43K ppd, TPF 2:33, 1-4% CPU utilization, mostly 1-2%.


----------



## Bal3Wolf

This is what I get at 1200/1650 with one of my cards; the other one won't run 1200.


----------



## ASSSETS

I went through the log file and found a problem every time I pause the client.
How do I fix it?


Spoiler: LOG



**********2013-03-07T13:49:24Z ********
13:49:34:FS00:Paused
13:49:34:FS00:Shutting core down
13:49:35:WU01:FS00:0x17:WARNING:Console control signal 1 on PID 4264
13:49:35:WU01:FS00:0x17:Exiting, please wait. . .
13:49:38:WU01:FS00:0x17:Completed 1579189 out of 2500000 steps (63%)
13:49:38:WU01:FS00:0x17:Lost lifeline PID 6148, exiting
13:49:38:WU01:FS00:0x17:ERROR:103: Lost client lifeline
13:49:38:WU01:FS00:0x17:Folding@home Core Shutdown: CLIENT_DIED
13:49:38:WU01:FS00:FahCore returned: INTERRUPTED (102 = 0x66)

********2013-03-07T17:22:39Z***********
17:22:50:FS00:Paused
17:22:50:FS00:Shutting core down
17:22:50:WU00:FS00:0x17:WARNING:Console control signal 1 on PID 6840
17:22:50:WU00:FS00:0x17:Exiting, please wait. . .
17:22:52:WU00:FS00:0x17:Completed 0 out of 2500000 steps (0%)
17:22:52:WU00:FS00:0x17:Lost lifeline PID 4456, exiting
17:22:52:WU00:FS00:0x17:ERROR:103: Lost client lifeline
17:22:52:WU00:FS00:0x17:Folding@home Core Shutdown: CLIENT_DIED
17:22:53:WU01:FS00:Upload 94.37%
17:22:53:WU00:FS00:FahCore returned: INTERRUPTED (102 = 0x66)
17:22:58:WU01:FS00:Upload complete
17:22:58:WU01:FS00:Server responded WORK_ACK (400)
17:22:58:WU01:FS00:Final credit estimate, 5063.00 points


----------



## Krusher33

It does that for everyone.


----------



## giganews35

Quote:


> Originally Posted by *Bal3Wolf*
> 
> Nothing changed for my 7970s 1 at 2mins 15s for 52455 and the other 2mins 16s for 51893 ppd low cpu usage still.


Hmm... I did upgrade to the new v7 client after my old one kept dying on me (local:connecting Inactive status).

Maybe my 580 is degrading... a 2:16 TPF for 46k PPD doesn't sound right either. I'm only receiving ~7300 credit now instead of ~8200 at a 2:13 TPF. I'm confused.


----------



## PandaSPUR

So glad I found this thread. Recently upgraded from my GTX 560 to a 7970 (1010Mhz Core, 1375Mhz Memory) and was so disappointed when I tried folding with it.

Read all the issues with requiring a modified driver... sigh.
Then THIS happens!

*(image removed)*

Now the 7970 is running at 100% and giving me 42k PPD. FahCore_17 has stayed below 5% CPU usage as well, so that's nice.


----------



## Bal3Wolf

Quote:


> Originally Posted by *giganews35*
> 
> Hmm.. I did upgrade to the new v7 client after my old one kept dying on me (local:connecting Inactive status)
> 
> Maybe my 580 is degrading... 2:16 tpf of 46k ppd doesn't sound right either. I'm only receiving ~7300 credit now instead of ~8200 at 2:13 tpf. I'm confused.


The TPF does go up and down; if it got really slow for a few minutes, it would bring down the credit you'd make.


----------



## ASSSETS

Quote:


> Originally Posted by *Krusher33*
> 
> It does that for everyone.


Oh... I'll run at stock. Good, I just got a new unit; will see in 9 hours.


----------



## giganews35

Quote:


> Originally Posted by *Bal3Wolf*
> 
> The TPF does go up and down; if it got really slow for a few minutes, it would bring down the credit you'd make.


I'll keep monitoring it to see if anything changes. Maybe I should just not fold on the CPU; I'm only getting 8k PPD now that one core is dedicated to this GPU.


----------



## Krusher33

Quote:


> Originally Posted by *PandaSPUR*
> 
> So glad I found this thread. Recently upgraded from my GTX 560 to a 7970 (1010Mhz Core, 1375Mhz Memory) and was so disappointed when I tried folding with it.
> 
> Read all the issues with requiring a modified driver... sigh.
> Then THIS happens!
> 
> *(image removed)*
> 
> Now the 7970 is running at 100% and giving me 42k PPD. FahCore_17 has stayed below 5% CPU usage as well, so that's nice.


Is that stock clocks?
Quote:


> Originally Posted by *ASSSETS*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> It does that for everyone.
> 
> 
> 
> Oh... I'll run at stock. Good, I just got a new unit; will see in 9 hours.
Click to expand...

NINE HOURS?! Holy moly...


----------



## Wheezo

Getting about 22k-24k on my HD 7870 @ 1160/1300, a TPF of around 3:56 (I have seen it lower, but I am using my PC right now), and a total credit reported in v7 of 6214, though it will likely be credited a bit more than that.

All in all a sweet unit, I have never seen my PC put out this much ppd.

Running stock 13.2 beta 7 drivers.


----------



## PandaSPUR

Quote:


> Originally Posted by *Krusher33*
> 
> Is that stock clocks?


Yes and no. It's not reference clocks for a 7970, but it is the clock my card came configured with, because it's an "OC Edition" MSI 7970.


----------



## Krusher33

Quit driving the middle lane and get in the fast one.


----------



## martinhal

Do I need to add the passkey in the gpu slot too or is it ok in the main client ?


----------



## mmonnin

Quote:


> Originally Posted by *giganews35*
> 
> Hmm.. I did upgrade to the new v7 client after my old one kept dying on me (local:connecting Inactive status)
> 
> Maybe my 580 is degrading... 2:16 tpf of 46k ppd doesn't sound right either. I'm only receiving ~7300 credit now instead of ~8200 at 2:13 tpf. I'm confused.


The default installation of 7.3 downloads the next WU when the current one is at 98% complete; the previous 7.2.9 version downloaded at 99%. That's about 2 minutes against your QRB. Adding the option 'next-unit-percentage' with a value of 100 will change it back to 99%, but that's the best it can do; a ticket has been raised to fix this. If you have a quick connection, setting it to 100% will be a benefit with QRB units. It may be tiny, but it helps a little.

When viewing PPD estimates in FAHControl, the PC must be left alone. Any cycles taken from the core skew the PPD estimate there much more than they do in HFM. Reading the TPF out of the log would be another way to get a better estimate.
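For anyone wanting to try it, the option can go in the client's config.xml (a sketch only; exact placement depends on your install, and the client may still cap the effective value at 99%):

```xml
<config>
  <!-- Download the next WU only when the current one hits 100%
       (the client currently caps the effective value at 99%). -->
  <next-unit-percentage v='100'/>
</config>
```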
Quote:


> Originally Posted by *martinhal*
> 
> Do I need to add the passkey in the gpu slot too or is it ok in the main client ?


Either will work. It's like a global vs. local setting.


----------



## anubis1127

Quote:


> Originally Posted by *martinhal*
> 
> Do I need to add the passkey in the gpu slot too or is it ok in the main client ?


Main client is fine, that's how I have mine, so SMP and GPU are using the same passkey.


----------



## martinhal

Quote:


> Originally Posted by *anubis1127*
> 
> Main client is fine, that's how I have mine, so SMP and GPU are using the same passkey.


Great, thanks! I'm glad I found this thread. I'm now pulling 136K PPD on my three 7970s.

Edit: Dropped my CPU cores to 4, now at 185K PPD, woot woot!


----------



## ASSSETS

I put the GPU at stock clocks, removed the CPU affinity locking, and set core priority to low.
I'm not doing anything else, just watching GPU and CPU usage. After about 10 minutes I see one core go to 30-40% in CoreTemp, but Task Manager shows no difference. Do you use something different to monitor CPU usage?


----------



## mmonnin

What's the refresh interval on CoreTemp? Task Manager doesn't seem to refresh all that fast. But no, it's pretty much always 0-3%.


----------



## Krusher33

You can change the refresh rate in task manager.

What client are you using?


----------



## ASSSETS

Client 7.2.9. I tried 7.3.6, but it did not work for me.
Both have the same 1s refresh rate.
That core worked for about 3-5 minutes and then everything went back to normal.
Maybe it was some Windows task, but I have no idea why it doesn't show up in Task Manager.
FAH shows a 00:06:13 TPF with 14% done, 9 hours to go.


----------



## tictoc

Quote:


> Originally Posted by *giganews35*
> 
> Not sure if anything changed with the QRB. I'm now only getting a 2:16 TPF and ~46k PPD, 6-7k less PPD than before.
> 
> edit: I noticed this WU is now pulling 16% CPU cycles.


Quote:


> Originally Posted by *giganews35*
> 
> Hmm.. I did upgrade to the new v7 client after my old one kept dying on me (local:connecting Inactive status)
> 
> Maybe my 580 is degrading... 2:16 tpf of 46k ppd doesn't sound right either. I'm only receiving ~7300 credit now instead of ~8200 at 2:13 tpf. I'm confused.


Quote:


> Originally Posted by *giganews35*
> 
> I'll keep monitoring it to see if anything changes. Maybe I should just not fold on the CPU I'm only getting 8k ppd now that 1 core is dedicated just for this GPU.


If you are on a new WU, these WUs could be similar to the 11292 WUs. Looking back at my HFM log for January and February, there are 3 distinct 11292 WUs. The PPD on the 11292 WUs runs anywhere from 7900 to 9600 PPD on my 6870.


----------



## mmonnin

A 3s difference in TPF is more likely down to a little user interference.


----------



## giganews35

It wasn't so much the 3-second TPF as the credit loss for just a 3-second increase: 46k PPD at 2:16 compared to 52k at 2:13. But it seems like it's back to normal.

Last credit was for 8058, and now a 2:16 TPF is reporting 52k PPD. That is in the v7 client, so it might not be accurate, but I am staying off the computer. I am actually at work, monitoring it only periodically (every 2 hours or so) with TeamViewer. I never stay on my TC machine; the only time I'm on there is when I connect with TeamViewer to monitor the client.

edit: OK, and now my 450 went to 35% usage... it's gotta be this client messing with me.


----------



## 1337LutZ

Quote:


> Originally Posted by *giganews35*
> 
> it wasn't so much the 3 second tpf but the credit loss for just a 3 second increase. 46k ppd for 2:16 compared to 52k for 2:13. But it seems like its back to normal.
> 
> Last credit was for 8058. And now a 2:16 tpf is reporting 52k ppd.. that is in the v7 client so it might not be accurate.. But I am staying off the computer. I am actually at work monitoring it only periodically (every 2 hrs or so) with TeamViewer. I never stay on my TC machine. Only time I'm on there is when I connect with TeamViewer to monitor the client.
> 
> edit: ok and now my 450 went to 35% usage... its gotta be this client messing with me.


The cores are buggy; my 5870 also had a WU where usage dropped to around 35-70% halfway through, and it fixed itself about 20% later.


----------



## mmonnin

Someone else on the internal test and I have both gotten regular Core 15 WUs, so there may be a shortage atm.


----------



## mmonnin

New project on beta, if you happen to get one. It's on psummary for the same 1600 points.
Quote:


> [19:43] new project 7662
> [19:43] an NVE version
> [19:43] of the 7661
> [19:43] virtually identical
> [19:43] its a statistical mechanical thing


----------



## ZDngrfld

Quote:


> Originally Posted by *mmonnin*
> 
> New project on beta if you happen to get one. Its on psummary for the same 1600 points.


I updated my hfm earlier tonight and noticed that they added p7662. Wonder why they haven't added p7661...


----------



## mmonnin

Both are on B and C psummary pages. You must already have it.


----------



## ZDngrfld

Quote:


> Originally Posted by *mmonnin*
> 
> Both are on B and C psummary pages. You must already have it.


Will the p7662 report the same as p7661? Just show the percentage and no PPD?


----------



## Krusher33

Yeah that's driving me nuts, lol. I keep having to log into my logmein account to see how it's folding.


----------



## ASSSETS

As promised, an update on my core 9 hours later.
For some reason, probably at the last stage of folding, the PC froze. After a reset I changed all BIOS settings to auto to rule out any other questions.
I have a checkpoint every 5 minutes, so I thought I would finish the WU to see what credit I got.
After the restart I saw a fresh unit at 0%. Here is my log with the BAD UNIT:


Spoiler: Warning: Spoiler!



04:23:51:WU00:FS00:Starting
04:23:51:WU00:FS00:Running FahCore: C:\FOLDING\FAHClient/FAHCoreWrapper.exe C:/FOLDING/FAHData/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/beta/Core_17.fah/FahCore_17.exe -dir 00 -suffix 01 -version 702 -lifeline 5532 -checkpoint 5 -gpu 0
04:23:51:WU00:FS00:Started FahCore on PID 5804
04:23:51:Started thread 7 on PID 5532
04:23:51:WU00:FS00:Core PID:5824
04:23:51:WU00:FS00:FahCore 0x17 started
04:23:51:WU00:FS00:0x17:*********************** Log Started 2013-03-08T04:23:51Z ***********************
04:23:51:WU00:FS00:0x17:Project: 7661 (Run 5, Clone 6, Gen 5)
04:23:51:WU00:FS00:0x17:Unit: 0x0000000bff3d48355134f4de6cee8154
04:23:51:WU00:FS00:0x17:CPU: 0x00000000000000000000000000000000
04:23:51:WU00:FS00:0x17:Machine: 0
04:23:51:WU00:FS00:0x17:Digital signatures verified
04:23:53:Server connection id=1 on 0.0.0.0:36330 from 127.0.0.1
04:23:53:Started thread 8 on PID 5532
04:23:54:WU00:FS00:0x17:ERROR:Guru Meditation #60c96e3f5e25c84d.fe72e20436109c78 (5296688.5296688) '00/01/checkpointState.xml'
04:23:54:WU00:FS00:0x17:WARNING:Unexpected exit() call
04:23:54:WU00:FS00:0x17:WARNING:Unexpected exit from science code
04:23:54:WU00:FS00:0x17:Saving result file logfile_01.txt
04:23:54:WU00:FS00:0x17:Saving result file checkpt.crc
04:23:54:WU00:FS00:0x17:Saving result file log.txt
04:23:54:WU00:FS00:0x17:WARNING:While cleaning up: Failed to remove directory '01': boost::filesystem::remove: The process cannot access the file because it is being used by another process: "01\checkpointState.xml"
04:23:54:WU00:FS00:0x17:Folding@home Core Shutdown: BAD_WORK_UNIT
04:23:54:WARNING:WU00:FS00:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
04:23:54:WU00:FS00:Sending unit results: id:00 state:SEND error:FAULTY project:7661 run:5 clone:6 gen:5 core:0x17 unit:0x0000000bff3d48355134f4de6cee8154
04:23:54:WU00:FS00:Uploading 6.86KiB to 171.67.108.149
04:23:54:WU00:FS00:Connecting to 171.67.108.149:8080
04:23:54:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
04:23:54:WU00:FS00:Upload complete
04:23:54:WU00:FS00:Server responded WORK_ACK (400)
04:23:54:WU00:FS00:Cleaning up
04:23:55:WU01:FS00:News: Welcome to Folding@home
04:23:55:WU01:FS00:Assigned to work server 171.67.108.149
04:23:55:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:"Radeon HD 5870 (Cypress)" from 171.67.108.149
04:23:55:WU01:FS00:Connecting to 171.67.108.149:8080
04:23:55:WU01:FS00:Downloading 1.62MiB
04:23:57:WU01:FS00:Download complete
04:23:57:WU01:FS00:Received Unit: id:01 state:DOWNLOAD error:NO_ERROR project:7661 run:18 clone:0 gen:8 core:0x17 unit:0x00000018ff3d48355134fd8daa3058e3
04:23:57:WU01:FS00:Starting
04:23:57:WU01:FS00:Running FahCore: C:\FOLDING\FAHClient/FAHCoreWrapper.exe C:/FOLDING/FAHData/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/beta/Core_17.fah/FahCore_17.exe -dir 01 -suffix 01 -version 702 -lifeline 5532 -checkpoint 5 -gpu 0
04:23:57:WU01:FS00:Started FahCore on PID 5456
04:23:57:Started thread 9 on PID 5532
04:23:57:WU01:FS00:Core PID:5420
04:23:57:WU01:FS00:FahCore 0x17 started
04:23:57:WU01:FS00:0x17:*********************** Log Started 2013-03-08T04:23:57Z ***********************
04:23:57:WU01:FS00:0x17:Project: 7661 (Run 18, Clone 0, Gen 8)
04:23:57:WU01:FS00:0x17:Unit: 0x00000018ff3d48355134fd8daa3058e3
04:23:57:WU01:FS00:0x17:CPU: 0x00000000000000000000000000000000
04:23:57:WU01:FS00:0x17:Machine: 0
04:23:57:WU01:FS00:0x17:Reading tar file state.xml
04:23:57:WU01:FS00:0x17:Reading tar file system.xml
04:23:57:WU01:FS00:0x17:Reading tar file integrator.xml
04:23:57:WU01:FS00:0x17:Reading tar file core.xml
04:23:57:WU01:FS00:0x17:Digital signatures verified
04:24:11:WU01:FS00:0x17:Completed 0 out of 2500000 steps (0%)
04:36:42:WU01:FS00:0x17:Completed 50000 out of 2500000 steps (2%)
04:39:19:Server connection id=2 on 0.0.0.0:36330 from 127.0.0.1
04:39:19:Started thread 10 on PID 5532


----------



## joker927

Anyone able to keep 95%+ GPU usage while running SMP on all cores with Core 17 on the GPU? I'm averaging ~93%, with long dips into the 90% zone. I'm folding on all CPU cores via a Linux VM, and the GPU is a 7950.


----------



## PR-Imagery

I wasn't folding on the CPU, but I had a max load on all cores doing a render and kept 99% on both my cards, a 6670 and a 5770.


----------



## mmonnin

Quote:


> Originally Posted by *ZDngrfld*
> 
> Will the p7662 report the same as p7661? Just show the percentage and no PPD?


Most likely. I just got one on my 6870. proteneer mentioned turning off 7661 so only 7662 WUs would be assigned.

What drivers joker? 13.2 beta7 are said to work best for 7xxx.


----------



## joker927

Quote:


> Originally Posted by *PR-Imagery*
> 
> I wasn't folding on the CPU, but I had a max load on all cores doing a render and kept 99% on both my cards, a 6670 and a 5770.


Strange, I don't even get 99% with nothing running. Only 98%.


----------



## PandaSPUR

Quote:


> Originally Posted by *joker927*
> 
> Anyone able to keep 95%+ GPU usage while running SMP on all cores with Core 17 on the GPU? I'm averaging ~93%, with long dips into the 90% zone. I'm folding on all CPU cores via a Linux VM, and the GPU is a 7950.


I'm having similar issues. I run a Linux VM for folding, and as long as that's running, Core 17 will only utilize 60-70% of the GPU.

One fix is to set the Core 17 priority to "Normal" instead of the default "Low", but then it restarts and resets to "Low" whenever it starts a new WU.

The other fix is to tell the VM to only use 3 cores, but then I feel like that's a waste.

Is there any way to tell Core 17 to default to "Normal" priority? It only uses <5% CPU from what I see, so I don't care if it's left on "Normal".
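One workaround I can think of is re-applying the priority from a script on a schedule. A rough Windows sketch via wmic — the process name `FahCore_17.exe` and the numeric priority values are assumptions about the install, and you'd need to rerun it whenever a new WU starts:

```python
import subprocess

# wmic's setpriority call wants a numeric priority class; 32 is "normal".
PRIORITY = {"idle": 64, "below normal": 16384, "normal": 32,
            "above normal": 32768, "high": 128}

def setpriority_cmd(proc_name="FahCore_17.exe", level="normal"):
    """Build the wmic command line that bumps the core's priority class."""
    return ["wmic", "process", "where", f'name="{proc_name}"',
            "CALL", "setpriority", str(PRIORITY[level])]

# Run it on Windows, e.g. from Task Scheduler so new WUs get caught:
# subprocess.run(setpriority_cmd())
```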


----------



## mmonnin

The correct thing to do would be to run SMP3. SMP4 with only 3 cores open will take a larger hit to PPD than lowering it to SMP3.


----------



## PandaSPUR

Quote:


> Originally Posted by *mmonnin*
> 
> The correct thing to do would be to run SMP3. SMP4 with only 3 cores open will take a larger hit to PPD than lowering it to SMP3.


Right, that's what I'm actually doing; I should have been more clear.

But it's still such a waste ): Because now it's just bouncing my 4 cores between 70-90% constantly.


----------



## Krusher33

Quote:


> Originally Posted by *ASSSETS*
> 
> As promised, an update on my core 9 hours later.
> For some reason, probably at the last stage of folding, the PC froze. After a reset I changed all BIOS settings to auto to rule out any other questions.
> I have a checkpoint every 5 minutes, so I thought I would finish the WU to see what credit I got.
> After the restart I saw a fresh unit at 0%. Here is my log with the BAD UNIT:
> 
> *(log snipped)*


How long before your next one is done after doing the BIOS reset?


----------



## ASSSETS

I finished one; it went through fine. Waiting...
OK, I got 1600 again.
It cannot be a problem with the passkey, because in that case it would not show up in TC.
Now running a new 7662 WU. Another 10 hours for a result...


----------



## Krusher33

Why would it not show up in TC?


----------



## TheBadBull

I seem to have the same problem as asssets.

My 5770 usually gets about 7.5k ppd, but with this unit it's dropped to 4.4k. It's a pretty big drop but then suddenly I notice that I don't get any bonuses either.

Finishing a 3rd one of these in 50 minutes.

my TPF right now is 11mins 37 secs.


----------



## mmonnin

ASSSETS means that if there was no passkey, then points would not show up in the AMD Cat for TC.

The other option is too many failed WUs dropping the success rate below 80%, in which case no bonus will be applied. That, or fewer than 10 WUs, but that's probably not the case.

And for these WUs, if you select Effective Rate instead of Last 3 Frames, HFM can calculate PPD etc. It lowered the PPD of my other machines a bit.
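For reference, the PPD math these monitors are doing works out roughly like this — a simplified sketch that assumes 100 frames per WU and ignores any QRB:

```python
def ppd_from_tpf(base_credit, tpf_seconds, frames=100):
    """Rough points-per-day from time-per-frame, ignoring any bonus."""
    wu_seconds = tpf_seconds * frames   # time to finish one WU
    wus_per_day = 86400 / wu_seconds    # WUs completed per day
    return base_credit * wus_per_day

# e.g. a 1600-point unit at a 2:16 TPF (136 s):
# ppd_from_tpf(1600, 136) -> about 10165 PPD before any bonus
```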


----------



## TheBadBull

Hmm, my TPF seems to have halved after I updated drivers. The WU will be finished in 8 minutes.


----------



## Krusher33

That's why I'm asking: if the passkey is taken out then there are no bonus points, which is what we're trying to get him.

But I guess it's possible he has turned in a lot of bad units without knowing it? Would it take 10 good units before he gets bonus points again, or how does that work?


----------



## TheBadBull

As far as I know I haven't turned in any bad units in a while. What's the best way of checking anyways?


----------



## mmonnin

Quote:


> Originally Posted by *Krusher33*
> 
> That's why I'm asking because if passkey is taken out then there's no bonus pts which is what we're trying to get him some.
> 
> But I guess it's possible he has turned in a lot of bad units and not know it? Would it take 10 good units before he gets bonus pts again or how does that work?


If there is no passkey then it won't show up on the TC stats page.

A mod from FF mentioned ASSSETS' passkeys are qualified, except for one which has 3 WUs returned on it.

FF mods can check on passkeys if you PM one.


----------



## ASSSETS

I fold GPU-only for TC, and I've been using this passkey for the last 4 months, since I started with TC.


----------



## aas88keyz

This might be a stupid guess, but even if he had been using a passkey for his GPU for the past four months, he probably wouldn't have noticed if it was wrong, since there shouldn't have been a QRB for his GPU anyway. A good day to notice the passkey wasn't working would be the day GPUs started qualifying for QRBs, like with Core 17. Please go easy on me, as I may not know any better.


----------



## Krusher33

That's what I'm saying. Under the old system we got points even for bad units, didn't we?


----------



## ASSSETS

Quote:


> Originally Posted by *aas88keyz*
> 
> This might be a stupid guess, but even if he had been using a passkey for his GPU for the past four months, he probably wouldn't have noticed if it was wrong, since there shouldn't have been a QRB for his GPU anyway. A good day to notice the passkey wasn't working would be the day GPUs started qualifying for QRBs, like with Core 17. Please go easy on me, as I may not know any better.


That was on my mind too. But how do we track it? I sent mmonnin my passkey; maybe he can get some answers...


----------



## ASSSETS

I had my GPU overclocked, but the log did not show any errors (I have the detailed log, verbosity 5) and folding was stable.


----------



## mmonnin

ChelseaOilman checked that passkey and it shows only 8 WUs completed so far. I guess the previous WUs didn't count.


----------



## Krusher33

Asssets, you sure you're using the same passkey as you have been using in TC?


----------



## gboeds

if all he has ever folded on that passkey are GPU WUs, those do not count toward the 10 needed for bonus unless they are QRB units, no?


----------



## TheBadBull

That might explain a bit.


----------



## Krusher33

Quote:


> Originally Posted by *gboeds*
> 
> if all he has ever folded on that passkey are GPU WUs, those do not count toward the 10 needed for bonus unless they are QRB units, no?


I didn't have 10 previous QRB units on my GPU passkey and I got the bonus pts right away I think. (Not that I'm aware of anyways)


----------



## ASSSETS

I made this passkey for TC GPU only, and have folded only AMD GPU units on it, which did not have bonuses. I sent a PM to Donkey, the folding editor, to check the passkey they have. This is the only passkey I use, because I fold only this card and nothing more. The passkey is in the GPU slot, not global on the client.
As we said before, if the passkey is wrong on my client, my points should not show on the TC page, or the TC page is showing the wrong numbers.


----------



## ASSSETS

Quote:


> Originally Posted by *mmonnin*
> 
> ChelseaOilman checked that passkey and it shows only 8 WUs completed so far. I guess previous WUs didn't count.


This looks correct to me, as it's only the 8 finished new beta units.


----------



## Krusher33

Well... let's see if you get the bonus points after 2 more units.

TheBadBull, you may need to check like ASSSETS did.

But I don't understand why I got the bonus points so quickly, as the passkey I'm using has only ever been used for TC's AMD Cat.


----------



## ASSSETS

I got a reply from Donkey that they have the same passkey I'm using, which FF reported with 8 WUs.
My thinking is that the old AMD core was so old it did not require a passkey, and the WUs were not recorded somewhere on the FF side. But they were still in the Stanford logs, as our folding editor was able to track folded points by passkey when we had a statistics mess in TC over the first few days of each month and they were fixing points.


----------



## Bal3Wolf

Gotta love Core 17: I have 95k for today so far, with another 45-55K to finish the day.


----------



## aas88keyz

Well, I just started my GPU folding again after giving it a break for a day. Now I am in a dry spell, as it is no longer giving me any more WUs. Hopefully things change for me this weekend; I would hate to have to switch to regular WUs.


----------



## mmonnin

Quote:


> Originally Posted by *Krusher33*
> 
> I didn't have 10 previous QRB units on my GPU passkey and I got the bonus pts right away I think. (Not that I'm aware of anyways)


The passkey I used for AMD Cat I got just for that and have never folded on it with anything else but I immediately got QRB. Not sure what the difference is for ASSSETS but servers see 8 WUs atm.

Quote:


> Originally Posted by *ASSSETS*
> 
> This looks correct to me, as it's only the 8 finished new beta units.


2 more and you should be good then.

Quote:


> Originally Posted by *aas88keyz*
> 
> Well, I just started my GPU folding again after giving it a break for a day. Now I am in a dry spell, as it is no longer giving me any more WUs. Hopefully things change for me this weekend; I would hate to have to switch to regular WUs.


Yes, it seems the 7662 WUs have run dry. There are reports of several people on the internal beta not being able to send WUs right away and getting non-beta WUs. You may see something like this while waiting for a WU:

21:53:57:WU00:FS00:Upload complete
21:53:57:WU00:FS00:Server responded PLEASE_WAIT (464)
21:53:57:WARNING:WU00:FS00:Failed to send results, will try again later


----------



## giganews35

Assign server down? Even my SMP won't pick up any units.
Quote:


> 23:00:04:ERROR:WU01:FS01:Exception: Failed to connect to 171.67.108.149:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.


----------



## ZDngrfld

Quote:


> Originally Posted by *giganews35*
> 
> Assign server down? Even my SMP won't pick up any units.


That's what I'm dealing with right now... I'm assuming that assignment server is down. Same one my clients are connecting to.


----------



## tictoc

I also wasn't getting the QRB on my TC passkey, because I had only folded on my 6870 with that passkey. I noticed it after the first unit, and I just assumed that Core 16 WUs did not count towards the 10 completed SMP/GPU QRB units. I switched to one of my other passkeys, and I am now getting bonus points.

I am almost halfway to my average TC points for the month, and I have only been running the beta WU's for 1 day.


----------



## mmonnin

Quote:


> Originally Posted by *giganews35*
> 
> Assign server down? Even my SMP won't pick up any units.


Yes I had to take mine off advanced to get a WU.


----------



## cam51037

So let me get a couple things straight here:

-Do you need a new passkey for GPU QRB?
-How much does it increase PPD with GPU QRB?
-I'm also hitting down servers, what a pain!

Thanks so much!


----------



## 47 Knucklehead

Quote:


> Originally Posted by *giganews35*
> 
> Assign server down? Even my SMP won't pick up any units.


Looking that way. I got 3 GPU's that are sitting there twiddling their thumbs with nothing to do.


----------



## tictoc

It looks like the server went down at around 11:00 PST, but it was "Accepting" as of 3:10 PST.

Server Status


----------



## jesusboots

The server I was downloading from has run dry.


----------



## mmonnin

Remove the beta flag and restart the slot to get another WU. Or wait for whenever the client sends you back to another server.

Cam, the QRB depends on how fast the WU is completed. Without the QRB it will be 1600 points; with it, it can be several times more than that, with the same TPF.
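If you're curious, the bonus formula Stanford has published is roughly base × sqrt(k × deadline / elapsed), floored at the base credit. A sketch under that assumption — the k value below is made up, since k is a per-project constant:

```python
import math

def qrb_points(base_points, k, deadline_days, elapsed_days):
    """Rough shape of the quick-return bonus: faster returns earn more,
    but you never fall below the base credit."""
    multiplier = math.sqrt(k * deadline_days / elapsed_days)
    return base_points * max(1.0, multiplier)

# Same WU, same 1600 base credit -- only the return speed changes:
# qrb_points(1600, 2.0, 10, 10) -> ~2262.7  (finished right at deadline)
# qrb_points(1600, 2.0, 10, 1)  -> ~7155.4  (finished 10x faster)
```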


----------



## Krusher33

That's what I did. I just removed the Beta flag, restarted client, and it just moved right along with the older units. The beta one is still trying to send though.

Edit: Oh crud... now I've got the 30% GPU usage issue thanks to the new drivers.


----------



## cam51037

Quote:


> Originally Posted by *mmonnin*
> 
> Remove the beta flag and restart the slot to get another WU. Or wait for whenever the client sends you back to another server.
> 
> Cam, QRB depends on how fast the WU is completed. No QRB will be 1600 points, with can be several times more than that. But with the same TPF.


Yeah haven't gotten a QRB yet, all my units have only been worth 1600 points.


----------



## jesusboots

Quote:


> Originally Posted by *cam51037*
> 
> Yeah haven't gotten a QRB yet, all my units have only been worth 1600 points.


10 units. Then you get the bonus


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> That's what I did. I just removed the Beta flag, restarted client, and it just moved right along with the older units. The beta one is still trying to send though.
> 
> Edit: Oh crud... now I've got the 30% GPU usage issue thanks to the new drivers.


Lol, 24% here for my 2x 7970s. Dang it, I was gonna hit over 150k today.

Got beta units on some of my other cards again so they are back up and sending.


----------



## Krusher33

Dog gum it.

I'm just going to leave it on the modded driver.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> Dog gum it.
> 
> I'm just going to leave it on the modded driver.


lol, kinda a double-edged sword: the officials are good for core 17, the modded are good for core 16. I like the officials for now, at least for my 7970 cards.


----------



## ASSSETS

What a mess LOL


----------



## ASSSETS

Quote:


> Originally Posted by *cam51037*
> 
> Yeah haven't gotten a QRB yet, all my units have only been worth 1600 points.


So maybe we can see some improvement on our i7 folder for TC


----------



## mmonnin

Quote:


> Originally Posted by *Krusher33*
> 
> Dog gum it.
> 
> I'm just going to leave it on the modded driver.


Can't have both. There's a reason why the first post mentions that unmodified drivers work best.


----------



## Krusher33

I ended up ditching the modded driver because it was buggy with Guild Wars 2. I'm back on 12.8.


----------



## mmonnin

That still counts as can't have both.


----------



## Krusher33

I think you misunderstood what I meant.


----------



## mmonnin

I think you misunderstand what's good for AMD folding. SDK 2.8, which was exactly what the modded drivers were trying to avoid for core 16, is good for core 17.


----------



## martinhal

I'm using 13.2 beta unmodded and pulling 42k PPD at stock. GPU usage is at 99%, with the CPU chugging along at 30k PPD.


----------



## Krusher33

Quote:


> Originally Posted by *mmonnin*
> 
> I think you misunderstand whats good for AMD folding. SDK 2.8 which was exactly what the modded drivers were trying to avoid with core 16 is good for core 17.


Right. But I'm in the TC for the AMD cat. I have to be ready for anything. If these Beta units stops on me again, I have to be quick at switching to the normal units.

So I'm sticking with 12.8 driver till this comes out of Beta.


----------



## NBrock

This is madness. I was getting around 45-50k ppd on my FX-8350 @ 4.9 GHz in native linux. Right now in Windows 8 with the FX-8350 @ 4.9 and my HD 7970 @1050 MHz I am getting 65k ppd

Just so you know in Windows my FX only gets 15-20k ppd depending on the project. I have seen it get as high as 32k ppd.


----------



## NBrock

It's about time we get a core that runs well on our stuff.







I would like to see some 7990 ppd or some multi 7970 ppd.


----------



## Bal3Wolf

Quote:


> Originally Posted by *NBrock*
> 
> It's about time we get a core that runs well on our stuff.
> 
> 
> 
> 
> 
> 
> 
> I would like to see some 7990 ppd or some multi 7970 ppd.



I can pull between 95-110K with my 2x 7970s at 1150.


----------



## Krusher33

Just some numbers to give...

When I was on 13.1 drivers, I was getting 16.9k PPD, 4:43 TPF.
I'm on 12.8 drivers now and getting 16.3k PPD, 4:54 TPF.


----------



## WLL77

Hey mmonnin! Thanks for posting the info!
Since I switched to beta my PPD has gone from 12K to 38K.
Am running: 7870 (clocked at 1130/1220) - beta 3.2 version 6 drivers at 98% usage.
My 2500k is now running A4 with one or 2 cycles going to the card.








TPF on gpu is 3:47








W


----------



## cam51037

Quote:


> Originally Posted by *WLL77*
> 
> Hey mmonnin! Thanks for posting the info!
> Since I switched to beta my PPD has gone from 12K to 38K.
> Am running: 7870 (clocked at 1130/1220) - beta 3.2 version 6 drivers at 98% usage.
> My 2500k is now running A4 with one or 2 cycles going to the card.
> 
> 
> 
> 
> 
> 
> 
> 
> TPF on gpu is 3:47
> 
> 
> 
> 
> 
> 
> 
> 
> W


Wow, that's awesome.

But for some reason, with the Catalyst 13.2 Beta 7 drivers, my 7850 @ 1050/1450 is only getting at most 20k PPD.


----------



## mmonnin

Quote:


> Originally Posted by *WLL77*
> 
> Hey mmonnin! Thanks for posting the info!
> Since I switched to beta my PPD has gone from 12K to 38K.
> Am running: 7870 (clocked at 1130/1220) - beta 3.2 version 6 drivers at 98% usage.
> My 2500k is now running A4 with one or 2 cycles going to the card.
> 
> 
> 
> 
> 
> 
> 
> 
> TPF on gpu is 3:47
> 
> 
> 
> 
> 
> 
> 
> 
> W


Good to hear. Be sure to post any problems as it is beta.

We need some new additions here at OCN


----------



## Bal3Wolf

Gotta love these new cores; hope the points stay this high once it's out of beta. Two 7970s at 1150 net you over 100k a day.


----------



## ASSSETS

I think every AMD folder has asked himself this question: what is my next AMD card?
Can you clock the core higher on a 7950 than a 7970 and get better PPD?
Is it still the core clock that makes the difference, or something else?


----------



## Krusher33

Quote:


> Originally Posted by *Bal3Wolf*
> 
> gota love these new cores hope points stay this high when out of beta 2 7970s at 1150 net you over 100k a day.


Going into temporary retirement sooner than you thought, eh?

Quote:


> Originally Posted by *ASSSETS*
> 
> I think every AMD folder asked himself this question. What is my next AMD card?
> Can you clock core higher on 7950 than 7970 and get better PPD?
> Is it still core clock that make difference or something else?


Meh, I'm just going for a 7970.


----------



## Bal3Wolf

Quote:


> Originally Posted by *ASSSETS*
> 
> I think every AMD folder asked himself this question. What is my next AMD card?
> Can you clock core higher on 7950 than 7970 and get better PPD?
> Is it still core clock that make difference or something else?


Clock alone won't be all you need to get more PPD. The 7970 has more Compute Units, though not many more, so I'm not sure it matters that much. I can tell you at 1150/1650 my 7970s stay around 8000-8300 per WU, for 49-52k each a day.

It looks like a 7950 is 10-25% slower than a 7970 in compute perf; even overclocking can't catch the 7970, it seems.
http://www.anandtech.com/show/5476/amd-radeon-7950-review/15
Quote:


> Originally Posted by *Krusher33*
> 
> Going into temporary retirement sooner than you thought, eh?
> Meh, I'm just going for a 7970.


Lol, that's why I'm pushing it harder, so I can before it gets hot here. I'm loving seeing over 100k today; I think I will end with something like 140k.


----------



## ASSSETS

Quote:


> Originally Posted by *Krusher33*
> 
> Meh, I'm just going for a 7970.


Just because it is top card?


----------



## Gungnir

Quote:


> Originally Posted by *ASSSETS*
> 
> Just because it is top card?


The compute unit advantage the 7970 has over the 7950 is far more tangible in compute than in gaming; while the two are very close in games, the 7970 is much faster in compute (IIRC, 7950 Boost is ~3-3.5 TFLOPS, while the 7970Ghz is ~4.3 TFLOPS).
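Those peak figures fall out of shaders x 2 FLOPs per clock (fused multiply-add) x clock speed. A quick sketch, using the reference shader counts and clocks:

```python
def sp_tflops(shaders: int, clock_mhz: float) -> float:
    """Peak single-precision TFLOPS: shaders x 2 ops/clock (FMA) x clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

sp_tflops(2048, 1050)  # 7970 GHz Edition -> ~4.3
sp_tflops(1792, 925)   # 7950 Boost      -> ~3.3
```

Real folding throughput won't hit these peaks, but the ratio lines up with the 7950-vs-7970 gap people are reporting.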


----------



## Krusher33

Quote:


> Originally Posted by *ASSSETS*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> Meh, I'm just going for a 7970.
> 
> 
> 
> Just because it is top card?
Click to expand...

No, what Bael and Gungnir said. We're seeing 7970's doing 50k+ and 7950's doing 40k+. Even if those numbers were 7970's OC'd to crazy clocks and 7950's at stock, I don't see OC'ing a 7950 to crazy clocks making up the 10k+ PPD.

And so... I'm aiming for a 7970.


----------



## ASSSETS

Quote:


> Originally Posted by *Gungnir*
> 
> The compute unit advantage the 7970 has over the 7950 is far more tangible in compute than in gaming; while the two are very close in games, the 7970 is much faster in compute (IIRC, 7950 Boost is ~3-3.5 TFLOPS, while the 7970Ghz is ~4.3 TFLOPS).


OK, this is a good answer. Do not have time for gaming, maybe 2 times a month.


----------



## Prymus

Aww, early retirement for the unlocked 6950.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> No, what Bael and Gungir said. We're seeing 7970's doing 50k + and 7950's doing 40k +. Even if those numbers where 7970's being OC'd to crazy clocks and 7950's were at stocks... I don't see how OC'ing the 7950's to crazy clocks to make up the 10k + PPD.
> 
> And so... I'm aiming for a 7970.


A lot of 7970s will do 1150 on air, and 7950s will do 1200-1250; I don't think 100MHz will be enough to catch a 7970 at 1150.


----------



## tictoc

My 7970 is currently clocked at 1175/1350, and I am seeing about 50k PPD. My only problem was that FAHControl kept switching back to my 6870 slot, so I've lost 3 WUs that were over 75%. Looked back at my install and realized I had forgotten to add the gpu-index to my config.xml the last time I re-installed FAHControl.

If I have 2 more days on these WUs, I will surpass my 6870's monthly best of 216,000.


----------



## Bal3Wolf

Lol, I've got some weird results. I decided to clock my GPUs back to 1000/1650 with stock voltages to see how they perform and run temp-wise, and I got this. A bug, I'm sure, but funny.


----------



## mmonnin

http://www.overclock.net/t/1367557/core-17-beta-wu/0_30#post_19439110
TPF 0.25s

And this is why FAHControl shouldn't be used for PPD.


----------



## Caleal

So what kind of PPD are GTX680s getting on p7662 WUs?

I'm hovering on the 50-52k range with my 580 @975mhz.
I've been experimenting with my OC though, so failed some WUs.

My GTX470 @825mhz is running around 32.5k ppd


----------



## Bal3Wolf

Quote:


> Originally Posted by *mmonnin*
> 
> http://www.overclock.net/t/1367557/core-17-beta-wu/0_30#post_19439110
> TPF 0.25s
> 
> And this is why FAHControl shouldn't be used for PPD.


Yeah, well, I think I know how to reproduce it: it happens if you pause close to the end of a frame; on resume it shows that quick frame time.
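That would explain it: if the client estimates TPF from just the last two frame timestamps, a pause/resume right before a frame boundary logs two frames a fraction of a second apart. A rough sketch with made-up timestamps, alongside the all-frames average that smooths it out:

```python
from datetime import datetime, timedelta

def tpf_last_frame(frame_times):
    """TPF from only the two most recent frame log entries (the buggy view)."""
    return frame_times[-1] - frame_times[-2]

def tpf_all_frames(frame_times):
    """TPF averaged over every observed frame; one odd gap barely matters."""
    return (frame_times[-1] - frame_times[0]) / (len(frame_times) - 1)

start = datetime(2013, 2, 20, 6, 0, 0)
# Four normal ~5-minute frames, then a pause/resume right before a
# boundary logs the next frame only 0.25 s after the previous one:
times = [start + timedelta(minutes=5 * i) for i in range(5)]
times.append(times[-1] + timedelta(seconds=0.25))

tpf_last_frame(times)  # -> 0.25 s, hence the absurd PPD
tpf_all_frames(times)  # -> ~4 min, a sane estimate
```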


----------



## ASSSETS

I started to get bonus after 10 WU, time to overclock.


----------



## jesusboots

Quote:


> Originally Posted by *Caleal*
> 
> So what kind of PPD are GTX680s getting on p7662 WUs?
> 
> I'm hovering on the 50-52k range with my 580 @975mhz.
> I've been experimenting with my OC though, so failed some WUs.
> 
> My GTX470 @825mhz is running around 32.5k ppd


24k for a [email protected] 1250. I know it's not a huge indicator, but it's better than nothing. My 580 is getting what yours is.


----------



## NBrock

Quote:


> Originally Posted by *jesusboots*
> 
> 24k for a [email protected] 1250 I know its not a huge indicator. But its better than nothing. My 580 is getting what yours is.


So it seems the 5xx series does a good bit better with all the compute power they had vs the 6xx series.


----------



## Caleal

Quote:


> Originally Posted by *jesusboots*
> 
> 24k for a [email protected] 1250 I know its not a huge indicator. But its better than nothing. My 580 is getting what yours is.


Quote:


> Originally Posted by *NBrock*
> 
> So it seems the 5xx series does a good bit better with all the compute power they had vs the 6xx series.


I'm glad I held off on dropping a grand for a new EVGA GTX680 Classified, water block, and EVbot for my TC rig, cuz I'd have been WAY pissed off about now.









I do need to acquire a new card for the TC soon though, my poor little reference GTX580 seems to be showing signs of stress after nearly a year of folding 24/7 at 975+mhz @1.175v


----------



## jesusboots

I have been eyeing the titan. Though Donkey has already told me that I could not use it in the tc.









So I will probably replace my 580, with the exact same card. I have been running at 950 for the last 4-5 months.


----------



## Donkey1514

Quote:


> Originally Posted by *jesusboots*
> 
> I have been eyeing the titan. Though Donkey has already told me that I could not use it in the tc.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> So I will probably replace my 580, with the exact same card. I have been running at 950 for the last 4-5 months.


I am just as mad as you because I would love to use my 690.....


----------



## k4m1k4z3

Quote:


> Originally Posted by *jesusboots*
> 
> I have been eyeing the titan. Though Donkey has already told me that I could not use it in the tc.


I have not seen any numbers yet... how does that card do with folding?
Too bad it won't be good for the TC though


----------



## mmonnin

Not even as good as 2x 580s.


----------



## jesusboots

Quote:


> Originally Posted by *k4m1k4z3*
> 
> I have not seen any numbers yet... how does that card do with folding?
> Too bad it wont be good for the TC though


Nothing. All rumor and speculation.


----------



## Krusher33

I have a 224 pts submission at the 3 AM update on the Extreme Overclocking stats site? When I went to bed at midnight, I had just been credited for a unit at 5500 pts, and I was on a unit that still had 7 hours to go.


----------



## mmonnin

p4729 or an error?


----------



## Krusher33

Apparently an error; first one I've received since starting Beta units:


Spoiler: Warning: Spoiler!



Code:



Code:


06:19:27:WU01:FS00:0x17:Completed 350000 out of 2500000 steps (14%)
06:26:49:WU01:FS00:0x17:NaNs found .. trying to pinpoint the NaN step via binary search... (this might take a while) 
06:26:49:WU01:FS00:0x17:Trying to isolate NaN....searching [290275,365266]
06:35:16:WU01:FS00:0x17:Trying to isolate NaN....searching [327771,365266]
06:39:30:WU01:FS00:0x17:Trying to isolate NaN....searching [346519,365266]
06:41:37:WU01:FS00:0x17:Trying to isolate NaN....searching [355893,365266]
06:42:41:WU01:FS00:0x17:Trying to isolate NaN....searching [360580,365266]
06:43:13:WU01:FS00:0x17:Trying to isolate NaN....searching [362924,365266]
06:43:29:WU01:FS00:0x17:Trying to isolate NaN....searching [364096,365266]
06:43:37:WU01:FS00:0x17:Trying to isolate NaN....searching [364682,365266]
06:43:41:WU01:FS00:0x17:Trying to isolate NaN....searching [364975,365266]
06:43:43:WU01:FS00:0x17:Trying to isolate NaN....searching [365121,365266]
06:43:44:WU01:FS00:0x17:Trying to isolate NaN....searching [365194,365266]
06:43:44:WU01:FS00:0x17:Trying to isolate NaN....searching [365231,365266]
06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365249,365266]
06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365258,365266]
06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365263,365266]
06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365265,365266]
06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365266,365266]
06:43:45:WU01:FS00:0x17:Unable to pinpoint NaN - likely to be non-deterministic, dumping results
06:43:45:WU01:FS00:0x17:ERROR:exception: NaNs detected in positions.0 0
06:43:45:WU01:FS00:0x17:Saving result file logfile_01.txt
06:43:45:WU01:FS00:0x17:Saving result file log.txt
06:43:45:WU01:FS00:0x17:[email protected] Core Shutdown: BAD_WORK_UNIT
06:43:46:WARNING:WU01:FS00:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
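For anyone curious what the core is doing in that log: it bisects the step range, re-checking for NaNs in the positions, to pin down the first bad step, and reports "non-deterministic" when the replay stops reproducing the fault (typical of a marginal overclock rather than a bad WU). The core's exact probe logic isn't published, but the general technique looks like this; `has_nan_at` is a made-up checker:

```python
def isolate_nan_step(lo, hi, has_nan_at):
    """Bisect [lo, hi] for the first step whose recomputed positions
    contain a NaN. Returns None if no probe ever reports a NaN, i.e.
    the replay no longer reproduces the fault ("non-deterministic")."""
    found = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if has_nan_at(mid):
            found = mid
            hi = mid - 1  # NaN already present: look earlier
        else:
            lo = mid + 1  # still clean: look later
    return found

# Hypothetical run where positions first go NaN at step 365000:
isolate_nan_step(290275, 365266, lambda step: step >= 365000)  # -> 365000
```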


----------



## labnjab

Quote:


> Originally Posted by *NBrock*
> 
> So it seems the 5xx series does a good bit better with all the compute power they had vs the 6xx series.


I'm getting 38k ppd on each of my 570's at 875 mhz


----------



## Caleal

Quote:


> Originally Posted by *Krusher33*
> 
> Apparently an error; first one I've received since starting Beta units:
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> Code:
> 
> 
> 
> Code:
> 
> 
> 06:19:27:WU01:FS00:0x17:Completed 350000 out of 2500000 steps (14%)
> 06:26:49:WU01:FS00:0x17:NaNs found .. trying to pinpoint the NaN step via binary search... (this might take a while)
> 06:26:49:WU01:FS00:0x17:Trying to isolate NaN....searching [290275,365266]
> 06:35:16:WU01:FS00:0x17:Trying to isolate NaN....searching [327771,365266]
> 06:39:30:WU01:FS00:0x17:Trying to isolate NaN....searching [346519,365266]
> 06:41:37:WU01:FS00:0x17:Trying to isolate NaN....searching [355893,365266]
> 06:42:41:WU01:FS00:0x17:Trying to isolate NaN....searching [360580,365266]
> 06:43:13:WU01:FS00:0x17:Trying to isolate NaN....searching [362924,365266]
> 06:43:29:WU01:FS00:0x17:Trying to isolate NaN....searching [364096,365266]
> 06:43:37:WU01:FS00:0x17:Trying to isolate NaN....searching [364682,365266]
> 06:43:41:WU01:FS00:0x17:Trying to isolate NaN....searching [364975,365266]
> 06:43:43:WU01:FS00:0x17:Trying to isolate NaN....searching [365121,365266]
> 06:43:44:WU01:FS00:0x17:Trying to isolate NaN....searching [365194,365266]
> 06:43:44:WU01:FS00:0x17:Trying to isolate NaN....searching [365231,365266]
> 06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365249,365266]
> 06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365258,365266]
> 06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365263,365266]
> 06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365265,365266]
> 06:43:45:WU01:FS00:0x17:Trying to isolate NaN....searching [365266,365266]
> 06:43:45:WU01:FS00:0x17:Unable to pinpoint NaN - likely to be non-deterministic, dumping results
> 06:43:45:WU01:FS00:0x17:ERROR:exception: NaNs detected in positions.0 0
> 06:43:45:WU01:FS00:0x17:Saving result file logfile_01.txt
> 06:43:45:WU01:FS00:0x17:Saving result file log.txt
> 06:43:45:WU01:FS00:0x17:[email protected] Core Shutdown: BAD_WORK_UNIT
> 06:43:46:WARNING:WU01:FS00:FahCore returned: BAD_WORK_UNIT (114 = 0x72)


I got a few of those before I upped the GPU voltage on my 580. It seems to be stable now at 985mhz @1.2v(1.156v under load)


----------



## Krusher33

Personally I think it might be from my memory overclock. I had just finished messing with it before the error.


----------



## Caleal

Quote:


> Originally Posted by *Krusher33*
> 
> Personally I think it might be from my memory overclock. I had just finished messing with it before the error.


Well then, if you want to get personal about it, your Mother was a hamster and your father smells of elderberries!


----------



## Krusher33

Probably not far from the truth.


----------



## tictoc

I had my first bad beta WU. Too bad it errored out when I was at 96%. 5 more minutes and it would have been finished.










Spoiler: Log



Code:



Code:


18:56:50:WU01:FS01:0x17:Completed 1750000 out of 2500000 steps (70%)
19:01:35:WU01:FS01:0x17:Completed 1800000 out of 2500000 steps (72%)
19:06:14:WU01:FS01:0x17:Completed 1850000 out of 2500000 steps (74%)
19:10:54:WU01:FS01:0x17:Completed 1900000 out of 2500000 steps (76%)
19:15:39:WU01:FS01:0x17:Completed 1950000 out of 2500000 steps (78%)
19:20:20:WU01:FS01:0x17:Completed 2000000 out of 2500000 steps (80%)
19:24:59:WU01:FS01:0x17:Completed 2050000 out of 2500000 steps (82%)
19:29:45:WU01:FS01:0x17:Completed 2100000 out of 2500000 steps (84%)
19:34:25:WU01:FS01:0x17:Completed 2150000 out of 2500000 steps (86%)
19:39:04:WU01:FS01:0x17:Completed 2200000 out of 2500000 steps (88%)
19:43:50:WU01:FS01:0x17:Completed 2250000 out of 2500000 steps (90%)
19:48:30:WU01:FS01:0x17:Completed 2300000 out of 2500000 steps (92%)
19:53:09:WU01:FS01:0x17:Completed 2350000 out of 2500000 steps (94%)
19:57:49:WU01:FS01:0x17:Completed 2400000 out of 2500000 steps (96%)
20:13:01:WU01:FS01:0x17:NaNs found .. trying to pinpoint the NaN step via binary search... (this might take a while) 
20:13:01:WU01:FS01:0x17:Trying to isolate NaN....searching [2400209,2435806]
20:14:38:WU01:FS01:0x17:Trying to isolate NaN....searching [2418008,2435806]
20:15:27:WU01:FS01:0x17:Trying to isolate NaN....searching [2426908,2435806]
20:15:52:WU01:FS01:0x17:Trying to isolate NaN....searching [2431358,2435806]
20:16:04:WU01:FS01:0x17:Trying to isolate NaN....searching [2433583,2435806]
20:16:10:WU01:FS01:0x17:Trying to isolate NaN....searching [2434695,2435806]
20:16:13:WU01:FS01:0x17:Trying to isolate NaN....searching [2435251,2435806]
20:16:15:WU01:FS01:0x17:Trying to isolate NaN....searching [2435529,2435806]
20:16:16:WU01:FS01:0x17:Trying to isolate NaN....searching [2435668,2435806]
20:16:16:WU01:FS01:0x17:Trying to isolate NaN....searching [2435738,2435806]
20:16:16:WU01:FS01:0x17:Trying to isolate NaN....searching [2435773,2435806]
20:16:17:WU01:FS01:0x17:Trying to isolate NaN....searching [2435790,2435806]
20:16:17:WU01:FS01:0x17:Trying to isolate NaN....searching [2435799,2435806]
20:16:17:WU01:FS01:0x17:Trying to isolate NaN....searching [2435803,2435806]
20:16:17:WU01:FS01:0x17:Trying to isolate NaN....searching [2435805,2435806]
20:16:17:WU01:FS01:0x17:Trying to isolate NaN....searching [2435806,2435806]
20:16:17:WU01:FS01:0x17:Unable to pinpoint NaN - likely to be non-deterministic, dumping results
20:16:17:WU01:FS01:0x17:ERROR:exception: NaNs detected in positions.0 0
20:16:17:WU01:FS01:0x17:Saving result file logfile_01.txt
20:16:17:WU01:FS01:0x17:Saving result file log.txt
20:16:17:WU01:FS01:0x17:[email protected] Core Shutdown: BAD_WORK_UNIT
20:16:17:WARNING:WU01:FS01:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
20:16:17:WU01:FS01:Sending unit results: id:01 state:SEND error:FAULTY project:7662 run:0 clone:18 gen:7 core:0x17 unit:0x00000008ff3d48355139114ef534cf75
20:16:17:WU01:FS01:Uploading 3.10KiB to 171.67.108.149
20:16:17:WU01:FS01:Connecting to 171.67.108.149:8080
20:16:17:WU01:FS01:Upload complete
20:16:17:WU01:FS01:Server responded WORK_ACK (400)
20:16:17:WU01:FS01:Cleaning up
20:16:17:WU02:FS01:Connecting to assign-GPU.stanford.edu:80





First bad WU I have had in quite awhile.


----------



## mmonnin

Overclocks and temps ^^


----------



## tictoc

Quote:


> Originally Posted by *mmonnin*
> 
> Overclocks and temps ^^


OC could be unstable, but I don't think so. I will bump the voltage a bit just to be sure.


----------



## mmonnin

FAH will fail before anything else...


----------



## Caleal

Quote:


> Originally Posted by *tictoc*
> 
> OC could be unstable, but I don't think so. I will bump the voltage a bit just to be sure.


Even though the GPU temps run cooler, these seem to be harder on the OC.


----------



## tictoc

Oh I know that, FAH can put more stress on components than anything else, and the beta WUs seem to be a little bit harder on my card than the core_16 WUs were. This card was running the core_16 WUs and multiple BOINC projects at 1200/1550. One thing I really like about folding on the 7970 is that it runs about 6 degrees cooler folding than it ran on some of the BOINC projects.

I had been running my 7970 at 1225/1650 for day to day usage, benching, and gaming. Currently I am clocked at 1175/1365. I will monitor it and see what happens.


----------



## WLL77

Hello, checking in.
Beta WU's are still running well, 7870 at 1150/1220 is getting them done.
Have an anomaly with HFM. For some reason it shows frames completed at 98, yet I am still getting points. Figured HFM hasn't caught up with the beta WU's yet.
see pic below.


----------



## mmonnin

Mine is doing the same. HFM doesn't know how to read 2% log updates correctly yet it seems.


----------



## giganews35

Quote:


> Originally Posted by *Caleal*
> 
> I got a few of those before I upped the GPU voltage on my 580. It seems to be stable now at 985mhz @1.2v(1.156v under load)


Is it under water? Did you mod your own BIOS or use an existing one to flash? I'm at 1GHz at 1.15v; under load it only gets 1.107-1.109v, but personally I think it should be getting better PPD.


----------



## Caleal

Quote:


> Originally Posted by *giganews35*
> 
> Is it under water? Did you mod your own bios or use an existing to flash? I'm at 1Ghz at 1.15v under load it only gets 1.107-1.109v but personally I think it should be getting better ppd.


It is under water, but it is just a pure reference card, with the crappy power components, not a Hydrogen with beefed up capacitors and such like you have.
I modified the bios myself.


----------



## giganews35

Quote:


> Originally Posted by *Caleal*
> 
> It is under water, but it is just a pure reference card, with the crappy power components, not a Hydrogen with beefed up capacitors and such like you have.
> I modified the bios myself.


I gotcha. Looks like at this point for TC it's not worth getting a 6xx series.


----------



## Caleal

Quote:


> Originally Posted by *giganews35*
> 
> I gotcha. Looks like at this point for TC it's not worth getting a 6xx series.


Yeah, if the future is Core_17, and something doesn't change with the performance of 6xx series cards with it, sticking with a 580 is the way to go.
In my case, I'll likely be on the lookout for a deal on a better card, like a MSI Lightning, or EVGA Classified, being sold off by a gamer that is upgrading.
I'm kind of at the limit of what the power components on my reference card can do without letting the magic smoke out.


----------



## mmonnin

No doubt about it, core 17 is the future.


----------



## Prymus

Seems to be dry here and I'm getting the driver bug


----------






## WLL77

Quote:


> Originally Posted by *Prymus*
> 
> Seems to be dry here and I'm getting the driver bug


Am still getting WU's, in fact just downloaded one about 10 minutes ago.


----------



## tictoc

I just got a new 7662 2 mins ago.


----------



## labnjab

Quote:


> Originally Posted by *Prymus*
> 
> Seems to be dry here and I'm getting the driver bug


I've still been getting 7662, I actually just picked up 2 a little while ago. I just wish they would update hfm so we don't have to use fah control for ppd.


----------



## tictoc

HFM will calculate PPD if you change your options.

Edit>Preferences>Options, Calculate PPD Based on: Effective Rate
This is working for me, but my HFM history is not showing my actual points. It is just showing the 1600 base points without QRB.
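The effective-rate number is basically credit times how many such units would complete in a day. A rough sketch, assuming the usual 100 frames per WU (the example credit and TPF are made up):

```python
def ppd(credit: float, tpf_seconds: float, frames: int = 100) -> float:
    """Points per day from time-per-frame: credit x units finished per day."""
    seconds_per_wu = tpf_seconds * frames
    return credit * 86400.0 / seconds_per_wu

# A 1600-point unit at a 4:00 TPF:
ppd(1600, 240)  # -> 5760.0
```

With QRB the credit itself grows as the WU finishes faster, so real PPD climbs faster than linearly as TPF drops.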


----------



## labnjab

Quote:


> Originally Posted by *tictoc*
> 
> HFM will calculate PPD if you change your options.
> 
> Edit\ Preferences\Options Calculate PPD Based on : Effective Rate
> This is working for me, but my HFM history is not showing my actual points. It is just showing the 1600 base points without QRB.


Thank you for the info. That worked great. I'm not too worried about history as long as I can see what I'm getting on my current unit. I'm showing 37-38k PPD on each 570.


----------



## Caleal

Quote:


> Originally Posted by *tictoc*
> 
> HFM will calculate PPD if you change your options.
> 
> Edit\ Preferences\Options Calculate PPD Based on : Effective Rate


Thanks, that did the trick!


The TPF, PPD and credit estimates sure shift around a lot on these though.


----------



## cam51037

Holy cow, with these 7662 WU's, my 7850 has 48k PPD, and my 670 Sig2 has around 51k PPD, that's awesome, hope it isn't just a bug.

Edit: Wait that was a bug, now my 7850 has 10.5k PPD. :/


----------



## Caleal

Quote:


> Originally Posted by *cam51037*
> 
> Holy cow, with these 7662 WU's, my 7850 has 48k PPD, and my 670 Sig2 has around 51k PPD, that's awesome, hope it isn't just a bug.
> 
> Edit: Wait that was a bug, now my 7850 has 10.5k PPD. :/


You sure about the numbers on that 670? That is way higher than others have been reporting with kepler cards.


----------



## Krusher33

Quote:


> Originally Posted by *Caleal*
> 
> Quote:
> 
> 
> 
> Originally Posted by *cam51037*
> 
> Holy cow, with these 7662 WU's, my 7850 has 48k PPD, and my 670 Sig2 has around 51k PPD, that's awesome, hope it isn't just a bug.
> 
> Edit: Wait that was a bug, now my 7850 has 10.5k PPD. :/
> 
> 
> 
> You sure about the numbers on that 670? That is way higher than others have been reporting with kepler cards.
Click to expand...

Yeah, and the 7850 should be more than that too. Are you on the latest drivers?


----------



## cam51037

Quote:


> Originally Posted by *Krusher33*
> 
> Yeah, and the 7850 should be more than that too. Are you on the latest drivers?


My 670 went back to around 25k PPD now, and newest drivers for both cards.


----------



## mmonnin

Sounds like you're using FAHControl, which has never been accurate, especially the first few frames and during computer usage.


----------



## Krusher33

Mine showed the card failed or something and it didn't have anything going. Tried clicking the Fold button, nothing happened. Quit the client and restarted and now it's going again.

It's not something I've ever seen before.


----------



## Krusher33

Quote:


> Originally Posted by *tictoc*
> 
> HFM will calculate PPD if you change your options.
> 
> Edit\ Preferences\Options Calculate PPD Based on : Effective Rate
> This is working for me, but my HFM history is not showing my actual points. It is just showing the 1600 base points without QRB.


I have mine like that. It still doesn't show PPD for me.


----------



## Bal3Wolf

What kinda PPD are most of you seeing with 7850s? I got a 7850 [email protected]/1200 only showing 12k, same as what my 5870 got.


----------



## Krusher33

I could have sworn people were reporting something like 20k but I guess I'm mistaken.


----------



## Bal3Wolf

I haven't seen mine go above 12,991 in FAHControl, and HFM is reporting only 7800 PPD.

Now it's reporting 19k; guess it takes a lot longer than my 7970s to report higher PPD.


----------



## Krusher33

I wonder why my HFM isn't showing PPD like you guys. I do have it set to Effective Rate. Is there anything else to change?


----------



## Bal3Wolf

Under web settings did you change the project download address to http://fah-web.stanford.edu/psummaryC.html


----------



## Krusher33

Quote:


> Originally Posted by *Bal3Wolf*
> 
> Under web settings did you change the project download address to http://fah-web.stanford.edu/psummaryC.html


That did it! Thanks!


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> That did it! Thanks!


np, lol. That's hidden in this thread someplace; I saw it a week ago and changed it.


----------



## aas88keyz

Been working these 7662's along with you all since the beginning, and I'm very happy with the PPD, even though I am not getting AMD numbers like it appears most of you are. I have been practicing OC's with both the SMP virtual machine and GPU combined. My MSI 560 448 is @ 875/1750/2000, earning 28k PPD, and I have more overclocking room that I will probably try next week. Now, I think my FX-8120 and FAH v7 might have had a falling out with each other, since I probably am not at 80% success on my WU's. I am working to change that and hope to see better numbers from it. As of right now the FX-8120 is earning 16k PPD in a Linux virtual machine (same as my BE965 rig on native Linux). I hope to have good points for both SMP & GPU in good time.


----------



## ASSSETS

And Tools > Download projects from Stanford.







After that I got it working.


----------



## Krusher33

Yeah that too.


----------



## labnjab

Did they up the nvidia CPU usage on these units? My SMP has taken a huge PPD hit the last few units, but my GPUs are holding steady at 38k PPD. SMP is now getting 5k PPD less on most units. I always run smp 6, and when I first started with the 7662 units my SMP points were unchanged.


----------



## aas88keyz

Quote:


> Originally Posted by *labnjab*
> 
> Did they up the nvidia cpu usage on these units? My smp has taken a huge ppd hit the last few units, but my gpus are holding steady at 38k ppd. Smp is now getting 5k ppd less on most units. I always run smp 6 and when i first started with the 7662 units my smp points were unchanged


Quote:


> Originally Posted by *aas88keyz*
> 
> Been working these 7662s along with you all since the beginning and I'm very happy with the PPD. Even though I'm not getting AMD numbers like most of you appear to be, I've been practicing OCs with both the SMP virtual machine and the GPU combined. My MSI 560 448 is at 875/1750/2000 earning 28 kPPD. I have more overclocking room that I'll probably explore next week. Now I think my FX-8120 and FAH v7 might have had a falling out with each other, since I'm probably not at 80% success on my WUs. I'm working to change that and hope to see better numbers from it. As of right now the FX-8120 is earning 16 kPPD in a Linux virtual machine (same as my BE965 rig on native Linux). I hope to have good points for both SMP and GPU in good time.


Forgot to mention my smp _is 7 core_ and I have to dedicate 1 for the gpu folding. I have tried to make the best balance of them both.


----------



## mmonnin

Quote:


> Originally Posted by *labnjab*
> 
> Did they up the nvidia cpu usage on these units? My smp has taken a huge ppd hit the last few units, but my gpus are holding steady at 38k ppd. Smp is now getting 5k ppd less on most units. I always run smp 6 and when i first started with the 7662 units my smp points were unchanged


Yes, see OP.


----------



## cam51037

Yeah, my 7850 is now at around 12k PPD. Points seem really low; that's barely above the previous core, where I could get around 10k PPD, so I just took my 7850 out because it wasn't worth the points-to-cost ratio.

Even my 670 has fairly low points, only around 26k, down from around 40-45k on Core 15.


----------



## labnjab

I knew they used more CPU, but I was wondering if they bumped it up even more. I run smp 6 on my main rig, and when I first started using core 17 my CPU usage went up to 95%, but since I was running smp 6 my PPD was unchanged; on the last few 7662s my SMP points dropped.

But now I just remembered I switched HFM to display effective rate and not all frames (like I usually run), and the points HFM shows vary more than with all frames. So my PPD actually hasn't changed; if I switch back to all frames they are where they normally are.


----------



## Bal3Wolf

Quote:


> Originally Posted by *cam51037*
> 
> Yeah, my 7850 is now at around 12k PPD. Points seem really low compared to the previous core, where I could get around 10k PPD, and I just took my 7850 out, because it wasn't worth it for the points-cost ratio.
> 
> Even my 670 has fairly low points, only around 26k down from around 40-45k on Core 15.


I'm seeing between 16-19k on a 7850 I have at 1050 MHz.

I love these units, lol. I'm ranked 17th now on OCN for 24hr avg; it seems like I'll hit my 15mil goal very soon.


----------



## jesusboots

I know this is a little late. This is in response to my friend Caleal.

My 670 climbs to 115k PPD, then drops to 24k, which is what it averages. Stock speed.

My 580 gets a solid 50-52k PPD. I am fairly certain you know my clocks.


----------



## cam51037

I found a 580 3 GB locally today for $100. I think I might go pick it up tomorrow.

Reason it's so cheap is because the seller says that the screen keeps going black with it, which makes me think it's a fairly easy and fixable problem.


----------



## jesusboots

Do it. There's a 570 locally for $125 in working order; I'm thinking of getting it just for the PhysX.


----------



## Caleal

Quote:


> Originally Posted by *jesusboots*
> 
> I know this is a little late. This is in response to my friend Caleal.
> 
> My 670 climbs to 115k ppd, then drops to 24k. Which is what it averages. Stock speed.
> 
> My 580 gets a solid 50-52k ppd. I am fairly certain you know my clocks.


Thanks, it's a shame the Kepler cards are taking such a hit from core_17.
It looks like there is maybe a 2 or 3k PPD spread, at most, among those of us in the TC with GTX 580s folding at >950 MHz. So, like back when we were all folding 8018s for 20-21k PPD, he who has the least downtime wins.









I just wish the TC stats were working reliably.


----------



## jesusboots

I have only been folding on my 580, outside of the disappointment of that first day of 670 folding, so my 24 hours is my 580 if you are curious to check.

And you are currently down.


----------



## Caleal

Quote:


> Originally Posted by *jesusboots*
> 
> And you are currently down.


Not really down, just in the midst of massive frustration with my TC rig liking to drop in and out of my WiFi network. I'm rearranging things this weekend so it is hard wired.
It is getting folding done, but sometimes takes several tries to get WUs uploaded, costing me hundreds of points on some WUs.


Spoiler: Warning: Spoiler!



02:18:13:Server connection id=867 on 0.0.0.0:36331 from 192.168.0.2
02:21:59:WU00:FS00:0x17:Completed 1250000 out of 2500000 steps (50%)
02:23:32:Server connection id=867 ended
02:23:40:Server connection id=868 on 0.0.0.0:36331 from 192.168.0.2
02:23:46:Server connection id=868 ended
02:25:58:Server connection id=869 on 0.0.0.0:36331 from 192.168.0.2
02:26:07:Server connection id=869 ended
02:26:26:WU00:FS00:0x17:Completed 1300000 out of 2500000 steps (52%)
02:26:36:Server connection id=870 on 0.0.0.0:36331 from 192.168.0.2
02:26:41:Server connection id=870 ended
02:28:37:Server connection id=871 on 0.0.0.0:36331 from 192.168.0.2
02:28:48:Server connection id=871 ended
02:30:21:Server connection id=872 on 0.0.0.0:36331 from 192.168.0.2
02:30:27:Server connection id=872 ended
02:30:35:Server connection id=873 on 0.0.0.0:36331 from 192.168.0.2
02:30:49:Server connection id=873 ended
02:31:03:Server connection id=874 on 0.0.0.0:36331 from 192.168.0.2
02:31:03:WU00:FS00:0x17:Completed 1350000 out of 2500000 steps (54%)
02:31:09:Server connection id=874 ended
02:35:31:WU00:FS00:0x17:Completed 1400000 out of 2500000 steps (56%)
02:39:58:WU00:FS00:0x17:Completed 1450000 out of 2500000 steps (58%)
02:44:36:WU00:FS00:0x17:Completed 1500000 out of 2500000 steps (60%)
02:49:04:WU00:FS00:0x17:Completed 1550000 out of 2500000 steps (62%)
02:53:31:WU00:FS00:0x17:Completed 1600000 out of 2500000 steps (64%)
02:57:59:WU00:FS00:0x17:Completed 1650000 out of 2500000 steps (66%)
03:02:36:WU00:FS00:0x17:Completed 1700000 out of 2500000 steps (68%)
03:07:04:WU00:FS00:0x17:Completed 1750000 out of 2500000 steps (70%)
03:11:32:WU00:FS00:0x17:Completed 1800000 out of 2500000 steps (72%)
03:16:10:WU00:FS00:0x17:Completed 1850000 out of 2500000 steps (74%)
03:20:38:WU00:FS00:0x17:Completed 1900000 out of 2500000 steps (76%)
03:25:06:WU00:FS00:0x17:Completed 1950000 out of 2500000 steps (78%)
03:29:43:WU00:FS00:0x17:Completed 2000000 out of 2500000 steps (80%)
03:34:11:WU00:FS00:0x17:Completed 2050000 out of 2500000 steps (82%)
03:38:39:WU00:FS00:0x17:Completed 2100000 out of 2500000 steps (84%)
03:43:07:WU00:FS00:0x17:Completed 2150000 out of 2500000 steps (86%)
03:47:44:WU00:FS00:0x17:Completed 2200000 out of 2500000 steps (88%)
******************************** Date: 15/03/13 ********************************
03:52:13:WU00:FS00:0x17:Completed 2250000 out of 2500000 steps (90%)
03:56:41:WU00:FS00:0x17:Completed 2300000 out of 2500000 steps (92%)
04:01:18:WU00:FS00:0x17:Completed 2350000 out of 2500000 steps (94%)
04:01:56:Server connection id=858 ended
04:05:46:WU00:FS00:0x17:Completed 2400000 out of 2500000 steps (96%)
04:10:13:WU00:FS00:0x17:Completed 2450000 out of 2500000 steps (98%)
04:14:52:WU00:FS00:0x17:Saving result file logfile_01.txt
04:14:52:WU00:FS00:0x17:Saving result file checkpointState.xml
04:14:53:WU00:FS00:0x17:Saving result file checkpt.crc
04:14:53:WU00:FS00:0x17:Saving result file log.txt
04:14:53:WU00:FS00:0x17:Saving result file positions.xtc
04:14:55:WU00:FS00:0x17:Folding@home Core Shutdown: FINISHED_UNIT
04:14:55:WU00:FS00:FahCore returned: FINISHED_UNIT (100 = 0x64)
04:14:55:WU00:FS00:Sending unit results: id:00 state:SEND error:NO_ERROR project:7662 run:30 clone:24 gen:11 core:0x17 unit:0x0000000fff3d483551392612173d77ad
04:14:55:WU00:FS00:Uploading 5.68MiB to 171.67.108.149
04:14:55:WU00:FS00:Connecting to 171.67.108.149:8080
04:14:56:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
04:14:56:WU01:FS00:News: Welcome to Folding@home
04:14:56:WU01:FS00:Assigned to work server 171.67.108.149
04:14:56:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:"GF110 [Geforce GTX 580]" from 171.67.108.149
04:14:56:WU01:FS00:Connecting to 171.67.108.149:8080
04:15:17:WARNING:WU01:FS00:WorkServer connection failed on port 8080 trying 80
04:15:17:WU01:FS00:Connecting to 171.67.108.149:80
04:15:28:WU00:FS00:Upload 2.20%
04:15:28:WARNING:WU00:FS00:Exception: Failed to send results to work server: Transfer failed
04:15:28:WU00:FS00:Trying to send results to collection server
04:15:28:WU00:FS00:Uploading 5.68MiB to 171.65.103.160
04:15:28:WU00:FS00:Connecting to 171.65.103.160:8080
04:15:38:ERROR:WU01:FS00:Exception: Failed to connect to 171.67.108.149:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:15:39:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
04:15:49:WARNING:WU00:FS00:WorkServer connection failed on port 8080 trying 80
04:15:49:WU00:FS00:Connecting to 171.65.103.160:80
04:16:00:WARNING:WU01:FS00:Failed to get assignment from 'assign-GPU.stanford.edu:80': Failed to connect to assign-GPU.stanford.edu:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:16:00:WU01:FS00:Connecting to assign-GPU.stanford.edu:8080
04:16:10:ERROR:WU00:FS00:Exception: Failed to connect to 171.65.103.160:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:16:10:WU00:FS00:Sending unit results: id:00 state:SEND error:NO_ERROR project:7662 run:30 clone:24 gen:11 core:0x17 unit:0x0000000fff3d483551392612173d77ad
04:16:10:WU00:FS00:Uploading 5.68MiB to 171.67.108.149
04:16:10:WU00:FS00:Connecting to 171.67.108.149:8080
04:16:21:WARNING:WU01:FS00:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Failed to connect to assign-GPU.stanford.edu:8080: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:16:21:ERROR:WU01:FS00:Exception: Could not get an assignment
04:16:31:WARNING:WU00:FS00:WorkServer connection failed on port 8080 trying 80
04:16:31:WU00:FS00:Connecting to 171.67.108.149:80
04:16:39:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
04:16:52:WARNING:WU00:FS00:Exception: Failed to send results to work server: Failed to connect to 171.67.108.149:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:16:52:WU00:FS00:Trying to send results to collection server
04:16:52:WU00:FS00:Uploading 5.68MiB to 171.65.103.160
04:16:52:WU00:FS00:Connecting to 171.65.103.160:8080
04:17:00:WARNING:WU01:FS00:Failed to get assignment from 'assign-GPU.stanford.edu:80': Failed to connect to assign-GPU.stanford.edu:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:17:00:WU01:FS00:Connecting to assign-GPU.stanford.edu:8080
04:17:21:WARNING:WU01:FS00:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Failed to connect to assign-GPU.stanford.edu:8080: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:17:21:ERROR:WU01:FS00:Exception: Could not get an assignment
04:18:16:WU01:FS00:Connecting to assign-GPU.stanford.edu:80
04:18:18:WU01:FS00:News: Welcome to Folding@home
04:18:18:WU01:FS00:Assigned to work server 171.67.108.149
04:18:18:WU01:FS00:Requesting new work unit for slot 00: READY gpu:0:"GF110 [Geforce GTX 580]" from 171.67.108.149
04:18:18:WU01:FS00:Connecting to 171.67.108.149:8080
04:18:18:WU01:FS00:Downloading 1.62MiB
04:19:57:WU00:FS00:Upload 1.10%
04:19:57:ERROR:WU00:FS00:Exception: Transfer failed
04:19:57:WU00:FS00:Sending unit results: id:00 state:SEND error:NO_ERROR project:7662 run:30 clone:24 gen:11 core:0x17 unit:0x0000000fff3d483551392612173d77ad
04:19:57:WU00:FS00:Uploading 5.68MiB to 171.67.108.149
04:19:57:WU00:FS00:Connecting to 171.67.108.149:8080
04:20:18:WARNING:WU00:FS00:WorkServer connection failed on port 8080 trying 80
04:20:18:WU00:FS00:Connecting to 171.67.108.149:80
04:20:39:WARNING:WU00:FS00:Exception: Failed to send results to work server: Failed to connect to 171.67.108.149:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:20:39:WU00:FS00:Trying to send results to collection server
04:20:39:WU00:FS00:Uploading 5.68MiB to 171.65.103.160
04:20:39:WU00:FS00:Connecting to 171.65.103.160:8080
04:21:21:WU00:FS00:Upload 3.30%
04:21:21:ERROR:WU00:FS00:Exception: Transfer failed
04:21:34:WU00:FS00:Sending unit results: id:00 state:SEND error:NO_ERROR project:7662 run:30 clone:24 gen:11 core:0x17 unit:0x0000000fff3d483551392612173d77ad
04:21:34:WU00:FS00:Uploading 5.68MiB to 171.67.108.149
04:21:34:WU00:FS00:Connecting to 171.67.108.149:8080
04:21:55:WARNING:WU00:FS00:WorkServer connection failed on port 8080 trying 80
04:21:55:WU00:FS00:Connecting to 171.67.108.149:80
04:22:16:WARNING:WU00:FS00:Exception: Failed to send results to work server: Failed to connect to 171.67.108.149:80: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
04:22:16:WU00:FS00:Trying to send results to collection server
04:22:16:WU00:FS00:Uploading 5.68MiB to 171.65.103.160
04:22:16:WU00:FS00:Connecting to 171.65.103.160:8080
04:22:37:WARNING:WU00:FS00:WorkServer connection failed on port 8080 trying 80
04:22:37:WU00:FS00:Connecting to 171.65.103.160:80
04:22:41:WU00:FS00:Upload 1.10%
04:23:04:WU00:FS00:Upload 2.20%
04:23:04:ERROR:WU00:FS00:Exception: Transfer failed
04:24:11:WU00:FS00:Sending unit results: id:00 state:SEND error:NO_ERROR project:7662 run:30 clone:24 gen:11 core:0x17 unit:0x0000000fff3d483551392612173d77ad
04:24:11:WU00:FS00:Uploading 5.68MiB to 171.67.108.149
04:24:11:WU00:FS00:Connecting to 171.67.108.149:8080
04:24:31:WU00:FS00:Upload 1.10%
04:24:37:WU00:FS00:Upload 9.91%
04:24:43:WU00:FS00:Upload 17.62%
04:24:49:WU00:FS00:Upload 25.33%
04:24:55:WU00:FS00:Upload 33.04%
04:25:03:WU00:FS00:Upload 38.54%
04:25:09:WU00:FS00:Upload 44.05%
04:25:15:WU00:FS00:Upload 51.76%
04:25:21:WU00:FS00:Upload 59.47%
04:25:27:WU00:FS00:Upload 67.17%
04:25:33:WU00:FS00:Upload 75.98%
04:25:39:WU00:FS00:Upload 83.69%
04:25:45:WU00:FS00:Upload 92.50%
04:25:51:WU00:FS00:Upload 100.00%
04:25:56:WU00:FS00:Upload complete
04:25:56:WU00:FS00:Server responded WORK_ACK (400)
04:25:56:WU00:FS00:Final credit estimate, 7993.00 points
04:25:56:WU00:FS00:Cleaning up
04:30:05:Lost lifeline PID 1788, exiting
04:30:08:Server connection id=1 ended


----------



## joker927

Pulling 43k ppd on my 7950 and that's ignoring the smp -8 also running in a linux VM. These numbers can't last for long, right? My Nvidia rigs are jealous.


----------



## mmonnin

We'll have to wait to see how QRB ends up being finalized but I don't envision a 7970 going down with QRB. Maybe some of the slower cards.


----------



## tictoc

Quote:


> Originally Posted by *mmonnin*
> 
> We'll have to wait to see how QRB ends up being finalized but I don't envision a 7970 going down with QRB. Maybe some of the slower cards.


Not to rehash a long-standing debate about equal points for equal work, but what is the point of gimping points on AMD cards? If the 7970 does x times the amount of work of a slower GPU, then why should it get the same points? I also can't fathom how FahBench is representative of GPU performance; FahBench is the only performance metric that has a 570 being equal to a 7970.

(puts on tinfoil hat) Maybe team EVGA would throw a fit if AMD GPUs outproduced Nvidia GPUs, and Stanford would lose a bunch of casual Nvidia folders.









Regardless of how QRB or points are awarded on the final core_17, I just hope the actual science being done is increased with the new WUs. If the point scheme doesn't reflect the work being done, how would I ever decide to upgrade the hardware in a dedicated machine, without being able to gauge the amount of work I was volunteering?


----------



## mmonnin

Who says AMD is gimped? Core 17 runs the same WUs as the Nvidia cards, worth the same points.

FAHBench basically IS core_17, so it is completely relevant and the best metric for core 17 TPFs.

Core 17 is completely NEW science: explicit-solvent WUs that have never been possible to complete before. Not only will core 17 work across AMD and Nvidia platforms, but also on CPUs and iGPUs (where the HD 4000 kicks the pants off a 3770K). I'm guessing anything that supports OpenCL.

Put your conspiracy theories away until the QRB is finalized. That type of thought process belongs over at EVGA with the other idiots that have no logical train of thought.


----------



## tictoc

I was only replying to your idea of QRB on lower end hardware and not on the 7970.









If the core_17 WU is the best compromise for performance on all hardware then I am happy with it, and it is cool that they have built a core that can run on all platforms.

The conspiracy theory was a joke.


----------



## Caleal

I for one welcome our new AMD GPU folding overlords.


----------



## mmonnin

QRB is not linear, so if an AMD or Nvidia card is slower than the benchmark machine, its PPD may be significantly less compared to older WUs. The same goes at the other end of the spectrum, where the 7970 sits, so I doubt it will earn fewer points than on the older core.

A new core isn't always about best performance on all sets of hardware, or even on one set of hardware. It's one set of code that Stanford has to manage instead of cores 11, 15, and 16. They don't have unlimited personnel, and this allows the scientists to do more sciency things. Here is a great post from proteneer regarding core 17:
http://www.evga.com/forums/tm.aspx?high=&m=1877857&mpage=7#1882086


----------



## mkclan

Hi! OK, I installed 7.3.6 and 13.2 beta 7, but when I run the F@H client it works with core 16. Am I doing something wrong, or what?
Sorry for my English.

Solved: just add a client-type of beta.
And my 7850 gets 16k+ PPD
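If anyone else is hunting for where that goes, here is roughly what the relevant part of my config.xml ends up looking like (user, team, and passkey values are placeholders, and the slot number will differ per setup):

```xml
<config>
  <!-- identity; the passkey is what qualifies you for the QRB bonus -->
  <user v='YourFoldingName'/>
  <team v='37726'/>
  <passkey v='your_32_character_passkey_here'/>

  <!-- opt in to beta work units (core_17) -->
  <client-type v='beta'/>

  <!-- GPU folding slot -->
  <slot id='0' type='GPU'/>
</config>
```

Same effect as typing client-type / beta under Configure > Expert in the GUI; just remember the flag can also be set per slot instead of globally.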


----------



## cam51037

I'd love higher QRB on certain cards. My 7850 is rolling 12k PPD currently, which is only 1-2k PPD more than Core 15. :/


----------



## joker927

Anyone seen this behavior before? My clocks are jumping up and down, killing my ppd. All I do is set %power limit down to +19 then back to +20 and it's stable at 1130 again for hours, days even sometimes.


----------



## mmonnin

If anything it's too hot and throttling and nothing to do with the core.


----------



## Hackcremo

My 7870 XT (975/1500) on p7662 is getting about 24k PPD. Does my PPD look normal?


----------



## martinhal

Quote:


> Originally Posted by *Hackcremo*
> 
> My 7870 XT (975/1500) on p7662 is getting about 24k PPD. Does my PPD look normal?


I have no idea how your card compares to the 7970; my three are getting 52k PPD each at 1250/1645.


----------



## WLL77

Quote:


> Originally Posted by *Hackcremo*
> 
> My 7870 XT (975/1500) on p7662 is getting about 24k PPD. Does my PPD look normal?


Looks good to me. My 7870 is currently pulling 23,668 PPD on p7662.


----------



## Kevdog

I seem to be getting slightly better TPF and PPD with the 13.3 beta driver on my unlocked 6950, and it runs at a constant 99% if I don't touch it.


----------



## Krusher33

Quote:


> Originally Posted by *Kevdog*
> 
> I seem to be getting slightly better TPF and PPD with the 13.3 beta driver on my unlocked 6950, and it runs at a constant 99% if I dont touch it


Is that coming from 12.8 or 13.1? Mine already does that on 13.1 driver.


----------



## Kevdog

Quote:


> Originally Posted by *Krusher33*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Kevdog*
> 
> I seem to be getting slightly better TPF and PPD with the 13.3 beta driver on my unlocked 6950, and it runs at a constant 99% if I dont touch it
> 
> 
> 
> Is that coming from 12.8 or 13.1? Mine already does that on 13.1 driver.
Click to expand...

I was using the 13.1 and it would only get to 98% but would fluctuate from 96%


----------



## giganews35

Not sure if QRB changed, but it looks like my TPF went down to 1:15 for 125k+ PPD. I haven't had any incorrect readings with v7 on these units just yet, but of course it could be just that. Let's hope it's correct.









edit: NVM too good to be true.. lol


----------



## Krusher33

Quote:


> Originally Posted by *Kevdog*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Kevdog*
> 
> I seem to be getting slightly better TPF and PPD with the 13.3 beta driver on my unlocked 6950, and it runs at a constant 99% if I dont touch it
> 
> 
> 
> Is that coming from 12.8 or 13.1? Mine already does that on 13.1 driver.
> 
> Click to expand...
> 
> I was using the 13.1 and it would only get to 98% but would fluctuate from 96%
Click to expand...

Hmmm... I think before I started up my SMP Linux folding for the foldathon, I was getting a solid 99% using 13.1. I just checked, and it appears the Linux virtual folding is causing my GPU usage to bounce all over the place. That's using 7 of my 8 cores; I wonder if it's because I'm using half a module. I'm going to switch to 6-core SMP and see what happens.

Update: Well, shoot... switching from 7 cores to 6 actually improved my PPD for both. But GPU usage is a constant 97%. I might give the 13.3 beta a go later.


----------



## 1337LutZ

After i updated to 13.3 drivers my 5870 has been getting this TPF for the last 7%.


----------



## giganews35

On my 580 for about ~4% I was getting a TPF 1:15 for 125kppd. Then it went back to normal.


----------



## 1337LutZ

Quote:


> Originally Posted by *giganews35*
> 
> On my 580 for about ~4% I was getting a TPF 1:15 for 125kppd. Then it went back to normal.


Yea, for me too now, damnit. xD


----------



## Tarnix

It's funny: I was talking with friends about how bad AMD was at folding, then I saw this article the day after. I'll probably buy an HD 8950/70 when it comes out. :3


----------



## Krusher33

Quote:


> Originally Posted by *Tarnix*
> 
> It's funny, I was talking with friends about how bad AMD was at folding.. then I saw this article the day after... I'll probably buy a HD8950/70 when it comes out.. :3


I may be getting things mixed up again but I think those are going to be another year away.


----------



## mmonnin

5/6 series AMD GPUs still suck for folding.


----------



## tictoc

^^ Yes they do. My 6870 gets only marginally more PPD on the core_17, than it got on the core_16.


----------



## Krusher33

Yeah, this does not bode well for the AMD cat in the TC, with the 7970 getting far more points than any other card. It'll become a bit of income class warfare of sorts.


----------



## mmonnin

Yeah, the average is somewhere between the versions of 11292. 7.7/9.2k for me and core 17 is at 8.7k. So maybe slightly up. I guess now OCing it will actually give more PPD every day with QRB instead of completing an extra WU every like 30 days.


----------



## IvantheDugtrio

So far I've been getting about 20k PPD with my HD 7870 GHz Edition on stock clocks on Catalyst 13.2 beta 7. My i5 3570K is set to 3 cores and gets about 10k PPD on its own. MSI Afterburner reports that the HD 7870 stays at a constant 99% load.

I'm thinking of throwing my GTX 660 back into the mix since it worked quite well back before core_17 came out. My only concern is whether F@H has driver issues again when I have both Catalyst and ForceWare installed. Back when I had both my HD 7870 and my GTX 660 in the same system, I was only getting some ~600 PPD out of the 7870 and another 30k PPD off the GTX 660. Before I got the GTX 660 (for hybrid PhysX) the core_16 WUs would always fail on the HD 7870.

I'll see how things go and throw it in anyway. I have Driver Fusion ready to clean up the Nvidia drivers if I find out it doesn't work well.


----------



## Caleal

Quote:


> Originally Posted by *Krusher33*
> 
> Yeah this does not bode well for the AMD cat in the TC with the 7970 getting much more pts than any other card. It'll become a bit of income class warfare of sorts


Well, it will become more like the rest of the categories, no more treading water with 5 year old hardware.

I'm glad the AMD category can actually make some serious points now. I think it may liven things up a bit, and possibly make AMD folders more enthusiastic about flogging their hardware 24/7.
It certainly will be interesting now that the team's AMD folder recruitment goal will be driven by something more than just the desire not to get 0's.


----------



## Krusher33

Quote:


> Originally Posted by *mmonnin*
> 
> Yeah, the average is somewhere between the versions of 11292. 7.7/9.2k for me and core 17 is at 8.7k. So maybe slightly up. I guess now OCing it will actually give more PPD every day with QRB instead of completing an extra WU every like 30 days.


For real. In preparation for selling it, I've switched back to the default BIOS. My clock has dropped to 950 from 1050, and my PPD dropped to 14.8k from 16.8k. For a little while I was running a 1075 clock and was over 17k. Sad to drop down that much, but I didn't want to run into problems at the last minute.


----------



## WLL77

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> So far I've been getting about 20k PPD with my HD 7870 GHz Edition on stock clocks on Catalyst 13.2 beta 7. My i5 3570K is set to 3 cores and gets about 10k PPD on its own.


I believe you can set your i5 3570K back to four cores. The beta core 17 no longer needs one dedicated core. At least that's my experience running an i5 2500K and a 7870.


----------



## ASSSETS

Upgrading to 13.3 beta 2 made my GPU usage 99% instead of 98% on 13.1 (card: AMD 5870)
But CPU usage went from 0-1% to 15-17%


----------



## Krusher33

Quote:


> Originally Posted by *ASSSETS*
> 
> Upgrading to 13.3 beta 2 made my GPU usage 99% instead of 98% on 13.1 (card: AMD 5870)
> But CPU usage went from 0-1% to 15-17%


I was just about to say this. Yay for increasing GPU to its fullest but for whatever reason the CPU usage bounced up as well. Still only half as much as it was on Core 16's but still...


----------



## Hackcremo

Quote:


> Originally Posted by *WLL77*
> 
> I believe you can set your i5 3570K back to four cores. The beta core 17 no longer needs one dedicated core. At least that's my experience running an i5 2500K and a 7870.


In my case, core 17 requires 1 core from my 2700K to stay at 99% load. If I use all 8 cores for SMP folding, the GPU load starts floating up and down.


----------



## ASSSETS

Going back to 13.1 with just a regular uninstall was a problem. FAH was getting bad units right away and kept failing. Gadgets were also hanging with "stopped responding" on load. Used atiman to fix it.


----------



## Tarnix




----------



## WLL77

Quote:


> Originally Posted by *Hackcremo*
> 
> In my case, core 17 requires 1 core from my 2700K to stay at 99% load. If I use all 8 cores for SMP folding, the GPU load starts floating up and down.


Sorry I should have been more detailed, and not considered my own experience as general across the board.
Using beta 13.2 version 6, I get 99% gpu usage, with 1-2% cpu usage.

W.


----------



## jomama22

Can someone help me get this? I have 7.3.6 but I cannot get the beta WUs. I have the beta flag (client-type/advanced) and went through 2 WUs on each of my GPUs, yet it will not load the beta. Do I need to set vendor even though I am on 7.2.6, or am I just doing something wrong?

So much for the foldathon, lol.


----------



## ASSSETS

client-type/beta
and passkey in slot or in client settings for bonus.


----------



## jomama22

Quote:


> Originally Posted by *ASSSETS*
> 
> client-type/beta


In 7.3.6, the "beta" is changed to "advanced" for whatever reason.

Edit: nope... I was wrong. I didn't realize I had to hit Enter to get the "beta" to stick. I just wasted the entire foldathon by thinking it was "advanced" and not "beta"...


----------



## ASSSETS

But you can still keep folding


----------



## jomama22

Quote:


> Originally Posted by *ASSSETS*
> 
> But you can still keep folding


Very True









Think i am doing much better now...if only...


----------



## Krusher33




----------



## jomama22

Quote:


> Originally Posted by *Krusher33*


Should have seen my jaw when I first saw it, lol.

It's running a consistent 175k PPD with the 3 7970s at ~46k PPD each and the 3960X at 4.7.

The 3960X is only getting ~40k, does this seem low? I have it running all 12 threads. The WU is 7809... which is a big one.

Also, when running the 7970s @ 1250 I didn't see any performance increase, is that normal on these betas?

At least I don't have to run the heat here in Denver... should offset the power bill, right? Right?!


----------



## martinhal

Quote:


> Originally Posted by *jomama22*
> 
> Very True
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Think i am doing much better now...if only...


the 7970's are loving this







How many PPD do you get on that 3960 if you don't use the GPUs?


----------



## jomama22

Quote:


> Originally Posted by *martinhal*
> 
> the 7970's are loving this
> 
> 
> 
> 
> 
> 
> 
> How many PPD do you get on that 3960 if you don't use the GPU's ?


I will let you know in an hour and a half when the core17s are done. Seems to be a 50/50 chance of my cores restarting where they left off when I pause them.

I am also extending my "foldathon" to 12:00 pm mtn time so I can feel better about not contributing because of my stupid ass haha. Was running 24kppd for the foldathon because I don't know how to type beta apparently.


----------



## martinhal

Quote:


> Originally Posted by *jomama22*
> 
> I will let you know in an hour and a half when the core17s are done. Seems to be a 50/50 chance of my cores restarting where they left off when I pause them.
> 
> I am also extending my "foldathon" to 12:00 pm mtn time so I can feel better about not contributing because of my stupid ass haha. Was running 24kppd for the foldathon because I don't know how to type beta apparently.


No hurry rather get them points







I also only recently found out about the beta flag... 3x 7970 and an i7 3770 at 5.1 were getting 42k PPD before that; now 180k PPD.


----------



## jomama22

Quote:


> Originally Posted by *martinhal*
> 
> No hurry rather get them points
> 
> 
> 
> 
> 
> 
> 
> I also only recently found out about the beta flag ..... 3x 7970 and a i7 3770 at 5.1 getting 42 ppd before that... now 180k ppd .


Nice! Yeah, it looks like I joined Folding@home at the perfect time for the 7970. I won't lie, I heard a rumor that the 7970's F@H performance had been "fixed," so I decided to check out this F@H thing and see what they meant. Really, it was hearing that they were good folders that drove me to start.

It's fun to tinker around with, to be honest.

Now, next month I will have my other computer going for the foldathon (2600K, replacing the 6970s with 2 cheap 580s) and can hopefully break 300k total.

How much does RAM speed affect F@H performance?


----------



## jesusboots

Quote:


> Originally Posted by *jomama22*
> 
> The 3960x is only getting ~40k, does this seem low? I have it running all 12 threads. The wu is 7809...which is a big one.


Yeah, something is wrong there.


Try asking him, Sporadic E, or Anubis; they all fold on 3930Ks. I have not folded on mine in some time so I am of no help; I stopped when they switched the QRB requirement on them. However, those three can help you.


----------



## Donkey1514

Quote:


> Originally Posted by *jesusboots*
> 
> Yea, Something is wrong there.
> 
> 
> Try asking him, Sporadic E, or Anubis, they all fold on 3930ks. I have not folded on mine in some time so I am of no help, I stopped when they switched the requirement for qrb on them. However, those three, they can help you.












Reduce your threads to 11, leaving a thread for the GPUs to use. If not, the GPUs will steal cycles from your SMP folding, causing it to slow down drastically.


----------



## CloudX

Quote:


> Originally Posted by *Donkey1514*
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> reduce your threads to 11, leaving a thread for the GPU's to use. If not, the GPUs will steal cycles from your SMP folding causing it to slow down drastically


Should this be done on the SMP with just one GPU folding the beta WU?


----------



## jesusboots

Quote:


> Originally Posted by *CloudX*
> 
> Should this be done on the SMP with just one GPU folding the beta WU?


Good God Man, why are you not on one of the teams?


----------



## Bal3Wolf

Quote:


> Originally Posted by *jesusboots*
> 
> Good God Man, why are you not on one of the teams?


Lol not everyone wants to be on a team


----------



## jesusboots

Fair enough. All for the same purpose.


----------



## Donkey1514

Quote:


> Originally Posted by *CloudX*
> 
> Should this be done on the SMP with just one GPU folding the beta WU?


Even just one beta unit on your GPU can drastically drop the PPD of your CPU.


----------



## jomama22

Quote:


> Originally Posted by *Donkey1514*
> 
> even just one beta unit on your GPU can drastically drop the ppd of your CPU.


I was under the impression that on Core 17, GPUs only use 1-2% of a core. I will try 11 in just a bit.


----------



## Donkey1514

Quote:


> Originally Posted by *jomama22*
> 
> I was under the impression that on core 17, and GPUs are only using 1-2% of a core. I will try 11 in just a bit.


You still need to leave a thread available to the GPUs.


----------



## jomama22

Quote:


> Originally Posted by *Donkey1514*
> 
> still need to leave a thread available to the GPUs


OK cool. Thanks.


----------



## CloudX

Quote:


> Originally Posted by *Donkey1514*
> 
> still need to leave a thread available to the GPUs


Thanks, I'll have to adjust for this then.

About the team thing, I don't think I was ever asked to join a team. I don't usually fold my sig rig and the teams seem to want i7s and fast GPUs. I just love the tournaments, pretty bummed I missed a couple in early 2012 and now I won't have a 2012 badge.. Booo!

Not this year, going to hit them all!

If any teams are recruiting and you think I could be of use, hit me up!


----------



## Caleal

Quote:


> Originally Posted by *jomama22*
> 
> In 7.3.6, the "beta" is changed to "advanced" for whatever reason.
> 
> Edit: nope....I was wrong, didn't realize I had to hit enter to get the "beta" to stick. I just wasted the entire foldathon by thinking it was "advanced" and not "beta"...


Another victim of Kevdog's Law.


----------



## [CyGnus]

I have been away from folding, but I just saw this thread and decided to give it a shot. I am amazed that AMD GPUs finally work decently: my 7870 @ 1200 is doing 27.4k PPD on this P7662.


----------



## jomama22

Well, after some tweaking and bumping of clocks, I have gotten over 200k.

7970 x3 @ 1250/1800
3960x @ 4.7



219028 PPD
The 7970s are running at 60k/58k/57k PPD while the 3960x is pulling a weak 44k.

I have noticed that with each completed WU, the PPD gets better. I believe the 3960x will hit ~60k soon, as I have seen it spike to 57k on different projects (I have been stuck with 7809 twice in a row! grrr).

I want 240k... I will get there... Then -bigadv, here we come!


----------



## Donkey1514

Quote:


> Originally Posted by *jomama22*
> 
> Well after some tweaking and bumping of clocks, i have gotten over 200k.
> 
> 7970 x3 @ 1250/1800
> 3960x @ 4.7
> 
> 
> 
> 219028 PPD
> The 7970s are runing at 60k/58k/57k ppd while the 3960x is pulling a weak 44k.
> 
> I have noticed that with each completed wu, the ppd gets better. I believe the 3960x will hit ~60k soon, as i have seen it spike to 57k on different projects (I have been stuck with 7809 twice in a row! grrr)
> 
> I want 240k...I will get there....Then -bigadv here we come!


Hopefully your chip does 4.8 GHz+ to meet the deadlines for bigadv... It'll also require either native Linux or a VM. The VM will let you keep folding on the 7970s, but I doubt you'll meet the bigadv deadline even at 4.8 GHz. If you go native Linux, you won't be able to fold on your 7970s.

Just food for thought
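The deadline arithmetic behind that advice is easy to sketch. A minimal example, assuming a 100-frame unit; the frame count and the sample TPF/deadline numbers below are hypothetical, not taken from any specific bigadv project:

```python
def makes_deadline(tpf_minutes: float, deadline_days: float, frames: int = 100) -> bool:
    """True if a WU at this time-per-frame finishes before its deadline (no pauses assumed)."""
    wu_days = tpf_minutes * frames / (60 * 24)  # total WU time, converted to days
    return wu_days < deadline_days

# Hypothetical numbers: a 45 min TPF against a 4-day deadline -> 3.125 days, makes it
print(makes_deadline(45, 4))   # True
# At 60 min TPF the same unit needs ~4.17 days and misses
print(makes_deadline(60, 4))   # False
```

Note this ignores the QRB side of the decision: finishing just inside the deadline earns far fewer bonus points than finishing well inside it.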


----------



## jomama22

Quote:


> Originally Posted by *Donkey1514*
> 
> Hopefully your chip does 4.8ghz+ to meet the deadlines for bigadv..... It'll also require either native linux or a vm too. The VM will allow you to keep folding on the 7970s but I doubt you'll meet the deadline for bigadv, even at 4.8ghz. If you decide to go native then you won't be able to fold on your 7970s.
> 
> Just food for thought


I haven't done too much reading on the subject to really get the grasp of it yet, but from what you just told me, I will have to weigh and compare which setup is best.

Thanks for the info. This is a ton of fun to be honest and it has proved to be a great stability test.

Cheers


----------



## jesusboots

Sorry about offtopic.

But you have two 3960 builds, one with 3x 7970s and the other with 2x 6950s?


----------



## jomama22

Quote:


> Originally Posted by *jesusboots*
> 
> Sorry about offtopic.
> 
> But, you have two 3960 builds, one with 3 7970's the other with 2 6950s?


No, I haven't updated that in a year or so. There is a 2600k in there right now.


----------



## jesusboots

I am still impressed.


----------



## GarTheConquer

Whoa! I just started folding yesterday for the first time, and I have 2x 7970s in my main rig. Can anyone tell me how to set up the kind of epic PPD numbers everyone is talking about in this thread? Or maybe someone could TeamViewer it for me March 21?


----------



## [CyGnus]

AMD is working really well with these Core 17 units.

Here is a pic of my 7870 in HFM.


----------



## cam51037

Quote:


> Originally Posted by *GarTheConquer*
> 
> Whoa! I just started folding yesterday for the first time and I have 2x7970s in my main rig. Can anyone tell me how to set up some epic ppd numbers like everyone is talking about in this thread? Or maybe someone could Teamviewer it for me March 21?


I could Team View it for you, fellow Saskatchewonian.


----------



## GarTheConquer

Quote:


> Originally Posted by *cam51037*
> 
> I could Team View it for you, fellow Saskatchewonian.


Haha nice! Should I install the 13.2 beta Catalyst first? I will PM you.

Edit: Oh I guess it's 13.3

Edit2: Hey thanks a lot Cam +rep! I will see once my current WUs are complete when I get home tonight and report back here.


----------



## Tarnix

Oh man, it hurts when you have been folding for two days and you realize just now that your FAH config borked and you were folding as anonymous T_T

I also had to reduce my CPU slot to 2/8 threads; these core 17 units eat 13% of the CPU per GPU x_X I don't want to use more than 50-60% of my CPU since I only have one running rig.


----------



## ASSSETS

Quote:


> Originally Posted by *Tarnix*
> 
> Oh man it hurts when you have been folding for two days and you realize just now that your FAH config borked and that you were folding as anonymous T_T
> 
> I also had to reduce my CPU slots to 2/8, those fah17 cores eats 13% per gpu x_X I don't want to use more than 50-60% of my cpu since I have only one running rig..


What is your AMD Catalyst version?


----------



## Bal3Wolf

Quote:


> Originally Posted by *ASSSETS*
> 
> what is your amd cat version?


He's using an Nvidia card; it looks like that's normal on Core 17 for them.


----------



## Wheezo

Are these units affected by overclocking memory? Or is it like the past, where a memory overclock does nothing to improve TPFs? Anyone know?

(Asking mostly in regard to AMD cards.)


----------



## ASSSETS

I also had this question, but I haven't tested it.


----------



## Krusher33

Quote:


> Originally Posted by *Wheezo*
> 
> Are these units effected by OCing memory? Or is it like the past where memory overclock does nothing to improve TPFs, anyone know?
> 
> (asking mostly in regards to AMD cards)


From personal experience, I think it depends on how high you're clocked. When I had my card at 1050 MHz, raising the memory did have a good impact.

But now that I'm back to stock bios, not so much.

I don't remember the PPD numbers though.


----------



## Wheezo

Awesome, thanks Krusher. I'm going to try a memory OC next WU. My card isn't a very good overclocker as far as 7870s go; I only get to about 1145 core stable. [CyGnus] is getting way more PPD, so I'm trying to figure out if it's my drivers or strictly his superior OC. Still, I'm not complaining about 23k PPD.


----------



## [CyGnus]

I have my card at 1200/1375 with Cat 13.3 beta 3 on F@H 7.2.9 and it's pumping 28k PPD. (I have only the GPU folding, since it gives more PPD than my CPU; I had to choose one to fold 24/7.)


----------



## Wheezo

Quote:


> Originally Posted by *[CyGnus]*
> 
> I have my card at 1200/1375 with cat 13.3 beta3 on F@H 7.2.9 and its pumping 28k PPD (I have only the GPU Folding since it gives more PPD than my CPU i had to choose one to fold 24/7)


Yeah, I have been studying your screen capture. My drivers are 13.2 beta 5. No way I can get the OC out of my 7870 that you can. Might try to update my drivers tonight. I am seeing a lot of dips to 0% usage; I know that is normal every frame, but mine are happening more frequently, biting into my PPD.

Thanks for the info, Cygnus. You are kind of my yardstick for 7662; trying to get closer to your PPD.


----------



## [CyGnus]

Try to update those drivers; maybe it will solve something. My card is at a constant 99% and only goes to 0% when it's sending the WU. Try to up that core; memory is not important, so bring the mem to 1350/1375 and go up with the core (around 1150/1200).

What temps are you getting when folding? My card is 43/44°C at 55% fan speed.


----------



## Wheezo

Quote:


> Originally Posted by *[CyGnus]*
> 
> Try to update those drivers maybe it will solve something. My card is at a constant 99% only goes to 0% when its sending the wu. Try to up that core, memory is not important bring it to 1350/1375 mem and up with the core (around 1150/1200 ).
> 
> What temps are you getting when folding? My card is 43/44ªc at 55% fan speed


Wow, yeah mine is dipping pretty often, even with CPU6. Temps are pretty good, 57C with 100% fan. Clocks are 1145/ 1450 right now. I will definitely update drivers to 13.3 Beta 3s tonight.

Thanks for the help







I'll post if all that improves my TPF, I'm betting the just the driver update will.


----------



## ASSSETS

Quote:


> Originally Posted by *[CyGnus]*
> 
> Try to update those drivers... My card is at a constant 99%, only goes to 0% when it's sending the WU.


I would also like the same stability.


----------



## ASSSETS

We should test it using FAHBench


----------



## ASSSETS

Here is what I got.


----------



## Caleal

I'm having an odd issue.
As I bump the OC up on my GTX 580, I'm not seeing the kind of PPD gains past 1000 MHz that I saw moving up from 975 to 1000 MHz.

At 1050 MHz it isn't failing WUs, and GPU-Z is showing the OC correctly, but it hasn't done much noticeable for my PPD.


----------



## jomama22

Quote:


> Originally Posted by *Caleal*
> 
> I'm having an odd issue.
> As I bump the OC up on my GTX580, I'm not seeing the kind of ppd gains past 1000mhz as I was moving up from 975 to 1000 mhz.
> 
> At 1050mhz it isn't failing WU's, and GPU-Z is showing that the OC correctly, but it hasn't done much noticeable to my PPD.


I have noticed that you won't see the PPD jump immediately, you have to wait for the current core to finish before it will see a boost.

I could be wrong, but this happened on more than one occasion for me.


----------



## Caleal

Quote:


> Originally Posted by *jomama22*
> 
> I have noticed that you won't see the PPD jump immediately, you have to wait for the current core to finish before it will see a boost.
> 
> I could be wrong, but this happened on more than one occasion for me.


This is across multiple WUs. After bumping the OC, I'll typically let it run for a full day before making another change.

This is a dedicated folding card, installed in a rig that only exists to support the card, so folding never gets disrupted on it.

I did switch to the latest drivers a few days ago, because I want to play with the folding benchmark that was posted recently, but I haven't gotten around to running it yet.

My PPD is actually lower at 1050 MHz than it was a few days ago at 1000 MHz, so I may try reverting the drivers to the older version to see if that changes anything.

It is a reference design card, so I'm kind of shocked that it is actually running at 1050 MHz, and that the voltage I'm pumping through it isn't causing it to shut down on OCP.
I modified the BIOS to up the voltage limits, but didn't disable OCP.


----------



## jesusboots

I've averaged 52k PPD for almost a week now. This is at 935 after I had to restart (failed, then refolded) the two units I show as failed.

I'm not certain what kind of gains you are seeing, but I've had two 48k days, then a 57k day. Back and forth.


It's hard to see what everyone else's gains per unit have been with no updates page.

Edit: and the drivers. I'll check them, but they haven't been updated in a while. Drivers are 304.79.


----------



## GarTheConquer

I'm still new to this, but here's an update. I installed Catalyst 13.3 and Cam adjusted my GPU folding. Does everything look good?
What's with HFM?









http://imageshack.us/photo/my-images/692/mfolding2.jpg/


----------



## [CyGnus]

In HFM, go to Edit -> Preferences -> Options and set Calculate PPD to Effective Rate. Then, in the Web Settings tab, paste http://fah-web.stanford.edu/psummaryC.html into Address, click OK, and finally go to Tools -> Download Projects and restart HFM.net.
Make sure you have the right flag set in F@H (client-type/beta).


----------



## Hackcremo

I am getting a 3.28-minute TPF on P7662 with an overclocked 7870 XT @ 1150/1500 MHz. Nice PPD and less heat.


----------



## GarTheConquer

Quote:


> Originally Posted by *[CyGnus]*
> 
> In HFM go to Edit -> Preferences-> Options and set the Calculate PPD to-> Effective Rate then in Web Settings Tab Paste this http://fah-web.stanford.edu/psummaryC.html in Adress click OK and finally go to Tools -> Download Projects and restart HFM.net.
> Make sure you have the right flag set in F@H (client-type/beta)


Awesome, thanks!!!


----------



## Renegadesl1

Quote:


> Originally Posted by *Caleal*
> 
> Quote:
> 
> 
> 
> Originally Posted by *jomama22*
> 
> I have noticed that you won't see the PPD jump immediately, you have to wait for the current core to finish before it will see a boost.
> 
> I could be wrong, but this happened on more than one occasion for me.
> 
> 
> 
> This is across multiple WUs. After bumping the OC, I'll typically let it run for a full day before making another change.
> 
> This is a dedicated folding card, installed in a rig that only exists to support the card, so folding never gets disrupted on it.
> 
> I did switch to the latest drivers a few days ago, because I want to play with the folding benchmark that was posted recently, but I haven't gotten around to running it yet.
> 
> My PPD is actually lower at 1050mhz than it was a few days ago at 1000mhz, so I may try reverting the drivers to the older version to see if that changes anything..
> 
> It is a reference design card, so I'm actually kind of shocked that it is actually running at 1050 mhz, and that the voltage I'm pumping through it isn't causing it to shut down on the OCP.
> I modified the bios to up the voltage limits, but didn't disable OCP.
Click to expand...

My 550s would do the same thing. After the 3xx drivers, they would lock the clock at 1000 MHz. I would set the clock at 1025 in Afterburner and GPU-Z would show it at that clock too, but once I closed GPU-Z and reopened it, it would be back to reporting 999.8 MHz.

Hope this helps,
-Ren


----------



## d3cryptncompute

What should one expect an AMD 6990 desktop GPU to yield? I'm getting ~14k PPD on Catalyst 13.1.


----------



## Caleal

Quote:


> Originally Posted by *Renegadesl1*
> 
> My 550's would do that same stuff. after the 3XX of drivers it would lock the clock at 1000 mhz. I would set the clock at 1025 in afterburner and gpu-z would see it at that clock also, but once i closed gpu-z and reopened it would be back to reporting 999.8 mhz.
> 
> Hope this helps,
> -Ren


Thanks, that was it.

The 3xx drivers seem to cap the GPU clock speed at 999.8 MHz.
I'll have to do some checking into a way around that...

Unfortunately, the core_17 WUs seem to instantly fail with any of the pre-3xx drivers I've tried.


----------



## spice003

So I'm getting about 21k PPD with this core on my 560 Ti. Is it supposed to go up? Because I can get 25k PPD with a stock core.


----------



## GarTheConquer

Quote:


> Originally Posted by *d3cryptncompute*
> 
> What should one expect from the AMD 6990 Desktop GPU to yield? I'm getting ~14k PPD on Catalyst 13.1.


Dude! Install Catalyst 13.3, and for your GPU slot under "Configure" -> "Slots", scroll to the bottom and under "Extra Slot Options" add Name "client-type" with Value "beta".

It put my main rig @ 100k PPD.
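Those GUI steps end up in FAHClient's config.xml as a `client-type` option on the GPU slot. A rough sketch of what the resulting file can look like (the user name and team number are placeholders, and your slot ids may differ):

```xml
<config>
  <!-- identity: placeholder values, not real credentials -->
  <user value="YourName"/>
  <team value="0"/>

  <!-- CPU slot: leave a thread free for the GPUs -->
  <slot id="0" type="CPU">
    <cpus value="7"/>
  </slot>

  <!-- GPU slot: client-type/beta pulls the Core 17 'advanced' work units -->
  <slot id="1" type="GPU">
    <client-type value="beta"/>
  </slot>
</config>
```

FAHClient reads config.xml at startup, so hand edits need a client restart; the Extra Slot Options dialog should write the same element for you.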


----------



## [CyGnus]

How much PPD can I expect from my 3570K doing SMP while folding with the GPU? 20k?


----------



## cam51037

Quote:


> Originally Posted by *[CyGnus]*
> 
> How much PPD can i expect from my 3570K doing SMP while folding with the GPU? 20k ?


Meh, my 3570K is folding on 2 cores alongside my GTX 670, and it's only getting around 8k PPD @ 4.4 GHz.


----------



## [CyGnus]

OK, then I am doing alright; mine is set to 4 and it's giving 17-20k, and the 7870 is pumping another 26-28k.


----------



## cam51037

Quote:


> Originally Posted by *[CyGnus]*
> 
> Ok then i am doing alright mine is set to 4 and its giving 17-20K the 7870 is pumping another 26-28k


Not sure how all of you are getting such high PPD on 78xx/79xx cards, when my 7850 clocked to 7870 speeds is only getting 12k PPD at most.


----------



## [CyGnus]

Have you set the client-type/beta flag? Make sure you are on a Core 17 project (7662); after folding a while (30 min) you should see the numbers, though I don't know what a 7850 does. My 7870 is at 1200 core.


----------



## WLL77

OK, so I might be an anomaly, but with my 7870 at 1175/1350, beta 13.2 v4, and an i5 2500k clocked at 4.2 using all 4 cores, I have been averaging about 40k PPD.
The 7870 averages around 24k.
The i5 2500k averages 16k.
Again, this is just my singular experience on the core_17 units.


----------



## d3cryptncompute

Updated to 13.3; total PPD went up to ~24k. This is far from the 40k and 50k figures, and I'm not overclocking.


----------



## arvidab

Quote:


> Originally Posted by *d3cryptncompute*
> 
> Updated to 13.3, total PPD updated to ~24k PPD. This is far from 40k and 50k and I'm not overclocking.


With a 6990, that's where you'd expect to be: ~12k per core. Some have gotten as high as 15k PPD on a 6970, but those are also OC'd a fair bit, and IIRC the 6990's cores have a lower stock clock than the 6970. The 40-50k numbers are quoted for the 79x0; the 5000 and 6000 series have considerably less computing power.


----------



## [CyGnus]

Quote:


> Originally Posted by *WLL77*
> 
> Ok so I might be an anomoly, but with my 7870 at 1175/1350, bete 13.2 v4 and i5 2500k clocked at 4.2 using all 4 cores, have been averaging bout 40k ppd.
> 7870 averages around 24k
> i5 2500k averages 16k
> Again this just my singular experience on the core _17 units.


That sounds about right


----------



## jomama22

Well, I just pushed my best 7970 to 1320/1800 and am now getting ~63000 PPD. It is consistently ~3000 PPD higher than the 1250/1800 OC of the other two, so a 5% gain for a ~6% overclock.

Can't wait to put this thing in TC.


----------



## [CyGnus]

63k Brutal PPD


----------



## ASSSETS

Quote:


> Originally Posted by *jomama22*
> 
> Well I just pushed my best 7970 to 1320/1800 and am now getting ~63000 PPD. It is consistently ~3000 PPD higher than the 1250/1800 oc of the other two. So a 5% gain for ~6% overclock.
> 
> Cant wait to put this thing in tc.


NICE NUMBERS!


----------



## cam51037

Quote:


> Originally Posted by *[CyGnus]*
> 
> Have you set the flag to client-type / beta ? Make sure you are on Core 17 Project 7662 after a while folding (30min) you should see the numbers though i dont know what a 7850 does my 7870 is at 1200core


Yeah, after folding on the beta projects for 3+ hours, it only estimates around 12k PPD.


----------



## Wheezo

Quote:


> Originally Posted by *cam51037*
> 
> Yeah, after folding on the beta projects for 3+ hours it only estimates around 12k PPD.


Check your GPU usage (MSI Afterburner graph, etc.) and make sure it isn't dipping and is using all of the GPU. You should be getting more than 12k.

In regards to yesterday: I couldn't increase my PPD much (23k-24k) with the update to 13.3 beta 3 and a bit more OCing. I still get massive drops once in a while, but it's still pretty sweet PPD, so I'll live with it. I do have a GT 430 powering my secondary monitor and also folding, so maybe that is the cause.


----------



## CloudX

Quote:


> Originally Posted by *Wheezo*
> 
> Check your GPU usage (MSI Afterburner graph etc) and make sure it isn't dipping and is using all of the GPU. You should be getting more than 12k.
> 
> In regards to yesterday, I couldn't increase my PPD much (23k - 24k) with an update to 13.3 Beta 3 and a bit more OCing. I still get massive drops once and a while, but it's still pretty sweet PPD so I'll live with it. I do have a GT430 powering my secondary monitor and also folding, so maybe that is the cause.


24k total or just on the 7870?


----------



## Wheezo

Just the 7870 gives me 23k-24k (1150 core / 1350 memory); the old 920 is at stock speed right now, needs new TIM, and I am waiting on a new cooler, so it doesn't pull much.


----------



## CloudX

Quote:


> Originally Posted by *Wheezo*
> 
> Just the 7870 gives me 23k - 24k (1150 core / 1350 memory), the old 920 is at stock speed right now, needs new TIM and I am waiting on a new cooler so it doesn't pull much.


That is really good. I only get 28k with my 670 on the beta units.


----------



## [CyGnus]

This Core 17 is really good for AMD cards; I hope from now on they will all be like this.

Here is a pic of my PPD numbers with SMP + GPU.


----------



## GarTheConquer

I'm still new to this (not sure if this is the right place to ask) but what is the difference between SMP and CPU for my processor slot?
Is this picture all good?
What about the dips in the Afterburner graph?

http://imageshack.us/photo/my-images/26/agamemnonfolding5.jpg/


----------



## jomama22

Quote:


> Originally Posted by *GarTheConquer*
> 
> I'm still new to this (not sure if this is the right place to ask) but what is the difference between SMP and CPU for my processor slot?
> Is this picture all good?
> What about the dips in the Afterburner graph?
> 
> http://imageshack.us/photo/my-images/26/agamemnonfolding5.jpg/


If you are using 7.3.6, then SMP is done automatically; you simply need to adjust the core count used in the Slots tab under CPU. If you are on Core 17 with an AMD 7xxx series, leave one core open for all of your GPUs (not one core per GPU, but one for all).

I have noticed dips on each GPU down to ~30%, then right back to 98-99%. It hasn't caused any issues. I believe the OP says that every frame will cause the GPUs to drop for just a few seconds.


----------



## cam51037

Quote:


> Originally Posted by *GarTheConquer*
> 
> I'm still new to this (not sure if this is the right place to ask) but what is the difference between SMP and CPU for my processor slot?
> Is this picture all good?
> What about the dips in the Afterburner graph?
> 
> http://imageshack.us/photo/my-images/26/agamemnonfolding5.jpg/


SMP and CPU are the same thing.

If SMP=5, 5 of your CPU cores are being folded on.

That Afterburner picture doesn't look right, though; it should be a stable 100% usage. Not sure what the problem is there.


----------



## GarTheConquer

Quote:


> Originally Posted by *cam51037*
> 
> SMP and CPU are the same thing.
> 
> If SMP=5, 5 of your CPU cores are being folded on.
> 
> That Afterburner picture doesn't look right though. Should be a stable 100% usage... Not sure what the problem is there.


Quote:


> Originally Posted by *jomama22*
> 
> If you are using 7.3.6 then SMP is done automatically. You simply just need to adjust the core count used in the slots tab under CPU. If you are on core17, leave one core open for all of your GPUs (not one core per GPU, but 1 for all) if on an amd 7xxx series.
> 
> I have noticed dips on each GPU down to ~30% then right back to 99-98%. Hasn't caused any issues. I believe the op says that every frame will cause the GPUs to drop for just a few seconds.


Thanks guys, sounds good, I won't worry about the dips then.

I will change my open cores back to -1 as well.


----------



## d3cryptncompute

Quote:


> Originally Posted by *GarTheConquer*
> 
> Thanks guys, sounds good, I won't worry about the dips then.
> 
> I will change my open cores back to -1 as well.


I found that User Account Control popups, putting the display to sleep, and logging in all cause a dip in GPU load.


----------



## scubadiver59

Still burning in my new 7950s and 8350, and I haven't gotten to the betas yet; however, is 65-85% load normal for the 7950s in stock form? Does the "% usage" look normal? It's not as bad as others I've seen... I'm just curious.

Running a P11292 & P11293.



Edit: Forgot to mention that I'm running 13.3 beta, Win7 Enterprise, 16GB, R7950 OC 3GB Twin Frozr IIIs, F@H 7.3.6.


----------



## jomama22

Quote:


> Originally Posted by *scubadiver59*
> 
> Still burning in my new 7950s and 8350, and I haven't gotten to the Beta's yet; however, is 65-85% load normal for the 7950s in stock form? Does the "% usage" look normal? It's not as bad as others I've seen...I'm just curious.
> 
> Running a P11292 & P11293.


My 7970s sat at 80% when on those projects (Core 16); Core 17 will give you 98-99% usage.


----------



## scubadiver59

Everything is hunky-dory in Foldsville this morning!

I finished the last folds, two posts above, and then set the system off again this morning...and what a difference a setting makes: I think I got a total of 14k PPD yesterday folding the CPU and two 7950s, but now I'm getting an Est. PPD of 78.7k on stock settings (this is a new machine and I haven't OC'd yet)! Woohoo!!







Now running at 98-99% usage (it was 65-80% before), and temperatures are up about 5-10°C on each card.

The only question I have left is this: why did my CPU pull down a P7610 when there are no flags set to reference "beta"? New time for the CPU (0xa4) is 14 hr with a 6420 PPD estimate, practically double the time it took yesterday.

I'm folding on 8 cores; I only folded on 6 yesterday due to the 7950 workloads dropping dramatically if I didn't free up two cores. Today I'm trying all 8 (-1 setting), and 7 of the cores are running at around 90%, but one is floating around 50-75%. Same info as before: 13.3 beta, Win7 Enterprise, 16GB, R7950 OC 3GB Twin Frozr IIIs, F@H 7.3.6.

CPU: P 760 / TPF 8m 42s / Est. PPD 6375 / Est. Credit 3855
G0: P7662 / TPF 2m 54s / Est. PPD 35759 / Est. Credit 7222
G1: P7662 / TPF 2m 52s / Est. PPD 36543 / Est. Credit 7275

Edit: HFM isn't reporting any PPD for the two graphics cards... but I gather this is normal.
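For reference, the Est. PPD figures above follow directly from TPF and per-WU credit: points per day is the per-unit credit times the units finished per day. A quick sketch using G0's numbers, assuming the usual 100 frames per WU:

```python
def estimate_ppd(tpf_seconds: float, credit: float, frames: int = 100) -> float:
    """Estimated points per day: per-WU credit times WUs completed per day."""
    wu_seconds = tpf_seconds * frames      # time to finish one work unit
    return credit * 86400.0 / wu_seconds   # 86400 seconds in a day

# G0 above: TPF 2m 54s = 174 s, est. credit 7222
print(round(estimate_ppd(174, 7222)))  # ~35.9k, in line with the 35759 HFM shows
```

For QRB projects the bonus is already rolled into the estimated credit, which is why PPD climbs faster than linearly as TPF falls.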


----------



## [CyGnus]

*scubadiver59*: in HFM, go to Edit -> Preferences -> Options and set Calculate PPD to Effective Rate. Then, in the Web Settings tab, paste http://fah-web.stanford.edu/psummaryC.html into Address, click OK, and finally go to Tools -> Download Projects and restart HFM.net.
Make sure you have the right flag set in F@H (client-type/beta).


----------



## WLL77

Quote:


> Originally Posted by *[CyGnus]*
> 
> Make sure you have the right flag set in F@H (client-type/beta)


Also make sure the flag is set for your GPUs only, via the "Slots" tab.


----------



## scubadiver59

Quote:


> Originally Posted by *WLL77*
> 
> Also make sure the flag is set for your gpu's only.
> via the "slots" tab.


Trust me, there are no flags set for the CPU...I made sure of that, so don't ask me why the 7xxx downloaded for the CPU.

Quote:


> Originally Posted by *[CyGnus]*
> 
> *scubadiver59* in HFM go to Edit -> Preferences-> Options and set the Calculate PPD to-> Effective Rate then in Web Settings Tab Paste this http://fah-web.stanford.edu/psummaryC.html in Adress click OK and finally go to Tools -> Download Projects and restart HFM.net.
> Make sure you have the right flag set in F@H (client-type/beta)


Already had the correct web settings from a previous posting in this thread, but I did have to change the calculations to "effective rate". Thanks, it's reporting correctly now!


----------



## [CyGnus]




----------



## Donkey1514

Quote:


> Originally Posted by *scubadiver59*
> 
> Trust me, there are no flags set for the CPU...I made sure of that, so don't ask me why the 7xxx downloaded for the CPU.
> Already had the correct web settings from a previous posting in this thread, but I did have to change the calculations to "effective rate". Thanks, it's reporting correctly now!


you'll want the bigbeta flag for your 4P


----------



## scubadiver59

Quote:


> Originally Posted by *Donkey1514*
> 
> you'll want the bigbeta flag for your 4P


Ahh...okay. I'll put that in tomorrow night.


----------



## jesusboots

Quote:


> Originally Posted by *scubadiver59*
> 
> Trust me, there are no flags set for the CPU...I made sure of that, so don't ask me why the 7xxx downloaded for the CPU.


This happened to 4thkor on my team, IIRC, on a 580, but it may have been a 560. He deleted the CPU unit and all was well.


----------



## scubadiver59

Quote:


> Originally Posted by *jesusboots*
> 
> This happened to 4thkor on my team iirc on a 580, but it may have been a 560. He deleted the cpu unit an all was well.


Every bit counts, but it is annoying. Is there a flag I can put in there, something like "no beta"?


----------



## jesusboots

Not certain, you can ask him and he may be able to help.


----------



## Asustweaker

I didn't read every post, so I'm just gonna ask.

Am I supposed to be seeing 100% of one CPU thread used for the GPU core???

Because this is what I'm working with so far. I had to run my SMP at SMP:6.


----------



## arvidab

Doh...


----------



## arvidab

You're running Nvidia cards; that's how they behave on _17. Their OpenCL driver is to blame, AFAIK.


----------



## PR-Imagery

Can you choose between CUDA and OpenCL WUs on Core 17?


----------



## Donkey1514

Quote:


> Originally Posted by *PR-Imagery*
> 
> Can you choose between CUDA and OpenCL WUs on 17?


nope, just OpenCL.


----------



## PR-Imagery

that's silly.


----------



## Donkey1514

Quote:


> Originally Posted by *PR-Imagery*
> 
> that's silly.


From my understanding it's open source vs. CUDA being closed source, which makes it easier to code... but I may be completely wrong.


----------



## PR-Imagery

I'd rather have the best performance possible








If power wasn't an issue, I'd rather run all my capable hardware at its max without having to sacrifice anything.


----------



## Asustweaker

So the OpenCL driver uses an entire core? Damn it. Anyway, the GPUs ended up leveling off 3k PPD lower than the 802x WUs, but the temps were almost 20% lower. I usually see mid to high 50s; these stayed around 45C. Only plus side I can see for the Nvidia 480s. From what I've read, the 480 was the only 4-series card that would run the betas with good results. Well, back to the Fermi-"optimized" 80xx's for me.


----------



## LostKauz

holy crap 350k ppd!


----------



## ASSSETS

It's a glitch; give it a few minutes and it will go back to normal.


----------



## Donkey1514

Quote:


> Originally Posted by *LostKauz*
> 
> holy crap 350k ppd!


----------



## LostKauz

I know, it was a joke. Geez guys, I typically get 60k.


----------



## Caleal

Quote:


> Originally Posted by *Asustweaker*
> 
> Afaik, from what i've read, the 480 was the only 4 series card that would run the beta's with good results. well back to the fermi "optimized" 80xx's for me.


My 470 gets a little over 34k ppd on the core_17 beta WUs, which is better than it gets on anything else.
It's running at 850mhz though.


----------



## labnjab

That's really good for a 470; my Classified 570s at 875 get 38k PPD each on Core 17.


----------



## Caleal

Quote:


> Originally Posted by *labnjab*
> 
> That's really good for a 470; my Classified 570s at 875 get 38k PPD each on Core 17.


It's a real trooper too; other than during power outages, it has been folding non-stop for 2 straight years.
It's water cooled though, so never gets much over 40ºC.


----------



## Bal3Wolf

Quote:


> Originally Posted by *LostKauz*
> 
> I know, it was a joke. Geez guys, I typically get 60k.


lol, I've seen mine glitch and show like 3 bil before, lol. Can only wish for that much in a day.


----------



## scubadiver59

Okay...I'm testing out my b-in-law's GTX580 lightnings before I send them off to him (b-day present)...here's the question: to Beta or not to Beta?

It's just a test, but people have been tossing conflicting data back and forth. Once I'm done, they go back in the boxes and then off in the mail.

Then I'm back to my 580 TF II and 560Ti...sigh...


----------



## jomama22

Quote:


> Originally Posted by *scubadiver59*
> 
> Okay...I'm testing out my b-in-law's GTX580 lightnings before I send them off to him (b-day present)...here's the question: to Beta or not to Beta?
> 
> It's just a test, but people have been tossing conflicting data back and forth. Once I'm done, they go back in the boxes and then off in the mail.
> 
> Then I'm back to my 580 TF II and 560Ti...sigh...


Man, I should have married your sister! Lol


----------



## scubadiver59

Quote:


> Originally Posted by *jomama22*
> 
> Man, I should have married your sister! Lol


Dude, I wouldn't wish my sister on my worst enemy!!

It's a long story, but she is not worthy of anyone after the way she treated my b-in-law...that's why he gets a new computer consisting of the following:

Corsair Carbine 500R
AMD FX 8320
Gigabyte 970A-UD3 mobo
8GB Corsair Vengeance 1866 DDR3
2x MSI GTX580 Lightning Twin Frozr III's
Samsung 830 SSD
LG BH16 Blu-Ray Player/Burner

...and she gets squat.


----------



## cam51037

Quote:


> Originally Posted by *scubadiver59*
> 
> Okay...I'm testing out my b-in-law's GTX580 lightnings before I send them off to him (b-day present)...here's the question: to Beta or not to Beta?
> 
> It's just a test, but people have been tossing conflicting data back and forth. Once I'm done, they go back in the boxes and then off in the mail.
> 
> Then I'm back to my 580 TF II and 560Ti...sigh...


1st of all, send him the 560 Ti as a b-day present, thank me later.








2nd of all, the beta units on a 580 are known to get them around 50k PPD I think, so go beta!


----------



## scubadiver59

Anyway, I decided to run the betas on both, and this is what's projected (2600K 6T, 2x GTX580s):

GPU1, P7662, Est PPD 40,849, TPF 2:39, ETA 4:19, @ 65* (air) (70.10.17.00.06 BIOS)
GPU2, P7662, Est PPD 17,156, TPF 4:45, ETA 7:44, @ 69* (air) (70.10.20.00.00 BIOS)
CPU, P7809, Est PPD 20,904, TPF 8:40, ETA 14:18, @63* (air)

20.5* ambient

Can't say I like the PPD disparity between the two 580s. Any reason for this?

Edit: Everything is stock.
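For anyone curious where those Est PPD numbers come from: HFM applies the quick-return bonus formula to the observed TPF. A rough Python sketch of that calculation follows; the base credit, k-factor, and timeout values below are made-up illustration numbers, not the real project 7662 constants:

```python
import math

def estimate_ppd(tpf_seconds, base_credit, k_factor, timeout_days, frames=100):
    """Rough PPD estimate from the published quick-return bonus formula:
    final_credit = base_credit * max(1, sqrt(k * timeout / wu_time))."""
    wu_time_days = tpf_seconds * frames / 86400.0
    bonus = max(1.0, math.sqrt(k_factor * timeout_days / wu_time_days))
    return base_credit * bonus / wu_time_days  # credits per day

# TPFs from the two 580s above (2:39 vs 4:45); credit constants are hypothetical.
fast = estimate_ppd(tpf_seconds=159, base_credit=7000, k_factor=2.0, timeout_days=3)
slow = estimate_ppd(tpf_seconds=285, base_credit=7000, k_factor=2.0, timeout_days=3)
```

Because the bonus scales with the square root of how quickly the WU is returned, PPD falls off faster than linearly as TPF rises, which is why a lagging card's projection looks so much worse before it settles in.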


----------



## scubadiver59

Quote:


> Originally Posted by *cam51037*
> 
> 1st of all, send him the 560 Ti as a b-day present, thank me later.
> 
> 
> 
> 
> 
> 
> 
> 
> 2nd of all, the beta units on a 580 are known to get them around 50k PPD I think, so go beta!


I have another two 580 Lightning TF III's (and a 560Ti Physx) of my own that I'm installing over the weekend to replace my two 560 SuperClocks.

No, he can have them.


----------



## cam51037

Try running only 4 threads on the CPU, may help the one GPU's PPD.


----------



## scubadiver59

Quote:


> Originally Posted by *scubadiver59*
> 
> Anyway, I decided to run the betas on both, and this is what's projected (2600K 6T, 2x GTX580s):
> 
> GPU1, P7662, Est PPD 40,849, TPF 2:39, ETA 4:19, @ 65* (air) (70.10.17.00.06 BIOS)
> GPU2, P7662, Est PPD 17,156, TPF 4:45, ETA 7:44, @ 69* (air) (70.10.20.00.00 BIOS)
> CPU, P7809, Est PPD 20,904, TPF 8:40, ETA 14:18, @63* (air)
> 
> 20.5* ambient
> 
> Can't say I like the PPD disparity between the two 580s. Any reason for this?
> 
> Edit: Everything is stock.


Hate to quote myself...sigh...but everything has settled in

New numbers:

GPU1, P7662, Est PPD 38,857, TPF 2:45, @ 66* (air) (70.10.17.00.06 BIOS)
GPU2, P7662, Est PPD 39,394, TPF 2:43, @ 66* (air) (70.10.20.00.00 BIOS)
CPU, P7809, Est PPD 20,871, TPF 8:40, ETA 14:18, @63* (air)

20.5* ambient


----------



## scubadiver59

Quote:


> Originally Posted by *cam51037*
> 
> Try running only 4 threads on the CPU, may help the one GPU's PPD.


I'll try the four and see what occurs...

Edit: both 580s jumped up into the low 40s... a mediocre jump, but a jump nonetheless.


----------



## sub50hz

Are you guys finding that memory clocks bump PPD in any worthwhile manner on 7970s? I'm pulling ~52k PPD at 1250/1375, but I've had stable results with the memory clocked at 1500 -- never tried any higher. I'm willing to juice the core voltage higher to get closer to 1400 if need be, and I'll add SMP back into the mix after the next few WUs. This was a great guide, many thanks to the OP.

P.S. Sleeping with earplugs helps if you're running the reference cooler.


----------



## Donkey1514

Quote:


> Originally Posted by *sub50hz*
> 
> Are you guys finding that memory clocks bump PPD in any worthwhile manner on 7970s? I'm pulling ~52k PPD at 1250/1375, but I've had stable results with the memory clocked at 1500 -- never tried any higher. I'm willing to juice the core voltage higher to get closer to 1400 if need be, and I'll add SMP back into the mix after the next few WUs. This was a great guide, many thanks to the OP.
> 
> P.S. Sleeping with earplugs helps if you're running the reference cooler.


What drivers are you running? I saw a 500 ppd bump from +75mhz memory increase


----------



## 47 Knucklehead

Quote:


> Originally Posted by *mmonnin*
> 
> 5/6 series AMD GPUs still suck for folding


This. My 6970's only pull about 1/2 as much as my GTX 580's ... even with the core 17 WU's.

I wish they would bring back the 8057 WU for nVidia. I miss the 100K PPD that it gave.


----------



## sub50hz

Quote:


> Originally Posted by *Donkey1514*
> 
> What drivers are you running? I saw a 500 ppd bump from +75mhz memory increase


13.2 beta 7, IIRC.


----------



## Donkey1514

Quote:


> Originally Posted by *47 Knucklehead*
> 
> This. My 6970's only pull about 1/2 as much as my GTX 580's ... even with the core 17 WU's.


still better than before


----------



## tictoc

From what I have seen with my 7970 memory clocks do have a small effect on PPD.

I down-clocked my memory from 1375 to 1050, to reduce heat, and my points for that work unit were 200 less than my average of around 8020/WU. I have been running the beta since release, so I have a pretty good pool of data to look at for PPD.

You won't see a big gain from the memory OC, but if temps are reasonable then you might as well grab the extra points.


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Donkey1514*
> 
> still better than before


True. Now they are actually worth leaving those computers on and Folding. Before I didn't even bother wasting the electricity.


----------



## sub50hz

Quote:


> Originally Posted by *tictoc*
> 
> From what I have seen with my 7970 memory clocks do have a small effect on PPD.
> 
> I down-clocked my memory from 1375 to 1050, to reduce heat, and my points for that work unit were 200 less than my average of around 8020/WU. I have been running the beta since release, so I have a pretty good pool of data to look at for PPD.
> 
> You won't see a big gain from the memory OC, but if temps are reasonable then you might as well grab the extra points.


I'm sitting around 64C at 1250 core, and the fan noise is "bearable" -- I usually sleep with a fan or two on anyway, so it's not bothersome (around 60%). I'm not sure I want to push it into the 70% range as it does get *quite* noisy. I suppose I'll just make up those extra points by running SMP in addition to the core 17s. I might pick up a 7950 on sale at Microcenter for the hell of it, but my mobo layout means I would have to ditch my Revodrive in favor of it, which I'm not exactly sure I want to do.


----------



## jomama22

Quote:


> Originally Posted by *sub50hz*
> 
> I'm sitting around 64C at 1250 core, and the fan noise is "bearable" -- I usually sleep with a fan or two on anyway, so it's not bothersome (around 60%). I'm not sure I want to push it into the 70% range as it does get *quite* noisy. I suppose I'll just make up those extra points by running SMP in addition to the core 17s. I might pick up a 7950 on sale at Microcenter for the hell of it, but my mobo layout means I would have to ditch my Revodrive in favor of it, which I'm not exactly sure I want to do.


Memory may have some effect but I will have to test. When setting my 7970s to 1250/1800 I get ~60k PPD, a bit higher than your 52k. Are you leaving one core open on the cpu for the GPu?

I also get ~65k @ 1330/1800, so I know the core is doing most of the work.


----------



## Caleal

Something I've been curious about.
Since the core_17 WUs use OpenCL, and thus far more system resources than the normal Nvidia WUs, does the platform the video card is installed in make any folding performance difference?


----------



## sub50hz

Quote:


> Originally Posted by *jomama22*
> 
> Are you leaving one core open on the cpu for the GPu?


I haven't noticed any stray core activity, so I was prepared to simply run smp -4. I'm not sure smp -3 would even be worth it.


----------



## giganews35

Anybody else dropping bad WUs? That is 2 in the last week for me. I was stable for months at this overclock. Not sure if throughout the day as it got warmer my overclock got unstable or I just picked up bad units. I downclocked to 985Mhz and 1.137v just to be sure for now. Temps haven't passed 50-51C to my knowledge and usually stay around 47C.


----------



## cam51037

Quote:


> Originally Posted by *giganews35*
> 
> Anybody else dropping bad WUs? That is 2 in the last week for me. I was stable for months at this overclock. Not sure if throughout the day as it got warmer my overclock got unstable or I just picked up bad units. I downclocked to 985Mhz and 1.137v just to be sure for now. Temps haven't passed 50-51C to my knowledge and usually stay around 47C.


Bad units as in ones that give you warnings while folding, or bad PPD-wise?


----------



## giganews35

Quote:


> Originally Posted by *cam51037*
> 
> Bad units as in ones that give you warnings while folding, or bad PPD-wise?


WARNING:WU03:FS01:FahCore returned: BAD_WORK_UNIT (114 = 0x72)

Usually due to overclock... but sometimes a bad unit sneaks through, or so I've seen other people report.
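If you want to keep count of these failures across a long log, the warning line is easy to pick out programmatically. A minimal sketch (the log format is copied from the warning above; everything else is up to you):

```python
import re

# Matches the failure line FAHClient writes, e.g.
# WARNING:WU03:FS01:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
BAD_WU = re.compile(r"WARNING:WU\d+:FS\d+:FahCore returned: BAD_WORK_UNIT")

def count_failed_wus(log_text):
    """Count BAD_WORK_UNIT returns in a chunk of FAHClient log text."""
    return sum(1 for line in log_text.splitlines() if BAD_WU.search(line))
```

Run it over the contents of your client log to see how many WUs an overclock has cost you over time.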


----------



## Krusher33

Quote:


> Originally Posted by *giganews35*
> 
> Anybody else dropping bad WUs? That is 2 in the last week for me. I was stable for months at this overclock. Not sure if throughout the day as it got warmer my overclock got unstable or I just picked up bad units. I downclocked to 985Mhz and 1.137v just to be sure for now. Temps haven't passed 50-51C to my knowledge and usually stay around 47C.


Yeah, I had that issue a couple of weeks ago. My usual overclock that I ran with Core 16 started failing back-to-back WUs, so I dropped it down some and was a bit more successful. After a while I decided to revert my BIOS back to stock and use a much lower-voltage OC so I can have it ready for sale.


----------



## Caleal

Quote:


> Originally Posted by *giganews35*
> 
> WARNING:WU03:FS01:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
> 
> Usually due to overclock.. but sometimes a bad unit sneaks through or so I've seen other people report.


Bumping the voltage up solved that problem for me.


----------



## decali

Thought you'd all appreciate this piece of news: http://folding.typepad.com/news/2013/03/sneak-peak-at-openmm-51-about-2x-increase-in-ppd-for-gpu-core-17.html


----------



## jomama22

Quote:


> Originally Posted by *decali*
> 
> Thought you'd all appreciate this piece of news: http://folding.typepad.com/news/2013/03/sneak-peak-at-openmm-51-about-2x-increase-in-ppd-for-gpu-core-17.html


If I can get 130k+ ppd on my top 7970....

Mother of god.jpg


----------



## GarTheConquer

Quote:


> Originally Posted by *jomama22*
> 
> If I can get 130k+ ppd on my top 7970....
> 
> Mother of god.jpg


I agree. I just started folding a week and a half ago. This is crazy! I think it calls for...


----------



## Evil Genius Jr

I'm having some problems folding with my Radeon 5750 (probably worthless, I know, but I have it so why not...). Running Win 8 x64, unmodified 13.1 drivers, v7.2.9 client. I have the options on the first page entered.

The work units fail like so:
Quote:


> 12:43:47:WU01:FS01:0x17:Project: 7662 (Run 7, Clone 19, Gen 52)
> 12:43:47:WU01:FS01:0x17:Unit: 0x0000004bff3d483551391624287aabf9
> 12:43:47:WU01:FS01:0x17:CPU: 0x00000000000000000000000000000000
> 12:43:47:WU01:FS01:0x17:Machine: 1
> 12:43:47:WU01:FS01:0x17:Reading tar file state.xml
> 12:43:47:WU01:FS01:0x17:Reading tar file system.xml
> 12:43:47:WU01:FS01:0x17:Reading tar file integrator.xml
> 12:43:47:WU01:FS01:0x17:Reading tar file core.xml
> 12:43:47:WU01:FS01:0x17:Digital signatures verified
> 12:43:49:WU01:FS01:0x17:ERROR:exception: Bad platformId size.
> 12:43:49:WU01:FS01:0x17:Saving result file logfile_01.txt
> 12:43:49:WU01:FS01:0x17:Saving result file log.txt
> 12:43:49:WU01:FS01:0x17:Folding@home Core Shutdown: BAD_WORK_UNIT
> 12:43:49:WARNING:WU01:FS01:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
> 12:43:49:WU01:FS01:Sending unit results: id:01 state:SEND error:FAULTY project:7662 run:7 clone:19 gen:52 core:0x17 unit:0x0000004bff3d483551391624287aabf9
> 12:43:49:WU01:FS01:Uploading 2.20KiB to 171.67.108.149
> 12:43:49:WU01:FS01:Connecting to 171.67.108.149:8080
> 12:43:49:WU01:FS01:Upload complete
> 12:43:49:WU01:FS01:Server responded WORK_ACK (400)
> 12:43:49:WU01:FS01:Cleaning up


----------



## Krusher33

Voltage, it shall needz MOAR!


----------



## joker927

I am folding on a bunch of GPUs. All Nvidia. GTX 460-670. Should I switch to the 17 beta core?


----------



## cam51037

Quote:


> Originally Posted by *joker927*
> 
> I am folding on a bunch of GPUs. All Nvidia. GTX 460-670. Should I switch to the 17 beta core?


Can you list them? For Kepler cards, the new core drops PPD, but I think on Fermi cards, the new core is better.


----------



## ZDngrfld

My GTX 560 HATES Core 17. Not sure why.


----------



## scubadiver59

Quote:


> Originally Posted by *ZDngrfld*
> 
> My GTX 560 HATES Core 17. Not sure why.


Really? I may have to go home and fire one up to see... I don't think I ran my cards on the beta when I first built all my rigs... and most of my cards are 560s or 580s (I do have two 7950 Lightnings).

Sounds like a test tonight alongside my 4P...fire up one 560Ti TF-II with a beta and one w/o


----------



## ZDngrfld

Quote:


> Originally Posted by *scubadiver59*
> 
> Really? I may have to go home and fire one up to see... I don't think I ran my cards on the beta when I first built all my rigs... and most of my cards are 560s or 580s (I do have two 7950 Lightnings).
> 
> Sounds like a test tonight alongside my 4P...fire up one 560Ti TF-II with a beta and one w/o


This is just a 560, not a TI. Maybe the TIs like them, not sure.


----------



## scubadiver59

Quote:


> Originally Posted by *ZDngrfld*
> 
> This is just a 560, not a TI. Maybe the TIs like them, not sure.


Well, I've got several of those to try as well...


----------



## joker927

Quote:


> Originally Posted by *cam51037*
> 
> Can you list them? For Kepler cards, the new core drops PPD, but I think on Fermi cards, the new core is better.


GTX 460
GTX 470
GTX 560
GTX 560Ti 448
GTX 570
GTX 670

You think all but the 670 will get a ppd increase?
In general are people still seeing ~1-2% CPU usage on Fermi with core17? I ask because I fold using SMP in a Linux VM on all these systems.


----------



## arvidab

My 560 Ti was slower on _17 than normal, ~21k vs. 23-27k. I'd guess you'll see an increase on your 570 and possibly the 448.

Nvidia uses a core on _17.


----------



## Caleal

Quote:


> Originally Posted by *joker927*
> 
> You think all but the 670 will get a ppd increase?
> In general are people still seeing ~1-2% CPU usage on Fermi with core17? I ask because I fold using SMP in a Linux VM on all these systems.


The 460 and 560 may not finish them fast enough for good quick return bonuses.
Because Core_17 uses OpenCL, each card will use most of an entire CPU core, and a good chunk of system memory, so your CPU folding will suffer a significant loss.


----------



## runs2far

Quote:


> Originally Posted by *Caleal*
> 
> The 460 and 560 may not finish them fast enough for good quick return bonuses.
> Because Core_17 uses OpenCL, each card will use most of an entire CPU core, and a good chunk of system memory, so your CPU folding will suffer a significant loss.


The OpenCL load you describe must be Nvidia-only, or you are not using the suggested beta drivers. I'm running Folding@home on an AMD card and I am seeing close to 0 load on the CPU.


----------



## arvidab

Quote:


> Originally Posted by *runs2far*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Caleal*
> 
> The 460 and 560 may not finish them fast enough for good quick return bonuses.
> Because Core_17 uses OpenCL, each card will use most of an entire CPU core, and a good chunk of system memory, so your CPU folding will suffer a significant loss.
> 
> 
> 
> The OpenCL load you describe must be Nvidia-only, or you are not using the suggested beta drivers. I'm running Folding@home on an AMD card and I am seeing close to 0 load on the CPU.
Click to expand...

Yep, with Nvidia's OpenCL driver the _17 uses a core. AMD uses only a very small amount of CPU resources.


----------



## giganews35

Quote:


> Originally Posted by *arvidab*
> 
> Yep, with Nvidia's OpenCL driver the _17 uses a core. AMD uses only a very small amount of CPU resources.


It's like the tables turned.


----------



## Krusher33

Quote:


> Originally Posted by *giganews35*
> 
> Quote:
> 
> 
> 
> Originally Posted by *arvidab*
> 
> Yep, with Nvidia's OpenCL driver the _17 uses a core. AMD uses only a very small amount of CPU resources.
> 
> 
> 
> It's like the tables turned.
Click to expand...

The tables have turned with a slight twist. OpenCL is open source whereas CUDA is not, if I'm not mistaken.


----------



## Evil Penguin

Quote:


> Originally Posted by *Krusher33*
> 
> The tables have turned with a slight twist. OpenCL is open source whereas CUDA is not, if I'm not mistaken.


OpenCL is an open standard.


----------



## PR-Imagery

I'd rather it still use CUDA for CUDA-capable cards since it'll give the best performance.

I mean, they were already using cuda to great effect, why "fix" what's not broken?


----------



## jomama22

Quote:


> Originally Posted by *PR-Imagery*
> 
> I'd rather it still use CUDA for CUDA-capable cards since it'll give the best performance.
> 
> I mean, they were already using cuda to great effect, why "fix" what's not broken?


Because not everyone has CUDA, meaning they were missing out on a lot of performance from AMD users. 7970s on Core 17 are now faster than 580s/680s on the CUDA-developed cores. I don't see how they did anything wrong or crazy.


----------



## Gungnir

^Because with OpenCL, they can potentially support AMD, nVidia, Intel, and several other manufacturers well, with a single, simpler core? It's not FaH's fault that nVidia's OpenCL implementation has issues.

Hopefully, the recent increase in OpenCL support will lead to better SDKs, toolkits, and documentation for it, and the eventual abandonment of CUDA altogether.


----------



## scubadiver59

Quote:


> Originally Posted by *Gungnir*
> 
> ^Because with OpenCL, they can potentially support AMD, nVidia, Intel, and several other manufacturers well, with a single, simpler core? It's not FaH's fault that nVidia's OpenCL implementation has issues.
> 
> Hopefully, the recent increase in OpenCL support will lead to better SDKs, toolkits, and documentation for it, and the *eventual abandonment of CUDA altogether*.


When pigs fly and it snows in hell?


----------



## Gungnir

Quote:


> Originally Posted by *scubadiver59*
> 
> When pigs fly and it snows in hell?


Well, we can dream.


----------



## joker927

Well, the 1/2 loss of SMP throughput will be a real downer on all my Nvidia folding rigs. Perhaps Nvidia will come up with a better OpenCL implementation. Hopefully they know that folding PPD can sway consumers to one GPU maker or another.

With Core 17 using OpenCL, and Intel Core GPUs fully capable of good OpenCL execution, I'm curious about the ability to fold on Intel HD graphics as well as dedicated GPUs.


----------



## decali

Quote:


> Originally Posted by *PR-Imagery*
> 
> I'd rather it still use cuda for cuda cable cards since it'll give the best performance.
> 
> I mean, they were already using cuda to great effect, why "fix" what's not broken?


Like others above have said, it's simpler for the team. Here's a quote from the Folding@home blog post about Core 17:
Quote:


> A single unified core now runs both NVIDIA and AMD cards
> Before we had two development branches for NVIDIA and AMD cards. It was a difficult and cumbersome task to debug and maintain. We couldn't easily mix runs and gens produced by different GPU types. Now, using OpenCL, a single core supports not only AMD and NVIDIA, but theoretically any OpenCL-capable device.


----------



## jomama22

Quote:


> Originally Posted by *scubadiver59*
> 
> When pigs fly and it snows in hell?


Those that actually use CUDA (universities, research centers, etc.) don't exactly go out of their way for it. Nvidia has done a smart thing in "giving" CUDA-enabled products to these very large and well-known establishments. It only makes sense to use CUDA since the free stuff you just got runs it much better than OpenCL.

It's actually kind of interesting from that respect. Makes you wonder if Nvidia handicaps OpenCL on purpose.


----------



## KingT

On an HD7950 OC'd @ 1050/1750MHz I get ~45k PPD; usually I get the 7662 project.

This is awesome for AMD. These cards also have pretty decent power usage, and on my DC2 model the GPU goes up to 54C max in a very hot room.

When I remember my GTX480 power hog with its temps in the 90C area and turbine noise...









CHEERS..


----------



## ryan w

Spoiler: Core 17 only w/ 2x 6950's 950/1400, PPD: 27,818









Spoiler: Core 17 plus 8150, PPD: 34,000, 550w power draw from wall ( waiting to run beta on cpu)









Spoiler: CPUZ







Getting the system ready for Chimpin. Best I have seen for this system yet! Just noticed I may have been running in Crossfire, which may make a difference in GPU PPD; I'll have to retest.


----------



## martinhal

Crossfire does lower your PPD; I tested that today.


----------



## InsideJob

Is there a way to get in on this or do I have to wait for it to become public?


----------



## Krusher33

Just follow the directions in OP.


----------



## InsideJob

Ahh yes, thanks








Finishing my current WU in the next hour or 2 then will see how it goes







If all is well Shizzle Tang will be happy, and I will be signing up to the chimp challenge asap


----------



## Krusher33

And yikes... I really need to get a 7970 for dark predators.


----------



## martinhal

Quote:


> Originally Posted by *Krusher33*
> 
> And yikes... I really need to get a 7970 for dark predators.


I'm keen, point me in the right direction. I have a 7970.


----------



## InsideJob

~42k PPD...

I am a happy folder








Also thanks to this lovely situation I broke 1 million points last night


----------



## jesusboots

Quote:


> Originally Posted by *martinhal*
> 
> I'm keen, point me in the right direction. I have a 7970.


For what price though?


----------



## martinhal

Quote:


> Originally Posted by *jesusboots*
> 
> For what price though?


...not for sale; to fold for a team.


----------



## jesusboots

Quote:


> Originally Posted by *martinhal*
> 
> ...not for sale; to fold for a team.


Even better.


----------



## Krusher33

Quote:


> Originally Posted by *martinhal*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> And yikes... I really need to get a 7970 for dark predators.
> 
> 
> 
> I'm keen, point me in the right direction. I have a 7970.
Click to expand...

TC sign-up is here: http://www.overclock.net/t/775167/official-ocn-team-competition-sign-up-sheet/0_50

I don't doubt we'll have a lot of AMD dropouts in May. They're going to get dismayed by 7970 and 7950 owners.


----------



## jesusboots

Quote:


> Originally Posted by *Krusher33*
> 
> I don't doubt we'll have a lot of AMD drop outs in May. They're going to get dismayed by 7970 and 7950 owners.


I just don't see that happening. These are beta units; when they go live, the QRB will be set for the actual units. They might bump the PPD 2-5k on the units we actually start folding. At least that's what happened the last two times we did beta units.


----------



## Krusher33

Quote:


> Originally Posted by *jesusboots*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> I don't doubt we'll have a lot of AMD drop outs in May. They're going to get dismayed by 7970 and 7950 owners.
> 
> 
> 
> I just don't see that happening. These are beta units; when they go live, the QRB will be set for the actual units. They might bump the PPD 2-5k on the units we actually start folding. At least that's what happened the last two times we did beta units.
Click to expand...

Yeah, but the 7970 will still be king by a long shot. The only way the low-budget guys will stay is if there's less than a 10k difference between all the cards or something. But with it being 40k... yikes.


----------



## jesusboots

Valid point


----------



## Bal3Wolf

Quote:


> Originally Posted by *jesusboots*
> 
> I just dont see that happening. These are beta units. When they become units, qrb that is for actual units. They might bump ppd 2-5k on the units we actually start folding. At least its whats happened the last two times we did beta units.


But didn't Stanford say that when Core 17 becomes official, points are going to go up, not down?


----------



## InsideJob

http://www.overclock.net/t/1377824/official-chimp-challenge-2013/400_100#post_19701937

I hope for this reason people don't leave just over points... Folding isn't about who can pull the most points; it's about trying to help make a difference.


----------



## Krusher33

Quote:


> Originally Posted by *InsideJob*
> 
> http://www.overclock.net/t/1377824/official-chimp-challenge-2013/400_100#post_19701937
> 
> I hope for this reason people don't leave just over points... Folding isn't about who can pull the most points; it's about trying to help make a difference.


Let me rephrase: people will leave TC but generally will continue folding.


----------



## labnjab

Quote:


> Originally Posted by *Krusher33*
> 
> Let me rephrase: people will leave TC but generally will continue folding.


If I were to leave TC (not that I'm even thinking about it, because I'm not), I wouldn't stop folding. Instead, I would throw Windows and a couple of 580s or 7970s in my TC rig and use it to make even more points.


----------



## Krusher33

And a lot of people in the AMD category tend to switch over to CPU folding, or whatever nets them more points, after quitting TC.


----------



## joker927

Remind me what Tc is.


----------



## Krusher33

TC = Team Competition. This guide will answer a lot of questions on what it is: http://www.overclock.net/t/1270919/team-competition-manual/0_50


----------



## Baskt_Case

Alright, so I just added the beta flag to my setup.

I put the flag in "Extra Slot Options" for the GPU slot. Is that correct?

Do I need to worry about anything under the "Expert" Tab (Extra Client Options, Extra Core Options)?

Right now I am 23% complete on a Core 16 WU. I'm guessing it will download a Core 17 WU when this one completes? I didn't know if adding the flag was going to cause a dump and restart, or what.


----------



## tictoc

You will get the beta units after the current WU finishes up.









Post a screen shot of your configuration, and we can check it out to make sure it is set up correctly.
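For reference, putting the beta flag in the GPU slot's "Extra Slot Options" (as a `client-type` = `beta` option) should end up in config.xml roughly like this. This is a hedged sketch of a typical V7 config, with placeholder identity values, so double-check against your own file:

```xml
<config>
  <!-- identity values below are placeholders -->
  <user v="YourName"/>
  <team v="37726"/>

  <slot id="0" type="CPU"/>
  <slot id="1" type="GPU">
    <!-- what the beta flag in Extra Slot Options writes out -->
    <client-type v="beta"/>
  </slot>
</config>
```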


----------



## aas88keyz

Well, I have decided to drop the 17 on my GTX 560 Ti 448. I started it right after the gun and thought I was doing really well, and was even OK with dedicating a CPU core. This morning, after I finished my WUs, I decided to try removing the beta flag, giving that core back to the VM FAH v6 for SMP 8 and folding on my GPU with the Windows-native FAH v7. I went from 38 kPPD (10 kPPD for CPU / 28 kPPD for GPU) to around 60 kPPD (28 kPPD for SMP / 32 kPPD for GPU). Might try beta again if Nvidia ever plans an OpenCL update. Don't think that will happen very soon.


----------



## Majorhi

Holy cow! This is crazy PPD for me!


----------



## Baskt_Case

Quote:


> Originally Posted by *tictoc*
> 
> You will get the beta units after the current WU finishes up.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Post a screen shot of your configuration, and we can check it out to make sure it is set up correctly.


OK, here is the Main Screen with GPU WU selected, GPU Slot Configuration, and the Expert Settings Tab.


----------



## Caleal

Quote:


> Originally Posted by *Majorhi*
> 
> Holy cow! This is crazy PPD for me!


Not bad, but my 15.3 MILLION ppd GTX580 has it beat!


----------



## joker927

Well, my new *used* GTX 560 Ti 448 just died after only having it folding for a week. Any suggestions for a replacement? I already have a GTX 460 and a C2Q folding @ 4.0GHz in Linux in the same box, which means I have to stick with Nvidia unless there is a way around having 2 different cards. The new Core 17 could mean 1 AMD card outperforming 2 GT4/5-series cards, thus my asking.


----------



## cam51037

Quote:


> Originally Posted by *joker927*
> 
> Well my new *used* gtx 560 ti 448 just died
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> after only having it folding for a week. Any suggestions for a replacement? I already have a GTX 460 and a C2Q folding @ 4.0ghz in Linux as well in the same box. This means I have to stick with nvidia unless there is a way around having 2 diff cards. The new core17 could mean 1 AMD card outperforming 2 GT4/5 series and thus my asking.


How did it die? That really sucks.

I'd try selling the GTX 460 and save up for a nice AMD card, like a 7970, or maybe a 7950 if the budget allows.


----------



## arvidab

Quote:


> Originally Posted by *joker927*
> 
> Well my new *used* gtx 560 ti 448 just died
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> after only having it folding for a week. Any suggestions for a replacement? I already have a GTX 460 and a C2Q folding @ 4.0ghz in Linux as well in the same box. This means I have to stick with nvidia unless there is a way around having 2 diff cards. The new core17 could mean 1 AMD card outperforming 2 GT4/5 series and thus my asking.


It's been some time since I dabbled with both Nvidia and ATI in the same rig; it was a hassle to get both cards recognized and their drivers properly installed, along with OCs and folding. But I just saw this post, which confirms that it's fiddly, but still doable:
Quote:


> Originally Posted by *IvantheDugtrio*
> 
> For anyone considering in throwing in whatever GPUs they have in their rig, even if that means mixing AMD and nvidia cards in the same system, I can say it can work though configuring it can be a hassle. Right now I'm doing just that with a 7870 and GTX 660 running in tandem using 13.3 beta and 314.22 drivers.
> For setting it up I recommend installing the AMD/ATI card first, and then adding the nvidia card and installing those drivers. This way you avoid black screens during bootup and whatnot.
> Then when you set up the cards for folding, you will have to change the OpenCL and CUDA slots if the folding slots conflict with what is reported in GPU-Z.
> Once you do that and add the beta flags you should be set!


----------



## Caleal

A note to Fermi folders.

With the ForceWare 266.58 drivers, yes the same ones we were using 2 years ago, the core_17.exe CPU usage will be 1%, instead of the 12% we have been getting with the 3xxx drivers, AND your PPD will be slightly higher on p7662 WUs.

Obviously the 266.58 drivers are less than ideal if you also use the system for gaming, but for those of us that don't game, it's a nice boost to both SMP and GPU ppd.

The 266.58 drivers also don't cap GTX580's at 1 Ghz like the 3xx drivers do.









Also, be sure to finish the current WU before switching drivers, or you may end up with crazy stuff like my 15.3 million PPD picture in my previous post, and a failed WU.


----------



## arvidab

Nice find, doesn't really surprise me though...
Always recommended the 266.58 (or 266.66) as Fermi folding drivers.


----------



## Caleal

Quote:


> Originally Posted by *arvidab*
> 
> Nice find, doesn't really surprise me though...
> Always recommended the 266.58 (or 266.66) as Fermi folding drivers.


I haven't tried the 266.66 drivers, but they are the oldest drivers that some Fermi cards will work with, so hopefully they work the same.


----------



## arvidab

Yeah, my 560 Ti needs them, but it isn't fast enough to take advantage of the QRB on 7662 to beat the old _15 WUs.
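For anyone curious why slower cards gain so little here: the quick return bonus multiplies a WU's base credit by max(1, sqrt(k × deadline / elapsed)), so PPD grows much faster than linearly as TPF drops. The sketch below uses that published formula, but the base credit, k-factor, and deadline are made-up illustrative numbers, not real project 7662 values:

```python
import math

def estimate_ppd(base_credit, k_factor, deadline_days, tpf_seconds, frames=100):
    """Estimate PPD for a QRB-eligible WU.

    Uses the published quick-return-bonus formula:
        final_credit = base_credit * max(1, sqrt(k * deadline / elapsed))
    base_credit, k_factor, and deadline_days are per-project values;
    the numbers used below are hypothetical, for illustration only.
    """
    elapsed_days = tpf_seconds * frames / 86400.0   # time to finish the WU
    bonus = max(1.0, math.sqrt(k_factor * deadline_days / elapsed_days))
    final_credit = base_credit * bonus
    wus_per_day = 1.0 / elapsed_days
    return final_credit * wus_per_day

slow = estimate_ppd(base_credit=5000, k_factor=2.0, deadline_days=3.0, tpf_seconds=300)
fast = estimate_ppd(base_credit=5000, k_factor=2.0, deadline_days=3.0, tpf_seconds=150)
# Halving the TPF scales PPD by 2**1.5 (about 2.83x), not just 2x,
# because throughput doubles AND the bonus grows by sqrt(2).
assert fast > 2.8 * slow
```

This super-linear scaling is exactly why a fast 7970 gains so much from QRB projects while a 560 Ti can come out behind the old non-bonus WUs.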


----------



## scubadiver59

Quote:


> Originally Posted by *Caleal*
> 
> A note to Fermi folders.
> 
> With the ForceWare 266.58 drivers, yes the same ones we were using 2 years ago, the core_17.exe CPU usage will be 1%, instead of the 12% we have been getting with the 3xxx drivers, AND your PPD will be slightly higher on p7662 WUs.
> 
> Obviously the 266.58 drivers are less than ideal if you also use the system for gaming, but for those of us that don't game, it's a nice boost to both SMP and GPU ppd.
> 
> The 266.58 drivers also don't cap GTX580's at 1 Ghz like the 3xx drivers do.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Also, be sure to finish the current WU before switching drivers, or you may end up with crazy stuff like my 15.3 million PPD picture in my previous post, and a failed WU.


I went to Guru3D to get the 64bit version
http://downloads.guru3d.com/downloadget.php?id=2664&file=1&evp=42a4e166f6f13c6b25ea5b458174289d

but here is Nvidia's location as well
http://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/Windows/266.58/266.58_desktop_win7_winvista_64bit_english_whql.exe&lang=us&type=GeForce

EDIT:
Now I have a lot of machines to update...thanks a lot! LoL


----------



## Caleal

Quote:


> Originally Posted by *scubadiver59*
> 
> I went to Guru3D to get the 64bit version
> http://downloads.guru3d.com/downloadget.php?id=2664&file=1&evp=42a4e166f6f13c6b25ea5b458174289d
> 
> but here is Nvidia's location as well
> http://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/Windows/266.58/266.58_desktop_win7_winvista_64bit_english_whql.exe&lang=us&type=GeForce
> 
> EDIT:
> Now I have a lot of machines to update...thanks a lot! LoL


You need the 266.66 drivers for your 560 Ti cards, they are not supported by the 266.58 drivers.


----------



## joker927

Quote:


> Originally Posted by *cam51037*
> 
> How did it die? That really sucks.
> 
> I'd try selling the GTX 460 and save up for a nice AMD card, like a 7970, or maybe a 7950 if the budget allows.


It simply... died. I attached an aftermarket cooler, overclocked it, temps dropped, and when I came back the next day the system was off and wouldn't recognize the card at all, even in Device Manager. It's a brick as far as I can tell. So sad, too. I was hoping for a new PPD record for the Chimp Challenge.

My current gaming rig has a 7950, so no way would I buy one just to put into a folding box. Maybe a used 470 or 570, but that's so much heat, and AMD is getting massive PPD. Maybe a 7850, but I haven't seen the PPD on these with Core 17 [edit: people are saying 24k with Core 17. Lame. My 460 currently gets that with Core 15].


----------



## runs2far

Quote:


> Originally Posted by *joker927*
> 
> It simply... died. I attached an aftermarket cooler, I overclocked it, temps dropped and when I came back the next day the system was off and wouldn't recognize the card at all, even in device manager. It's a brick as far as I can tell. So sad to. I was hoping for a new PPD record for the chimp challenge.
> 
> My current gaming rig has a 7950 so no way I would buy one just to put into a folding box. Maybe a used 470 or 570 but so much heat and AMD is getting massive ppd. Maybe a 7850 but I havent seen the ppd on these with core17 [edit: people are saying 24k with core17. lame. my 460 currently gets that with core 15]


Sounds like a burned VRM caused by reduced airflow from the aftermarket cooler.

I have an HD 7850 and I'm getting around 15K at stock. Not an awesome number, but I think the HD 7850 draws far less than the 150 W a GTX 460 is rated for.


----------



## scubadiver59

Quote:


> Originally Posted by *Caleal*
> 
> You need the 266.66 drivers for your 560 Ti cards, they are not supported by the 266.58 drivers.


Gotcha!









EDIT:
But this sucks since I have a mix of both cards in the same machine! Looks like I will have to play the shell game and move that one 580 into one of my 3570 boxes, and move the 560 Ti in with the other 560 Ti. Sigh...


----------



## Baskt_Case

I'm running 13.1 WHQL drivers on a HD6450, and finally got a Core 17 WU. GPU usage is 100%.

But my CPU usage has dropped to ~50% on all 3 cores!

Please help.


----------



## Krusher33

Your CPU usage is supposed to drop. Unless you're concerned because you're folding SMP too?


----------



## Baskt_Case

Quote:


> Originally Posted by *Krusher33*
> 
> Your CPU usage is supposed to drop. Unless you're concerned because you're folding SMP too?


Yes, I am SMP folding as well.

Prior to this with a Core 16 WU, 2 out of 3 CPU cores were at 100% with the 3rd core of course lower because of GPU folding.


----------



## Krusher33

Are you folding on both GPU and CPU in 1 client? Or are you like me folding CPU in a VM and GPU in v7 client?


----------



## Baskt_Case

Yes, folding on both CPU and GPU with V7 in Windows.

No VM's or Linux.

To clarify, 1 client, 2 folding cores, all with V7 under Windows.


----------



## Krusher33

Ok, and SMP really is 2? Because it sounds like it might be folding a unicore now.


----------



## aas88keyz

Quote:


> Originally Posted by *Caleal*
> 
> Quote:
> 
> 
> 
> Originally Posted by *scubadiver59*
> 
> I went to Guru3D to get the 64bit version
> http://downloads.guru3d.com/downloadget.php?id=2664&file=1&evp=42a4e166f6f13c6b25ea5b458174289d
> 
> but here is Nvidia's location as well
> http://www.nvidia.com/content/DriverDownload-March2009/confirmation.php?url=/Windows/266.58/266.58_desktop_win7_winvista_64bit_english_whql.exe&lang=us&type=GeForce
> 
> EDIT:
> Now I have a lot of machines to update...thanks a lot! LoL
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> You need the 266.66 drivers for your 560 Ti cards, they are not supported by the 266.58 drivers.
Click to expand...

I cannot find an older driver that works on my GTX 560 Ti 448 card. What would be the next driver after 266.66 that would work the same, or am I outta luck?


----------



## Caleal

Quote:


> Originally Posted by *scubadiver59*
> 
> But this sucks where I have a mix of both cards in the same machine! Looks like I will have to run the shell game and move that one 580 into one of my 3570 boxes and move the 560Ti in with the other 560Ti. Sigh...


The 266.66 drivers will work with the 580 and the 560Ti, unless the 560 Ti is the 448 version.

Quote:


> Originally Posted by *aas88keyz*
> 
> I cannot find an older driver that works on my GTX 560 Ti 448 card. What would be the next driver after 266.66 that would work the same or am I outta luck?


I think the 448 may need 290 series drivers, so you may be SOL.


----------



## gamer11200

Using Catalyst 13.3 beta on a 7870 @ 1100core/1200mem on sig rig with a free core. Currently done close to 3% for the work unit and it's reporting estimated PPD of 24,262.


----------



## scubadiver59

Quote:


> Originally Posted by *Caleal*
> 
> The 266.66 drivers will work with the 580 and the 560Ti, unless the 560 Ti is the 448 version.
> I think the 448 may need 290 series drivers, so you may be SOL.


We can still download the 290.36 and 290.53 drivers...but I don't know if there are any others in that family


----------



## joker927

Quote:


> Originally Posted by *runs2far*
> 
> Sounds like burned VRM caused by reduction in airflow from the aftermarket cooler.


I hope I'm not derailing the thread but I bought a cooler that came with heatsinks you glue to the vrms. Perhaps I did it incorrectly. For shame.


----------



## tictoc

Quote:


> Originally Posted by *gamer11200*
> 
> Using Catalyst 13.3 beta on a 7870 @ 1100core/1200mem on sig rig with a free core. Currently done close to 3% for the work unit and it's reporting estimated PPD of 24,262.


Your PPD of 24K is right at what you should be seeing with the 7870. Breaking in the new card with F@H.









You will probably be able to bump your OC higher once that card is back on BOINC. These WUs are very picky about stability.


----------



## aas88keyz

Quote:


> Originally Posted by *scubadiver59*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Caleal*
> 
> The 266.66 drivers will work with the 580 and the 560Ti, unless the 560 Ti is the 448 version.
> I think the 448 may need 290 series drivers, so you may be SOL.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> We can still download the 290.36 and 290.53 drivers...but I don't know if there are any others in that family
Click to expand...

Sorry guys. I did the work of attempting to install 266.66 and up, and I was not successful until 290.36, which did install OK but still requires a CPU core to fold. Too bad for us 448s.


----------



## Baskt_Case

Well, my lowly 6450 is currently running *1 hour per frame* on a Core 17 WU. FAHControl reports 368 PPD at this rate.

This did not work out as well as I had hoped. I think I'm going to dump the WU, stop GPU folding altogether, and just go full steam on CPU folding. I really don't care to deal with all the modded/beta drivers for my card. Maybe when I upgrade my PSU I can get a decent folding card.

EDIT: Removing the GPU slot and thus dumping the GPU core and WU resulted in immediate 100% usage on all 3 cores of my CPU.


----------



## runs2far

Quote:


> Originally Posted by *Baskt_Case*
> 
> Well, my lowly 6450 is currently running *1 hour per Frame* on a Core 17 WU. FAHControl reports 368PPD at this rate.
> 
> This did not work out as well as I had hoped. I think I'm going to dump the WU, and stop GPU folding altogether and just go full steam on CPU folding. I really dont care to to do all the modded/beta drivers for my card. Maybe when I upgrade PSU I can get a decent folding card.
> 
> EDIT: Removing the GPU slot and thus dumping the GPU core and WU resulted in immediate 100% usage on all 3 cores of my CPU.


The 6450 is a low-end VLIW5 part with 160 stream processors and pitiful performance compared to just about any HD 7xxx part; getting an almost hopeless TPF on that card should not be a surprise.
Quote:


> Originally Posted by *joker927*
> 
> I hope I'm not derailing the thread but I bought a cooler that came with heatsinks you glue to the vrms. Perhaps I did it incorrectly. For shame.


I'm just guessing, could be an unlucky coincidence.


----------



## scubadiver59

Quote:


> Originally Posted by *aas88keyz*
> 
> Sorry guys. Did the work on attempting to install 266.66 and up and I was not successful until 290.36. Which did install ok but still requires a cpu core to fold. too bad for us 448's


Luckily...I think(?)...my 560Ti's aren't the 448 version, so the 266.66 drivers worked (after a single BSoD)...but I can't say as I noticed any great performance difference (both are still stock):

GPU0 / P8070 / 24.4k PPD / 3874 credit
GPU1 / P7662 / 18.7k PPD / 5738 credit

At least my 4P is covering my butt: P8102 / 579.2k PPD / 388.1k credit


----------



## arvidab

Just run regular units on your 560 Ti; it's too slow on the betas.


----------



## Doc_Gonzo

I'm running a 7850 @ 1050 core / 1250 Memory and getting an estimated 17510 PPD
Also running a 7950 @ 1000 core / 1250 Memory and getting 35600 PPD
I'm using 13.2 drivers that were modded for BOINC.
Does my PPD look to be about right?

Edit to add, 7850 = 96% usage. 7950 = 95% usage


----------



## Caleal

Quote:


> Originally Posted by *scubadiver59*
> 
> Luckily...I think(?)...my 560Ti's aren't the 448 version, so the 266.66 drivers worked (after a single BSoD)...but I can't say as I noticed any great performance difference (both are still stock):


I only got a small performance gain on my 580 and 470, and both are massively overvolted and overclocked.
The primary benefit is that Core_17.exe CPU usage dropped to around 1% average, instead of using an entire CPU core.
Both machines fold SMP too, so the lower CPU usage from the GPU folding is a nice boost to SMP folding.


----------



## ZealotKi11er

Do I have to do anything special to get such high PPD? I am only getting 6.4K and 8.4K with my 7970s @ 1125MHz.


----------



## Krusher33

Did you follow the steps in OP?


----------



## ZealotKi11er

Quote:


> Originally Posted by *Krusher33*
> 
> Did you follow the steps in OP?


No steps to follow. It says you don't have to tag it if you are running 7.3.6.


----------



## Krusher33

Hmmm... I thought it was there.

You want to do the client-type / beta portion of this screenshot and then wait till your current core16 to finish before it picks up a core 17.



And then you should be hitting 45-55k if left alone.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Krusher33*
> 
> Hmmm... I thought it was there.
> 
> You want to do the client-type / beta portion of this screenshot and then wait till your current core16 to finish before it picks up a core 17.
> 
> 
> 
> And then you should be hitting 45-55k if left alone.


Do I have to add that for each GPU? Also, for Vendor, do I keep it as nvidia or change it to ATI?


----------



## Krusher33

Back up. With client 7.3.6 you don't have to do the vendor part, just the beta one. And yeah, you'll need to do it for each GPU.
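If you'd rather edit the config file directly, the per-slot beta flag ends up in V7's config.xml as something like the fragment below. The slot ids here are examples only; yours will differ per machine, and as noted above, 7.3.6 doesn't need a vendor entry:

```xml
<config>
  <!-- one entry per GPU slot; ids are machine-specific examples -->
  <slot id="1" type="GPU">
    <client-type v="beta"/>
  </slot>
  <slot id="2" type="GPU">
    <client-type v="beta"/>
  </slot>
</config>
```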


----------



## ZealotKi11er

Quote:


> Originally Posted by *Krusher33*
> 
> Back up. With client 7.3.6 you don't have to do the vendor part, just the beta one. And yeah, you'll need to do it for each GPU.


OK, thanks. So I should see the beta kick in once I finish the unit I am doing?


----------



## Krusher33

That's right. And if you're on 13.3 drivers, I think you'll see >98% GPU usage as well with <5% CPU usage.


----------



## ZealotKi11er

I am using 13.3, but I don't think I am using the latest. 91/92% GPU usage and 33% CPU usage. I think I will get the latest betas.


----------



## Krusher33

Quote:


> Originally Posted by *ZealotKi11er*
> 
> 91/92% GPU usage and 33% CPU usage.


That's how it is with core16's. Core 17's brought OpenCL and made improvements. So wait till you get a Core17 before you decide you need to change drivers.


----------



## ZealotKi11er

One problem: I noticed the second GPU is not getting constant GPU usage when the PC is idle. Is there anything I can do about that?


----------



## Krusher33

Is it doing the 92% for a bit, dropping to some other percentage for a bit, then back up to 92%? Because that's also normal with Core16's. A screenshot will help us understand better.


----------



## ZealotKi11er

Quote:


> Originally Posted by *Krusher33*
> 
> Is it doing the 92% for a bit, dropping to some other percentage for a bit, then back up to 92%? Because that's also normal with Core16's. A screenshot will help us understand better.


----------



## Krusher33

Yeah that's not how either one of them should be on Core16. I'm clueless as to what it could be other than drivers. But like I said we should wait till Core17 kicks in and see what happens.


----------



## ZealotKi11er

How will I know I am getting Core 17? Also, I have my HD 5850 @ 950MHz folding and I am getting 98% GPU usage and ~9.5K PPD. Is this Core 17?


----------



## Krusher33

Two ways: in Task Manager it'll show up in the processes list (something like FahCore_17), or the client itself will say so in the project details. I can't remember the field's name, but it's easily noticeable in the details.


----------



## ZealotKi11er

For some reason, on my second system with the HD 5850, my Q6600 is not doing work even though it's at 100% CPU usage. It's stuck @ 0.07%.


----------



## anubis44

OK, I want to contribute to the overclock.net Folding@home team, and I've got the 7.3.6 client installed. I'm running an FX-8350 (stock volts) and a Radeon 7950 (flashed to 1050MHz core/1400MHz memory). The entire system is completely stable; I've been using it daily since I built it in November of last year. I'm currently using Catalyst 13.3 beta 3 drivers.

I can't seem to get the GPU folding. I read a bit earlier in this thread that for the 7.3.6 client I need only add the 'client-type beta' option to the GPU slot, so I did. Nevertheless, only the CPU slot is running. The GPU slot, which does identify my Tahiti Pro card properly, just sits there as 'Paused: waiting for idle'. I've right-clicked and told it to fold several times, but nothing seems to make it start up.



Should the GPU slot be starting up and running right away, or do I need to wait? Any ideas? I really want to see this GPU fold; I've seen many references to the kick-ass performance of the Tahiti Pro core with this, and I'd like to put another Radeon GPU to work for the cause.


----------



## mm67

Quote:


> Originally Posted by *anubis44*
> 
> OK, I'm wanting to contribute to the overclock.net Folding@home team, and I've got the 7.3.6 client installed. I'm running on an FX-8350 (stock volts) and a Radeon 7950 (flashed to 1050MHz core/1400MHz memory). The entire system is solid stable -- been using it daily since I built it in November of last year. I'm currently using Catalyst 13.3 beta 3 drivers.
> 
> I can't seem to get the GPU folding. I read just a bit earlier in this thread that for the 7.3.6 client, for the GPU slot, I need only put in the 'client-type beta' stuff, so I did. Nevertheless, only the CPU slot is running. The GPU slot, which does identify my Tahiti Pro card properly, just sits there as 'Paused: waiting for idle'. I've right-clicked and told it to fold several times, but nothing seems to make it start up.
> 
> 
> 
> Should it be starting up right away or do I need to wait? Any ideas?


Moving the slider from medium to full started my 7950's


----------



## anubis44

Quote:


> Originally Posted by *mm67*
> 
> Moving the slider from medium to full started my 7950's


Well I'll be damned! Thanks! That did the trick instantly!

I just wish there would be some mention of this somewhere. OK, now we'll see how many PPD my rig can crank out. Thank you so much.


----------



## Baskt_Case

Quote:


> Originally Posted by *mm67*
> 
> Moving the slider from medium to full started my 7950's


+1

Move that slider to "FULL"

Should start folding right away.


----------



## anubis44

Just one more question. I've got an 8-core Vishera FX-8350 CPU, and I can spare up to 5 of the 8 cores for folding. Is there a problem with making an odd number of cores available, or is that just fine?

Thanks in advance for any help.


----------



## anubis44

Oh, and just FYI, my PPD has suddenly skyrocketed from a measly ~9,000 PPD with just the CPU cores to 55,513 PPD with the Radeon 7950 @ 1050MHz core/1400MHz memory (flashed using the HIS 7970 BIOS from techpowerup.com):



My 7950 is a Gigabyte Windforce 3, and she's pumping out the PPD while maintaining a respectable 64 degrees under 98% load. Not too bad - I might try overclocking the GPU a little more, perhaps with another bios. So far, though, the HIS 7970 bios seems to be a very good match for a casual GPU overclocker like me. Don't even need any overclocking software, since it's just a bios flash.


----------



## Baskt_Case

Should be fine, as long as you don't change from 8 to 5 in the middle of a work unit. Check out my folding log from when I went from 2 to 3 cores in the middle of a WU...

_03:19:42:WU00:FS01:FahCore returned: INTERRUPTED (102 = 0x66)
03:19:43:WU00:FS01:Starting
*03:19:43:WARNING:WU00:FS01:Changed SMP threads from 2 to 3 this can cause some work units to fail*
03:19:43:WU00:FS01:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" "C:/Program Files (x86)/FAHData/cores/www.stanford.edu/~pande/Win32/AMD64/Core_a4.fah/FahCore_a4.exe" -dir 00 -suffix 01 -version 703 -lifeline 2320 -checkpoint 15 -np 3
03:19:43:WU00:FS01:Started FahCore on PID 1336_


----------



## Baskt_Case

Quote:


> Originally Posted by *anubis44*
> 
> Oh, and just FYI, my PPD has suddenly skyrocketed from the measly ~9000 PPD with just the CPU cores, to 55513 PPD with the Radeon 7950 @ 1050MHz core/1400MHz memory (flashed it using the HIS 7970 bios from techpowerup.com).


Totally awesome! Have you got your folding passkey entered? I wouldn't monkey (no-pun) with it too much, the Chimp Challenge starts tomorrow!


----------



## mm67

Quote:


> Originally Posted by *anubis44*
> 
> Oh, and just FYI, my PPD has suddenly skyrocketed from the measly ~9000 PPD with just the CPU cores, to 55513 PPD with the Radeon 7950 @ 1050MHz core/1400MHz memory (flashed it using the HIS 7970 bios from techpowerup.com):
> 
> 
> 
> My 7950 is a Gigabyte Windforce 3, and she's pumping out the PPD while maintaining a respectable 64 degrees under 98% load. Not too bad - I might try overclocking the GPU a little more, perhaps with another bios. So far, though, the HIS 7970 bios seems to be a very good match for a casual GPU overclocker like me. Don't even need any overclocking software, since it's just a bios flash.


Your GPU seems to be making 39k PPD, which is exactly the same as my 7950s running at 1050/1250. I think the memory overclock doesn't help at all, same as on BOINC. Downclocking your memory might help a bit if you run into heat problems.


----------



## anubis44

Quote:


> Originally Posted by *Baskt_Case*
> 
> Totally awesome! Have you got your folding passkey entered? I wouldn't monkey (no-pun) with it too much, the Chimp Challenge starts tomorrow!


Yes, passkey entered, overclock.net team number 37726 entered, and the machine is folding away. All good to go, thanks to the help on this forum!


----------



## ZealotKi11er

For some reason my HD 6550D which was getting 2.7K PPD with Core16 is now getting only 0.7K PPD with Core17. Is this normal?


----------



## Starbomba

I'm wondering if GPU VRAM speed affects folding speed, since it doesn't matter at all for BOINC. Anyone know?

Also, I've been having issues with Core 17 WUs on my GTX 470. Whenever I get one, it never gets past 0% and my GPU shows no activity no matter how long I let it run. I have to delete it and disable the beta flag; then I get Core 15 WUs and they run normally.

I'm running the 290.56 beta drivers, and I've even tried running at stock speeds and overvolting at stock speeds.


----------



## Caleal

Quote:


> Originally Posted by *Starbomba*
> 
> I'm wondering if the GPU VRAM speed affects Folding speeds, as it doesn't matter at all for BOINC. Anyone knows?
> 
> Also i've been having issues with Cote 17 WU's on my GTX 470. Whenever i get one, it never gets over 0% and my GPU shows no activity no matter i let it run. I have to go and delete it, disable the Beta flag, then i get Core 15 WU's and they run normally.
> 
> I'm running the 290.56 Beta drivers, and i've even tried running at stock speeds and even oervolting it on stock speeds.


Overclocking GPU VRAM makes negligible, if any, difference in PPD.

As for the drivers, either upgrade to the 3xx series drivers, or if it is a dedicated folding rig, install the 266.58 drivers.
266.58 gives slightly better GPU folding performance on core_17 WUs, and uses MUCH fewer CPU resources.
Not so hot for gaming though.


----------



## Starbomba

Quote:


> Originally Posted by *Caleal*
> 
> Overclocking GPU VRAM makes negligible, if any, difference in PPD.
> 
> As for the drivers, either upgrade to the 3xx series drivers, or if it is a dedicated folding rig, install the 266.58 drivers.
> 266.58 gives slightly better GPU folding performance on core_17 WUs, and uses MUCH fewer CPU resources.
> Not so hot for gaming though.


So I can underclock the VRAM to the lowest Afterburner allows me to and there will be no problem?

I'll try the 266.58 drivers. It's a HTPC/BOINC rig, so i don't game on it.


----------



## tictoc

Quote:


> Originally Posted by *Starbomba*
> 
> So i can underclock the VRAM to the lowest Afterburner allows me to and there will be no problem?
> 
> I'll try the 266.58 drivers. It's a HTPC/BOINC rig, so i don't game on it.


When I downclocked my VRAM by 300 MHz, I saw roughly 200 fewer points per WU.
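Scaling that single report linearly (an assumption; the real penalty curve may not be linear) gives a quick way to ballpark other downclocks:

```python
# Linear extrapolation of the report above: a 300 MHz VRAM downclock
# cost roughly 200 points per WU. Purely a back-of-envelope estimate.
penalty_per_mhz = 200 / 300

def vram_penalty(downclock_mhz: float) -> float:
    """Estimated points lost per WU for a given VRAM downclock."""
    return downclock_mhz * penalty_per_mhz

print(round(vram_penalty(550)))  # roughly 367 points for a 550 MHz downclock
```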


----------



## Starbomba

Quote:


> Originally Posted by *tictoc*
> 
> When I down-clocked my VRAM, by 300 MHz, I saw roughly 200 fewer points per WU.


So, downclocking the VRAM by 550 MHz would only cost me roughly 370 points per WU, if the penalty scales linearly? Not bad considering the sheer value of Core 17 WUs, and the lower heat...


----------



## InsideJob

I can safely say these Core 17 WUs saved my desire to keep folding. I've finally gotten satisfying folding power out of my 7970. We really need to get all OCN folders with 79xx GPUs folding with this, especially for the CC.


----------



## msgclb

I got a new XFX 7970, but unfortunately with my first WU I got a NaN.



Spoiler: My First NAN!



Quote:



> 15:43:43:WU00:FS00:FahCore 0x17 started
> 15:43:43:WU00:FS00:Downloading project 7662 description
> 15:43:43:WU00:FS00:Connecting to fah-web.stanford.edu:80
> 15:43:43:WU00:FS00:Project 7662 description downloaded successfully
> 15:43:44:WU00:FS00:0x17:*********************** Log Started 2013-04-12T15:43:43Z ***********************
> 15:43:44:WU00:FS00:0x17:Project: 7662 (Run 22, Clone 24, Gen 84)
> 15:43:44:WU00:FS00:0x17:Unit: 0x00000085ff3d4835513920964b5da97c
> 15:43:44:WU00:FS00:0x17:CPU: 0x00000000000000000000000000000000
> 15:43:44:WU00:FS00:0x17:Machine: 0
> 15:43:44:WU00:FS00:0x17:Reading tar file state.xml
> 15:43:44:WU00:FS00:0x17:Reading tar file system.xml
> 15:43:44:WU00:FS00:0x17:Reading tar file integrator.xml
> 15:43:44:WU00:FS00:0x17:Reading tar file core.xml
> 15:43:44:WU00:FS00:0x17:Digital signatures verified
> 15:43:56:WU00:FS00:0x17:Completed 0 out of 2500000 steps (0%)
> 15:45:59:Server connection id=2 on 0.0.0.0:36330 from 127.0.0.1
> 15:49:27:WU00:FS00:0x17:Completed 50000 out of 2500000 steps (2%)
> 15:50:53:Server connection id=3 on 0.0.0.0:36330 from 192.168.1.143
> 15:54:23:Server connection id=4 on 0.0.0.0:36330 from 192.168.1.143
> 15:54:53:WU00:FS00:0x17:Completed 100000 out of 2500000 steps (4%)
> 16:00:24:WU00:FS00:0x17:Completed 150000 out of 2500000 steps (6%)
> 16:05:50:WU00:FS00:0x17:Completed 200000 out of 2500000 steps (8%)
> 16:11:15:WU00:FS00:0x17:Completed 250000 out of 2500000 steps (10%)
> 16:16:48:WU00:FS00:0x17:Completed 300000 out of 2500000 steps (12%)
> 16:17:30:Server connection id=5 on 0.0.0.0:36330 from 127.0.0.1
> 16:19:53:Server connection id=6 on 0.0.0.0:36330 from 127.0.0.1
> 16:20:37:Server connection id=7 on 0.0.0.0:36330 from 192.168.1.143
> 16:22:14:WU00:FS00:0x17:Completed 350000 out of 2500000 steps (14%)
> 16:27:41:WU00:FS00:0x17:Completed 400000 out of 2500000 steps (16%)
> 16:33:13:WU00:FS00:0x17:Completed 450000 out of 2500000 steps (18%)
> 16:38:39:WU00:FS00:0x17:Completed 500000 out of 2500000 steps (20%)
> 16:44:11:WU00:FS00:0x17:Completed 550000 out of 2500000 steps (22%)
> 16:49:37:WU00:FS00:0x17:Completed 600000 out of 2500000 steps (24%)
> 16:55:04:WU00:FS00:0x17:Completed 650000 out of 2500000 steps (26%)
> 17:05:07:WU00:FS00:0x17:NaNs found .. trying to pinpoint the NaN step via binary search... (this might take a while)
> 17:05:07:WU00:FS00:0x17:Trying to isolate NaN....searching [684844,700000]
> 17:05:56:WU00:FS00:0x17:Trying to isolate NaN....searching [692423,700000]
> 17:06:20:WU00:FS00:0x17:Trying to isolate NaN....searching [696212,700000]
> 17:06:32:WU00:FS00:0x17:Trying to isolate NaN....searching [698107,700000]
> 17:06:38:WU00:FS00:0x17:Trying to isolate NaN....searching [699054,700000]
> 17:06:41:WU00:FS00:0x17:Trying to isolate NaN....searching [699528,700000]
> 17:06:43:WU00:FS00:0x17:Trying to isolate NaN....searching [699765,700000]
> 17:06:44:WU00:FS00:0x17:Trying to isolate NaN....searching [699883,700000]
> 17:06:44:WU00:FS00:0x17:Trying to isolate NaN....searching [699942,700000]
> 17:06:44:WU00:FS00:0x17:Trying to isolate NaN....searching [699972,700000]
> 17:06:44:WU00:FS00:0x17:Trying to isolate NaN....searching [699987,700000]
> 17:06:45:WU00:FS00:0x17:Trying to isolate NaN....searching [699994,700000]
> 17:06:45:WU00:FS00:0x17:Trying to isolate NaN....searching [699998,700000]
> 17:06:45:WU00:FS00:0x17:Trying to isolate NaN....searching [700000,700000]
> 17:06:45:WU00:FS00:0x17:Unable to pinpoint NaN - likely to be non-deterministic, dumping results
> 17:06:45:WU00:FS00:0x17:ERROR:exception: NaNs detected in positions.0 0
> 17:06:45:WU00:FS00:0x17:Saving result file logfile_01.txt
> 17:06:45:WU00:FS00:0x17:Saving result file log.txt
> 17:06:45:WU00:FS00:0x17:Folding@home Core Shutdown: BAD_WORK_UNIT
> 17:06:45:WARNING:WU00:FS00:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
> 17:06:45:WU00:FS00:Sending unit results: id:00 state:SEND error:FAULTY project:7662 run:22 clone:24 gen:84 core:0x17 unit:0x00000085ff3d4835513920964b5da97c
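The "binary search" the core reports is plain bisection over the step count. A minimal sketch of the idea, where `check_ok` is a hypothetical stand-in for re-running the simulation segment and checking the positions for NaNs:

```python
def isolate_nan(check_ok, good, bad):
    """Bisect (good, bad] to find the first step at which check_ok fails.

    check_ok(step) -> True if re-simulating up to `step` yields finite values.
    `good` is a known-clean step, `bad` a known-failing one.
    """
    while bad - good > 1:
        mid = (good + bad) // 2
        if check_ok(mid):
            good = mid  # NaN appears after mid
        else:
            bad = mid   # NaN appears at or before mid
    return bad  # first failing step


# Example: pretend the trajectory first goes bad at step 699999.
first_bad = isolate_nan(lambda s: s < 699999, 684844, 700000)
```

If the failure is non-deterministic, as the core concluded in the log above, bisection can't pin it down and the unit gets dumped.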






I next got the following WU that I've since completed.



As you can see, with this WU I only got around 40k PPD.


----------



## Caleal

Quote:


> Originally Posted by *Starbomba*
> 
> So i can underclock the VRAM to the lowest Afterburner allows me to and there will be no problem?
> 
> I'll try the 266.58 drivers. It's a HTPC/BOINC rig, so i don't game on it.


No, you will see a drop in points. It's overclocking that makes little difference, because you can't really OC the VRAM enough to matter and still be stable.

Edit: Having said that, it occurs to me that my experience with instability while OCing VRAM on cards running GPU clocks at the bleeding edge of stability may differ from what you get with a more modest OC.

It doesn't take much to trip things up when you are running a reference design GTX580 at 1010mhz.


----------



## jomama22

I need a lot of help getting this fixed. I keep getting this error

The process cannot access the file because it is being used by another process: \".\\work\\02\\logfile_01.txt\

As soon as a core17 GPU wu finishes, this error keeps repeating itself over and over until all of my GPUs finish a wu and get this error.

I have tried installing/reinstalling, rebooting, different drivers, all of it.


----------



## tictoc

Quote:


> Originally Posted by *jomama22*
> 
> I need a lot of help getting this fixed. I keep getting this error
> 
> The process cannot access the file because it is being used by another process: \".\\work\\02\\logfile_01.txt\
> 
> As soon as a core17 GPU wu finishes, this error keeps repeating itself over and over until all of my GPUs finish a wu and get this error.
> 
> I have tried installing/reinstalling, rebooting, different drivers, all of it.


When you uninstalled FAHControl, did you tell the uninstaller to remove the data as well?



The logfile_01.txt file is a "live" file that is updated each time that folding slot logs an action. When you restart the client the previous log should be saved with a date and time stamp, and the current log will be named logfile_01.txt.



For some reason your WU logs are not being renamed. If you go into your "work\01" folder and rename logfile_01.txt to something else, the new log should be able to save instead of trying to write to the logfile_01.txt file.
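If you'd rather script the workaround than rename by hand, a minimal sketch that mimics the timestamped rename the client normally does on restart (the example path is illustrative; point it at your actual work folder):

```python
import os
import time


def archive_stale_log(work_dir, name="logfile_01.txt"):
    """Rename a leftover 'live' log so the client can create a fresh one.

    Mimics the date/time-stamped rename the client normally performs on
    restart. Returns the new path, or None if no log was present.
    """
    src = os.path.join(work_dir, name)
    if not os.path.exists(src):
        return None
    base, ext = os.path.splitext(name)
    stamp = time.strftime("%Y-%m-%d-%H%M%S")
    dst = os.path.join(work_dir, f"{base}-{stamp}{ext}")
    os.rename(src, dst)
    return dst


# Example (illustrative path):
# archive_stale_log(r"C:\ProgramData\FAHClient\work\01")
```

Pause the slot first; the rename will fail if another process still holds the file open.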


----------



## labnjab

Just downloaded the latest drivers for my 570s (314.22) and got an 800-1200 PPD decrease on core 17. Has anyone else noticed this?


----------



## jomama22

Quote:


> Originally Posted by *tictoc*
> 
> When you uninstalled FAHControl, did you tell the uninstaller to remove the data as well?
> 
> 
> 
> The logfile_01.txt file is a "live" file that is updated each time that folding slot logs an action. When you restart the client the previous log should be saved with a date and time stamp, and the current log will be named logfile_01.txt.
> 
> 
> 
> For some reason your WU logs are not being renamed. If you go into your "work\01" folder and rename logfile_01.txt to something else, the new log should be able to save instead of trying to write to the logfile_01.txt file.


I deleted the data each time I tried reinstalling. This didn't start happening until a few days ago.

Thanks for this, I will try it when I get home tonight. Should I be concerned about ill effects of renaming logfile_01.txt? Also, any idea what programs could be using the file?


----------



## tictoc

There should be no ill effects. The only program that should be using the file is FAH Client.

You can use Process Explorer to see what is accessing logfile_01.txt



Spoiler: To see what is accessing logfile_01.txt with Process Explorer, follow these steps:



- Start Process Explorer

- Click "Find" to search for logfile_01.txt

- Type "logfile_01.txt" to search for the file

- These are the only two processes that should be accessing the file




If you are still having issues you should start a new thread, since this problem is more of a general Folding@home error and not a core_17 error.


----------



## jomama22

Quote:


> Originally Posted by *tictoc*
> 
> There should be no ill effects. The only program that should be using the file is FAH Client.
> 
> You can use Process Explorer to see what is accessing logfile_01.txt
> 
> 
> 
> Spoiler: To see what is accessing logfile_01.txt with Process Explorer, follow these steps:
> 
> 
> 
> - Start Process Explorer
> 
> - Click "Find" to search for logfile_01.txt
> 
> - Type "logfile_01.txt" to search for the file
> 
> - These are the only two processes that should be accessing the file
> 
> 
> 
> 
> If you are still having issues you should start a new thread, since this problem is more of a general Folding@home error and not a core_17 error.


I really appreciate all the help. +rep

Edit: I just ran Process Explorer; it turns out each GPU/CPU has 2 entries. I don't know if this is how it's supposed to be or not.


----------



## ZealotKi11er

So my second HD 7970 decided to switch to Core 16. Is this normal?


----------



## jomama22

Quote:


> Originally Posted by *ZealotKi11er*
> 
> So my second HD 7970 decided to switch to Core 16. Is this normal?


The same happened to me on all 3 of my 7970s. I think they just refilled the server with more, because I am back to core 17.


----------



## GarTheConquer

One of my 7970's is strolling along at 75% usage on a Core 16 as well


----------



## [CyGnus]

Where are the core 17 WUs? I've got 3 core 16's and the PPD is just awful.


----------



## ZealotKi11er

How can you delete the work queue?


----------



## jomama22

Quote:


> Originally Posted by *ZealotKi11er*
> 
> How can you delete the work queue?


No real "easy" way, but you have 2 options: delete the slots and then put them back, or uninstall FAH and delete the data, then reinstall.


----------



## jomama22

aaaannnnnddddd back to core 16....this is ridiculous.

Everybody haz takn teh core 17


----------



## tictoc

I had one core_16 WU. When it finished I received a core_17, so it looks like the beta project is still running.


----------



## ZealotKi11er

Getting Core 17 now.


----------



## labnjab

They must be starting to run out of them; I've been getting a mix of core 15 and core 17. That's not fair at the start of the CC, lol. I get 13k PPD less per card on 807x than I do on 7662.


----------



## tictoc

If they run out it will be a nightmare for AMD GPU folders. I see my AMD GPU folding thread getting hammered with questions if the beta project ends.


----------



## jomama22

Quote:


> Originally Posted by *labnjab*
> 
> They must be starting to run out of them; I've been getting a mix of core 15 and core 17. That's not fair at the start of the CC, lol. I get 13k PPD less per card on 807x than I do on 7662.


Yup. After getting 3 core16s then three core17s, I am back to core16.

It's really not worth it to run 3 7970s at 7000 PPD each...


----------



## jomama22

Just tried reinstalling Folding@home to see if I would get core17, but alas, I did not.

Unless these core17s come back, I can't really waste the energy for 21k PPD over 3 7970s. I'll just set the CPU to 12 in the VM and let it go.


----------



## ZealotKi11er

Quote:


> Originally Posted by *jomama22*
> 
> Yup. After getting 3 core16s then three core17s, I am back to core16.
> 
> It's really not worth it to run 3 7970s at 7000 PPD each...


True that, when my GTX560M gets 12K PPD lol.


----------



## Caleal

I had to downclock my GTX580 from 1010 to 990mhz to keep core_15 WUs from failing when I get them.


----------



## jomama22

Quote:


> Originally Posted by *Caleal*
> 
> I had to downclock my GTX580 from 1010 to 990mhz to keep core_15 WUs from failing when I get them.


Same with me and my 7970s on core16; I had to drop the clocks.


----------



## nova4005

I had to drop the clocks with my 7950 and 7970. I think something had corrupted my gpu drivers and I had to uninstall and reinstall catalyst. I am only getting 50% usage out of both gpus now, does anyone know what I can do to fix that or are the core 16 units like this with amd? I am on the 11293 wu if that matters.


----------



## jomama22

Quote:


> Originally Posted by *nova4005*
> 
> I had to drop the clocks with my 7950 and 7970. I think something had corrupted my gpu drivers and I had to uninstall and reinstall catalyst. I am only getting 50% usage out of both gpus now, does anyone know what I can do to fix that or are the core 16 units like this with amd? I am on the 11293 wu if that matters.


Core16 needs 1 cpu core per gpu to run in the 90%+ area.


----------



## nova4005

I had one core for each gpu and it was still low on gpu usage so I moved my 3770k down to smp 4 to see what would happen and I still get 40-50% usage. Is there anything else you can think of that may help?


----------



## valvehead

Quote:


> Originally Posted by *jomama22*
> 
> Core16 needs 1 cpu core per gpu to run in the 90%+ area.


That's part of the problem, but Core 16 apparently also needs a certain combination of driver and SDK versions to work optimally. See this thread for details.


----------



## tictoc

Quote:


> Originally Posted by *valvehead*
> 
> That's part of the problem, but Core 16 apparently also needs a certain combination of driver and SDK versions to work optimally. See this thread for details.


valvehead is correct. The only good thing about running the custom install or modded drivers is that the core_16 and core_17 WUs will run fully. Though, you will see increased CPU usage with both work units.


----------



## Starbomba

Quote:


> Originally Posted by *nova4005*
> 
> I had one core for each gpu and it was still low on gpu usage so I moved my 3770k down to smp 4 to see what would happen and I still get 40-50% usage. Is there anything else you can think of that may help?


Try using the 13.2 beta drivers from the AMD website. I'm using the 13.2 beta 7 modded drivers for BOINC from DarkRyder and I'm getting 97% usage on both my 7950 and 7970. I'm also running a 6-thread SMP client.


----------



## Evil Penguin

Quote:


> Originally Posted by *Starbomba*
> 
> Try using the 13.2 beta drivers from the AMD website. I'm using the 13.2 beta 7 modded drivers for BOINC from DarkRyder and i'm getting 97% usage on both my 7950 and 7970. I'm also running a 6-thread SMP client.


I'm guessing core 17 would be using a full CPU core for you.


----------



## msgclb

There is speculation on the Folding Forum that the lack of Core_17 could be caused by your video driver being auto-updated.

*Project 7662 For FahCore_17*

Quote:


> To those that have suddenly found their Core_17 PPD plunging -- Have your video drivers been auto-updated by either Microsoft update or the new Nvidia driver auto-update? I'm just speculating as to the potential cause.


When I got up this morning I fired up my 7970 rig and I'm now working on my second Core 17 WU.

Also if you've reinstalled the V7 client make sure you add the beta flag.

Since I've mentioned video drivers has there been any change to this note that's on the OP?

Quote:


> Just a few recommendation for AMD folders:
> I recommend Catalyst 13.1 (unmodified) for the HD 5000/6000 series (lower CPU usage than older versions).
> The HD 7000 series I recommend Catalyst 13.2 beta (higher GPU usage) (HD 5000/6000 series have a higher CPU usage bug it seems).


I'm using the Cat 13.2 beta with my 7970.


----------



## Starbomba

Quote:


> Originally Posted by *Evil Penguin*
> 
> I'm guessing core 17 would be using a full CPU core for you.


I set the CPU usage manually for my SMP slot, instead of leaving it at -1. Though I'm getting a full thread used, as my CPU is on full load with both core 16 and core 17 WUs.


----------



## jomama22

Try turning off CrossFire in CCC, shut down, remove the bridge. What drivers are you running?


----------



## jomama22

Quote:


> Originally Posted by *Starbomba*
> 
> I set the CPU usage manually for my SMP slot, instead of leaving it at -1. Though i'm getting a full thread used as my CPU is on full load with both a core 16 and core 17 WU's.


The problem with -1 SMP is that it just follows what the CPU WU tells it. So for those with multiple GPUs, it is better to set it manually.

As far as drivers are concerned, I have been running 13.3 with no problems at all with core17 or core16. Also, I did receive 3 core17s, and when they all finished they switched to core16, so I know it most likely isn't a driver issue.


----------



## jomama22

Quote:


> Originally Posted by *msgclb*
> 
> There is speculation on the Folding Forum that the lack of Core_17 could be caused by your video driver being auto-updated.
> 
> *Project 7662 For FahCore_17*
> 
> When I got up this morning I fired up my 7970 rig and I'm now working on my second Core 17 WU.
> 
> Also if you've reinstalled the V7 client make sure you add the beta flag.
> 
> Since I've mentioned video drivers has there been any change to this note that's on the OP?
> 
> I'm using the Cat 13.2 beta with my 7970.


I have not had problems with the PPD on core17, just that I don't get them anymore haha.


----------



## aas88keyz

Welcome back wonderful wonderful core_15's. Nvidia is back on top!







LOL


----------



## nova4005

Quote:


> Originally Posted by *valvehead*
> 
> That's part of the problem, but Core 16 apparently also needs a certain combination of driver and SDK versions to work optimally. See this thread for details.


Quote:


> Originally Posted by *tictoc*
> 
> valvehead is correct. The only good thing about running the custom install or modded drivers is that the core_16 and core_17 WUs will run fully. Though, you will see increased CPU usage with both work units.


Quote:


> Originally Posted by *Starbomba*
> 
> Try using the 13.2 beta drivers from the AMD website. I'm using the 13.2 beta 7 modded drivers for BOINC from DarkRyder and i'm getting 97% usage on both my 7950 and 7970. I'm also running a 6-thread SMP client.


Thank you and +rep. I am now getting 95%+ on both gpu's.







CPU usage without SMP is around 20-25% though, so I am going to try SMP 6 and see how that works.


----------



## ZealotKi11er

No more Core 17 for me. Looks like I will stop folding with the HD 7970s, considering they kill CPU PPD and only get 14K.


----------



## joker927

^^ Agreed


----------



## jomama22

Yup. 7k PPD apiece is not worth the trouble or electricity.

Is it wishful thinking that maybe they are going to start incorporating those new core17's that supposedly double PPD?

Fat chance...


----------



## [CyGnus]

Quote:


> Originally Posted by *jomama22*
> 
> Yup. 7k PPD apiece is not worth the trouble or electricity.


Yup, I'm in the same boat. I will only fold on the GPU with core 17 or anything newer if the PPD is decent; if not, SMP will do.


----------



## GarTheConquer

Yeah, deleting GPU slots now.


----------



## [CyGnus]

Hope the 17's are back tomorrow; now I am going to sleep.


----------



## gamer11200

Just finished my Core 17 and got a 16.


----------



## proteneer

Remember that core 17 was in beta testing mode. We make no guarantees about the availability of beta WUs.

But we have more exciting stuff coming =)


----------



## PR-Imagery

Yep, high point units are never really guaranteed.


----------



## ZealotKi11er

Quote:


> Originally Posted by *PR-Imagery*
> 
> Yep, high point units are never really guaranteed.


They are just good for AMD GPUs. Nvidia GPUs dont benefit much.


----------



## ASSSETS

And this happens right during the Chimp Challenge...
proteneer, we need this stuff NOW!


----------



## Caleal

Quote:


> Originally Posted by *ZealotKi11er*
> 
> They are just good for AMD GPUs. Nvidia GPUs dont benefit much.


20k ppd isn't much?


----------



## labnjab

Quote:


> Originally Posted by *ZealotKi11er*
> 
> They are just good for AMD GPUs. Nvidia GPUs dont benefit much.


My 570s get 13k more PPD each on core 17 than they do on core 15 807x, so I think that's a great benefit for NVIDIA.


----------



## msgclb

My 7970 completed a couple of Core 17 WUs, but then I got a Core 16 11292 WU that gets about 2k PPD.

I've now noticed that my GPU usage is only 45%. The video driver is 13.2 beta and I'm not using the CPU.


----------



## Gungnir

Quote:


> Originally Posted by *Caleal*
> 
> 20k ppd isn't much?


It's a good boost, but not as good as the ~40-50k that OC'd 7970s gain.

Does anyone know when the new beta WUs come out? It'd be wonderful if they were released during the Chimp Challenge...


----------



## [CyGnus]

We need WUs that use OpenCL, since both NVIDIA and AMD support it. If they are CUDA, AMD users will take a big hit in PPD. Maybe they will find a way to send OpenCL WUs to AMD users and CUDA to NVIDIA, I don't know; everything is possible, and meanwhile it would keep everyone happy.


----------



## bfromcolo

Now that it appears the core 17 units are over or hard to come by, what is the optimum AMD software config for a 7850 to run both core 16 and core 17? It seems my optimal core 16 config (13.2 beta 6 with the old SDK) resulted in high CPU usage when running a 17, but my optimal 17 config (13.2 WHQL with the included SDK) suffers an 80% PPD loss running a 16, not to mention using a CPU core.

Maybe I'll just give up and run SMP only for the CC; experimenting with drivers isn't how I wanted to spend Sunday. But my ~26K PPD is like 10K with my GPU crippled or unused.


----------



## labnjab

I've still been getting core 17. Since last night one card has always had core 17, while the other runs core 15 and occasionally picks up a core 17.


----------



## valvehead

I had two 7662 units yesterday, but only core 15 units since then.

The server stat page has consistently shown about 100 units available on 171.67.108.149, but that's obviously not nearly enough to keep up with the recent demand.


----------



## labnjab

I'm on 2 7662 now and have pretty much had them all day.


----------



## WLL77

Have been off and on with the 17 units. Was getting them exclusively, however after Friday night have been getting a mixed bag.


----------



## Doc_Gonzo

I'm getting them on one machine but not the other. . . weird!


----------



## Caleal

Quote:


> Originally Posted by *Doc_Gonzo*
> 
> I'm getting them on one machine but not the other. . . weird!


Same here, unfortunately it is my GTX470 that is getting them, instead of my GTX580 I use in the TC.


----------



## Starbomba

Quote:


> Originally Posted by *Caleal*
> 
> Same here, unfortunately it is my GTX470 that is getting them, instead of my GTX580 I use in the TC.


How'd you do that? My 470 absolutely refuses to run them, so I've been on core 15's and core 16's.


----------



## Caleal

Quote:


> Originally Posted by *Starbomba*
> 
> How'd you do that? My 470 absolutely refuses to run them, so i've been on core 15's and core 16's.


What drivers are you running?

My 470 chugs along at around 33-34k ppd running at 840mhz.


----------



## goodtobeking

Hey guys, I just added a 7970 to my machines. First time I have folded in a minute, and I'm late, go figure.

Anyway, what flags would be best for my backup machine? Q6600 with a 7970. I'm looking for, mainly, the Core16 WUs, I believe.

Thanks in advance, and sorry, I know this has been answered a million times, but I can't seem to find it and I'm in a hurry.


----------



## Wheezo

Add:

Name: client-type
Value: beta

And hope that the servers treat you well by giving you some Core_17
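If you'd rather edit config.xml directly than add the option through FAHControl, the same flag lives under the slot; a sketch of the v7 layout (the slot id and surrounding settings are examples and will differ per install):

```xml
<config>
  <!-- ...other client settings... -->
  <slot id="1" type="GPU">
    <client-type v="beta"/>
  </slot>
</config>
```

Restart FAHClient after editing so the option is picked up.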


----------



## goodtobeking

I did that and I got this error

On client "local" 127.0.0.1:36330: Invalid value for option 'client-type'
Caused by: 'beta\n' not in ClientType enumeration


----------



## martinhal

Quote:


> Originally Posted by *goodtobeking*
> 
> I did that and I got this error
> 
> On client "local" 127.0.0.1:36330: Invalid value for option 'client-type'
> Caused by: 'beta\n' not in ClientType enumeration


Did you add the flag in the GPU folding slot?


----------



## Krusher33

Quote:


> Originally Posted by *goodtobeking*
> 
> I did that and I got this error
> 
> On client "local" 127.0.0.1:36330: Invalid value for option 'client-type'
> Caused by: 'beta\n' not in ClientType enumeration


beta\n? Should be just beta.


----------



## runs2far

Quote:


> Originally Posted by *goodtobeking*
> 
> I did that and I got this error
> 
> On client "local" 127.0.0.1:36330: Invalid value for option 'client-type'
> Caused by: 'beta\n' not in ClientType enumeration


Looks like you copy-pasted a line break or something.
Try typing beta manually into that field.


----------



## goodtobeking

Yeah, must have been a typo on my end. I ended up getting it to work from what I can tell. Hopefully I'm not too late for those sweet Core 17 WUs.

EDIT: right after I got it all set up, my internet went out last night. But now everything seems to be working fine and I am getting almost 40k PPD with that rig.

Thanks guys!!


----------



## Starbomba

I have been getting Core 17 WU's for a pretty good while, hope they last


----------



## FIX_ToRNaDo

What I notice is that it cycles between core 17s and core 16s; is this normal? I've added the beta flag and the ATI vendor thingy under the GPU slot, with the latest Folding@home client.


----------



## Atomfix

Quote:


> Originally Posted by *Starbomba*
> 
> I have been getting Core 17 WU's for a pretty good while, hope they last


Don't jinx it now! lol. Every time I notice my GPU is folding a FahCore 16, I stop and delete it, and FahCore 17 gets downloaded straight after, lol.


----------



## bfromcolo

Quote:


> Originally Posted by *Atomfix*
> 
> Don't jinx it now! lol, Everytime I notice my GPU is folding a FahCore 16, I stop and delete it, and FahCore 17 gets downloaded straight after lol


I have considered doing this but there is some algorithm that stops you from collecting QRB points after you fail 10 out of some number of units, and then you need to earn your way back to respectability with successful units. I think it is best to just call it the luck of the draw and process whatever you get.


----------



## tictoc

In order to qualify for QRB you must fold a minimum of 10 WU's, and maintain an 80% success rate on SMP and QRB GPU units. If you have just started folding, you could easily drop below the 80% success rate.
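As a rough illustration of that rule (thresholds as stated above; the actual server-side accounting may differ):

```python
def qualifies_for_qrb(completed, failed, min_wus=10, min_rate=0.80):
    """Illustrative QRB eligibility check: at least `min_wus` units folded
    and a success rate of `min_rate` or better."""
    total = completed + failed
    if total < min_wus:
        return False  # not enough history yet
    return completed / total >= min_rate
```

So dumping even a couple of units early on can cost you the bonus until enough clean completions rebuild the rate.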


----------



## Krusher33

You shouldn't delete projects anyway. They need the results from those units.


----------



## Comp4k

Bleh I failed 3 core 17 units due to NaNs. I decreased my overclock just in case it was unstable.


----------



## aas88keyz

Quote:


> Originally Posted by *bfromcolo*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Atomfix*
> 
> Don't jinx it now! lol, Everytime I notice my GPU is folding a FahCore 16, I stop and delete it, and FahCore 17 gets downloaded straight after lol
> 
> 
> 
> I have considered doing this but there is some algorithm that stops you from collecting QRB points after you fail 10 out of some number of units, and then you need to earn your way back to respectability with successful units. I think it is best to just call it the luck of the draw and process whatever you get.

Quote:


> Originally Posted by *tictoc*
> 
> In order to qualify for QRB you must fold a minimum of 10 WU's, and maintain an 80% success rate on SMP and QRB GPU units. If you have just started folding, you could easily drop below the 80% success rate.


Quote:


> Originally Posted by *Krusher33*
> 
> You shouldn't delete projects anyways. They need the results from them units.


THIS^ I can vouch for it. I have been working at optimizing my folding for a long time now and could not figure out why my SMP client would not make more than 11k PPD the whole time. I was doing everything right, except I had changed clients, going back and forth between v6 and v7, making updates and trying all scenarios. Sometimes the clients would drop the WU and other times I would delete them on my own. Finally I settled on just plain folding for a while with no changes or interruptions. I lost count of how many WUs I did in a row, but I finally went from 11k PPD on my FX-8120 to 26k PPD. Now I don't get any WUs under 19k PPD for SMP on 7 CPU cores, and 1 core handles my GPU units, either 15 or 17; I will take both. For science, of course, and for reputation, I now see. So yes, let's not delete the work they are sending us, because it is all just as important. And if that is not enough, consider the success rate of your QRB units.

Keep on foldin'!


----------



## Caleal

Quote:


> Originally Posted by *Atomfix*
> 
> Don't jinx it now! lol, Everytime I notice my GPU is folding a FahCore 16, I stop and delete it, and FahCore 17 gets downloaded straight after lol












Never do that!


----------



## joker927

I have been doing it too, but not for points. Even if I set SMP to one core less than max to allow that CPU core to be used if/when I get a non-core-17 WU, my 7950 drops to 50% GPU usage with those WUs and takes a LONG time to complete them. I have completed thousands of WUs and I feel little guilt deleting a few that don't work well on my system.


----------



## Krusher33

*sigh* Yeah. The 7000s are not optimized for core 16s, and on top of that, the drivers after 12.8 are not very nice. The SDK in the post-12.8 drivers did a beating on all AMD GPUs.

But please keep this in mind when you delete a core 16 unit: the core17's are just beta units and aren't really going towards science. The core16 has been assigned to you, and Stanford is expecting to get results back for research. I suspect the reason for the sudden randomness of core16's and core17's is that SO MANY people suddenly switched to core17's and not many core16's were getting worked on.


----------



## bfromcolo

I asked this once before, but I'll try again. What is the optimum configuration for a 7850, assuming there will be a random mix of 16 and 17 WUs? It seemed like 17 worked best for me on 13.2 Beta 6 with the latest SDK, while 16 was best on 13.2 WHQL with the older SDK. And I guess setting a core aside is necessary if I plan to leave this unattended. I am going to guess that if I am sacrificing a core anyway, I might as well run the old SDK and try to get the 16s done as fast as possible?


----------



## Krusher33

I think using the older SDK would be best? I don't have a GPU anymore at the moment, so I can't test to see for myself.

My theory is that even though there's a bit of a drop in PPD on Core17's with the older SDK, at least it gets the Core16's done faster, in hopes you'd land on a core17 more often.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> I think using the older SDK would be best? I don't have a GPU anymore at the moment, so I can't test to see for myself.
> 
> My theory is that even though there's a bit of a drop in PPD on Core17's with the older SDK, at least it gets the Core16's done faster, in hopes you'd land on a core17 more often.


Catch-22, I guess: to get good PPD with core 17 you need the newer SDK and drivers, and to keep the lower GPU usage.


----------



## Krusher33

Right but I can't remember the PPD difference when I tried the new drivers vs your modded ones.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> Right but I can't remember the PPD difference when I tried the new drivers vs your modded ones.


10-20K I think, and my modded ones don't have the lower CPU usage like the newer ones.


----------



## Krusher33

Sucks but it sounds like we would have to swap out drivers each time the units changed to get the most out of it.


----------



## bfromcolo

Quote:


> Originally Posted by *Krusher33*
> 
> Sucks but it sounds like we would have to swap out drivers each time the units changed to get the most out of it.


That's what I was afraid of, and it assumes you're there watching every time a work unit completes, to reconfigure if necessary.


----------



## tictoc

Quote:


> Originally Posted by *Bal3Wolf*
> 
> 10-20K i thk and my modded ones dont have the lower cpu usage like the newer ones.


With the latest drivers, the TPF on my 7970 is double the TPF of the modded or custom drivers on the core_16 WUs. With the older SDK, core_17 WUs get 7k fewer PPD.

I would recommend the modded drivers, since a core_16 WU takes more than 5 hours to complete on the latest drivers. That adds up to quite a few more points than the 7k you lose on the core_17 WUs.
If every other WU is a core_16, which is what I have been getting, you should see max PPD on the older SDK drivers.


----------



## mosi

Hmm, what's with all this core_16 and _17? Somehow my GPUs are munching core_15 WUs right now. Am I doing something wrong?
%AppData%\Roaming\FAHClient\cores\www.stanford.edu\~pande\Win32\AMD64\NVIDIA\Fermi just holds core_15.fah and nothing else, so my rig probably has never seen _16 or _17.

I've just added two slots and specified client-type = beta for both of them. The client is 7.3.6.


----------



## aas88keyz

Quote:


> Originally Posted by *mosi*
> 
> Hmm, what's with all this core_16 and _17? somehow my gpu's are munching core_15 wu's right now. Am I doing something wrong?
> %AppData%\Roaming\FAHClient\cores\www.stanford.edu\~pande\Win32\AMD64\NVIDIA\Fermi just holds core_15.fah and nothing else so my rig probably has never seen _16 or _17.
> 
> I've just added two slots and specified client-type = beta for both of them. client is 7.3.6


No worries about core_15. As far as I know, core_15 is for NVIDIA cards and core_16 is for AMD GPUs. They share the beta units, core_17.


----------



## joker927

Anyone have a link to proof that beta units offer less to no science benefit?


----------



## Wheezo

http://fah-web.stanford.edu/cgi-bin/fahproject.overusingIPswillbebanned?p=7662

Nothing "medical" about the 7662.


----------



## Anthony20022

Quote:


> Originally Posted by *joker927*
> 
> Anyone have a link to proof that beta units offer less to no science benefit?


They definitely have scientific benefit; otherwise Pande lab wouldn't waste time and computational power making and distributing them. Core 17 is the future of GPU folding, so it's very important from a software perspective. According to the link Wheezo provided, Project 7662's goal is essentially testing the protein folding results obtained from another process for accuracy, so it certainly has scientific and medical implications.


----------



## mosi

Quote:


> Originally Posted by *aas88keyz*
> 
> No worries about core_15. As far as I know core_15 is nvidia cards and core_16 is AMD gpu's. They share beta units, core_17.


Ah good! I was wondering if that _15 core was something from an ancient past or so








Thanks for the explanation


----------



## PR-Imagery

Okay, so I've swapped out my 7970 for a 580; it's in with a 6670 for a third monitor. I can fold core15 units fine, but all core17 units fail immediately after download.

Any ideas?

*The index trick seemed to work and units stopped failing, but I get zero GPU load even though the unit percentage keeps advancing.


----------



## Krusher33

Quote:


> Originally Posted by *PR-Imagery*
> 
> *The index trick seemed to work, units stopped failing, but got zero gpu load but unit percentage is running


That's a crash. You'll need to restart the work by pausing/restarting it.


----------



## Majorhi

Does this look right? Running 13.1. 1000/1115 on 6870's. One core set aside for each GPU.


----------



## PR-Imagery

Quote:


> Originally Posted by *Krusher33*
> 
> Quote:
> 
> 
> 
> Originally Posted by *PR-Imagery*
> 
> *The index trick seemed to work, units stopped failing, but got zero gpu load but unit percentage is running
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That's a crash. You'll need to restart the work by pausing/restarting it.

No dice. Tried with 295.73 and it worked, but I lost Aero, which made everything laggy as hell.


----------



## [CyGnus]

Quote:


> Originally Posted by *Majorhi*
> 
> Does this look right? Running 13.1. 1000/1115 on 6870's. One core set aside for each GPU.


With core 17 you don't need to dedicate any cores; you can run SMP on all cores and the GPU with no problems


----------



## Majorhi

I shall test your theory on the SMP folding.


----------



## Krusher33

Quote:


> Originally Posted by *PR-Imagery*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> Quote:
> 
> 
> 
> Originally Posted by *PR-Imagery*
> 
> *The index trick seemed to work, units stopped failing, but got zero gpu load but unit percentage is running
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> That's a crash. You'll need to restart the work by pausing/restarting it.
> 
> 
> No dice, tried with 295.73 and it worked but lost Aero, which made everything laggy as hell.

Lost Aero? I hate to say it but something has to be unstable then.


----------



## PR-Imagery

It was disabled after installing 295.73 and I couldn't re-enable it; switching back to the 7970 for now though.
It was actually folding a core17 unit, which it completed, but without Aero the desktop is completely unusable.


----------



## Krusher33

Sounds like a mess I wouldn't want to deal with. I'm out of ideas on it sadly. The last thing I can think of is maybe driver conflict or something.


----------



## PR-Imagery

Well I did have it in with a 6670, I know it can be done, it was working fine with core15 units, but guess this particular setup doesn't want to play nice with core17.


----------



## Bal3Wolf

All my boxes are getting core 17 work units only; haven't seen any core 16 ones.


----------



## Doc_Gonzo

Quote:


> Originally Posted by *Bal3Wolf*
> 
> All my boxes are getting core 17 work units only; haven't seen any core 16 ones.


Both my computers have been getting them consistently for the last two or three days now. Hopefully, they've made more available


----------



## Caleal

Quote:


> Originally Posted by *PR-Imagery*
> 
> It was disabled after installing 295.73 and couldn't re-enable, switching back to the 7970 for now tho.
> It was actually folding a core17 unit tho, which it completed, but without aero the desktop is completely unusable.


I also had issues with instantly failing WUs with the 295.xx drivers

Install the 266.58 or 266.66 drivers; they give the best performance on both core_15 and core_17 WUs with a 580, and core_17.exe will only use around 1% of your CPU, with bumps up to 3-4% for a few seconds each frame.

If you use the system for gaming, and don't plan on overclocking your 580 past 1000 MHz, the 3xx drivers work, but core_17.exe will use up an entire CPU core.


----------



## PR-Imagery

Oh it worked with 295, it failed with the 266, I had aero with 266 but it wouldn't fold.

I had originally installed 266, and after doing the index tweaks it stopped failing units but didn't get any GPU load. I uninstalled, did a sweep, and installed 295.73, but Aero stopped working. I was able to actually complete a core17 unit with 295, but without Aero the system is unusable.


----------



## FIX_ToRNaDo

Same here, been seeing 17s for 3-4 days straight.
Quote:


> Originally Posted by *Majorhi*
> 
> I shall test your theory on the SMP folding.


If I remember correctly, you should see some improvement with the core 17 units on the 13.3 beta drivers.


----------



## Henke1k

My WU is using 25% of my processor; with 0x15 it only used between 2~3%, but now I had to stop SMP4.



Any suggestions?


----------



## Caleal

Quote:


> Originally Posted by *PR-Imagery*
> 
> Oh it worked with 295, it failed with the 266, I had aero with 266 but it wouldn't fold.
> 
> I had originally installed 266 and after doing the index tweaks it stopped failing units but didn't get any gpu load. I uninstalled, did a sweep, and installed 295.73, but aero stopped working. I was able to actually complete a core17 unit with 295, but without aero the system is un-useable.


When you install new Nvidia drivers, are you checking the box to do a "clean" install?


----------



## PR-Imagery

Yep.


----------



## 47 Knucklehead

Bring back 8057!


----------



## valvehead

Quote:


> Originally Posted by *PR-Imagery*
> 
> Yep.


Did you rerun WEI (Windows Experience Index)?

I have found that Windows wants WEI to be refreshed every time video drivers are changed. If you don't run it manually, Windows will pick some random time to run it (whether or not the system is idle). If it happens to run WEI while you are folding, Windows may decide that your computer can't handle Aero and will disable it.


----------



## PR-Imagery

Yep. Tried forcing it through the registry as well.


----------



## Gungnir

Quote:


> Originally Posted by *FIX_ToRNaDo*
> 
> Same here, been seeing 17s for 3-4 days straight.
> If I remember well, with the 13.3 beta drivers you should see some improvement with the core 17 units.


I seem to have gotten a couple thousand PPD extra with the April 16 unofficial beta over 13.3 b3, as well, though I can't give exact numbers. Incidentally, it also seems to run a good bit cooler now, so I can comfortably fold at 1000/1600


----------



## Krusher33

Quote:


> Originally Posted by *Henke1k*
> 
> My WU is using 25% of my processor; with 0x15 it only used between 2~3%, but now I had to stop SMP4.
> 
> 
> 
> Any suggestions?


Nvidia cards use CPU resources with x17s, unfortunately.


----------



## labnjab

Quote:


> Originally Posted by *Krusher33*
> 
> Nvidia cards use CPU resources with x17s, unfortunately.


Yeah, my 570s each use ~12% CPU on core 17.


----------



## valvehead

Quote:


> Originally Posted by *labnjab*
> 
> Yeah, my 570s use ~12% cpu usage each on core 17.


Does 266.58 not work with your 570s?


----------



## Caleal

With 266.58 or 266.66 drivers, CPU usage by core_17.exe will drop to around 1%, AND GPU ppd will go up.


----------



## labnjab

Quote:


> Originally Posted by *valvehead*
> 
> Does 266.58 not work with your 570s?


To be honest, I haven't tried. The main reason is it's a gaming rig and I don't want to downgrade my drivers, but I always run smp6 on my CPU (even before core 17), so my 570s get 2 free threads anyway; there's really no need for me to drop the GPU's CPU usage


----------



## valvehead

Quote:


> Originally Posted by *labnjab*
> 
> To be honest, I haven't tried. The main reason is its a gaming rig and I don't want to downgrade my drivers, but I always run smp6 on my cpu (even before core 17), so my 570's get 2 free threads anyways, so there's really no need for me to drop the gpu usage


That's understandable. Swapping this late in the game is probably not worth the hassle.

It's a pain to swap drivers too often, so I generally stick with whatever works for as long as possible. However, I figured I would switch drivers this time since I wouldn't be using my main PC during the 10-day event. The 266.58 driver raised the GPU PPD by 2k, and the CPU gained about 8k when I switched it from SMP7 to SMP8.


----------



## aas88keyz

The day they opened these beta units to the public is the day I set up my PC to fold them. I did it as successfully as a 560 Ti 448 could, and was making halfway decent points. But I wanted to see what I would get with the core_15s and smp 8 on my FX-8120. I tried many different ways to fold GPU and full SMP but was unsuccessful (that is another thread). I finally switched back to 7 SMP and changed the client-type for the GPU back to beta. That should be it, right? Just change the client-type? Yet now I have a shortage of core_17s; I haven't gotten a single one in at least 4 days. What am I doing wrong? Are they completely dried up? Help would be appreciated. Thanks.


----------



## bfromcolo

I have had problems returning results for the last 2 completed x17 WUs: send errors, both requiring a client restart to resolve. The system is also folding SMP and I have had no problems returning the results of those units. It's throwing away points when it sits idle on a send error for hours.


----------



## proteneer

core17 WU p7662 stopped in preparation for something.


----------



## Anthony20022

Quote:


> Originally Posted by *proteneer*
> 
> core17 WU p7662 stopped in preparation of something.


Interested to see what it will be!


----------



## joker927

I have a PCIe 16x slot free. Should I buy a GPU now or wait until 17 goes public and ppd is better understood?


----------



## labnjab

I had core 17 on both gpus throughout the CC and ever since it ended I've been back to core 15, lol.


----------



## ZDngrfld

Quote:


> Originally Posted by *labnjab*
> 
> I had core 17 on both gpus throughout the CC and ever since it ended I've been back to core 15, lol.


Didn't read three posts back?







Quote:


> Originally Posted by *proteneer*
> 
> core17 WU p7662 stopped in preparation of something.


----------



## labnjab

Quote:


> Originally Posted by *ZDngrfld*
> 
> Didn't read three posts back?


Guess not, lol. I can't wait to see what's next.


----------



## 47 Knucklehead

Well, looks like it's time to retire yet another older PC from my Folding Farm. My old dual core E5200 OC'd @ 2.8GHz. Ever since Core 17 came out, it has been folding for CRAP. I mean it never was a GREAT machine, only pulling about 2200-2500 PPD, but now, it's totally worthless at 589 PPD ... and that is under Linux!

Rest in peace, "Little Jr." It was good having you in the Folding Farm.


----------



## Krusher33

Got old and tired eh? Is it going to be in a nursing home somewhere or what?


----------



## 47 Knucklehead

Quote:


> Originally Posted by *Krusher33*
> 
> Got old and tired eh? Is it going to be in a nursing home somewhere or what?


Hehehe, nope. It's still on my work desk as my 2nd computer. God knows how handy it is just having two computers up at the same time. It really aids me when I do customer software upgrades. I put their old software and configuration on that machine and their new software and config on my other machine, and I can pull up both versions side by side and verify that the upgrade translation actually did everything.









I'm just surprised that in a period of just 1 month, the PPD is now only 10% of what it once was. Stanford really must be trying to get rid of old hardware by lowering PPD, even though it has the same computational power as before.


----------



## Krusher33

Yeah, I've converted my old laptop that lost its screen into a mini desktop of sorts. Really nice to have especially when overclocking the main computer.


----------



## PimpSkyline

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> Got old and tired eh? Is it going to be in a nursing home somewhere or what?
> 
> 
> 
> Hehehe, nope. It's still on my work desk as my 2nd computer. God knows how handy it is just having two computers up at the same time. It really aids me when I do customer software upgrades. I put their old software and configuration on that machine and their new software and config on my other machine, and I can pull up both versions side by side and verify that the upgrade translation actually did everything.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I'm just surprised that in a period of just 1 month, the PPD is now only 10% of what it once was. Stanford really must be trying to get rid of old hardware by lowering PPD, even though it has the same computational power as before.

I have noticed that. I miss the 8057 WU, and now they take away the 7662 WU? Makes me sad...


----------



## Hawk777th

How do you know what is a core 15 etc?


----------



## Anthony20022

Quote:


> Originally Posted by *Hawk777th*
> 
> How do you know what is a core 15 etc?


It's in FAHControl, labeled FahCore under Selected Work Unit.


----------



## bfromcolo

Fired up my 7850 for the first time since the CC. It sucks to have to give up a core to get 4k per day, versus 16k with the 17s. Totally not worth it. Tried 13.4 but it's no better. Back to BOINC with the GPU, I guess, until we get some software that makes it worthwhile.


----------



## Henke1k

Thanks dude!


----------



## proteneer

ps don't worry core17 will be back


----------



## Krusher33

The wait is driving me nuts.


----------



## martinhal

Quote:


> Originally Posted by *Krusher33*
> 
> The wait is driving me nuts.


Add me to the list


----------



## Krusher33

Quote:


> Originally Posted by *martinhal*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> The wait is driving me nuts.
> 
> 
> 
> Add me to the list

Krusher33
martinhal

done


----------



## tictoc

Quote:


> Originally Posted by *Krusher33*
> 
> Krusher33
> martinhal
> *tictoc*
> 
> done


----------



## Asustweaker

Quote:


> Originally Posted by *martinhal*
> 
> Add me to the list


Me too. My 480s were chugging along at a great 39k-ish, and I was able to overclock a decent amount higher with the core17s too. Once they come back, I'll break down my rig and try to diagnose why the cards are being stupid.


----------



## Krusher33

The waitlist is growing proteneer.

Krusher33
martinhal
tictoc
Asustweaker


----------



## WLL77

Add me as well


----------



## Asustweaker

OH.... While you're at it, make it work in Linux too
















hehe


----------



## Krusher33

Quote:


> Originally Posted by *WLL77*
> 
> Add me as well


The waitlist is growing proteneer.

Krusher33
martinhal
tictoc
Asustweaker
WLL77
Quote:


> Originally Posted by *Asustweaker*
> 
> OH.... While you're at it, make it work in Linux too
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> hehe


http://www.overclock.net/t/1365267/linux-gpu-core/0_50#post_19392340


----------



## Asustweaker

I already voted and posted in there. Just waiting for a stable and native Linux GPU core.


----------



## Krusher33

Does Linux even have stable AMD drivers?


----------



## Asustweaker

Eh, the last time I tried to use ATI in Linux it was a disaster. No multiple monitors, driver crashes. Just a keyboard-breaking mess.


----------



## Gungnir

Quote:


> Originally Posted by *Asustweaker*
> 
> eh. the last time i tried to use ati in linux, it was a disaster. No multiple monitor, driver crashes. Just a keyboard breaking mess.


When? I'm running my 7950 with Catalyst 13.1 on Chakra Linux right now; they're not quite as good as their Windows counterparts, but I'd say they're at least as good as the mid-2012 Win7 drivers (e.g. 12.6/12.7). Definitely usable, at least, for both gaming and compute.


----------



## Asustweaker

I'm sure they are better now. But what about multiple monitor (extended, independent resolution) support??


----------



## Gungnir

Not sure about independent resolution monitors, but it seems to work fine with two 1080p monitors; just be sure to run the aticonfig command.


----------



## Krusher33

Quote:


> Originally Posted by *Gungnir*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Asustweaker*
> 
> eh. the last time i tried to use ati in linux, it was a disaster. No multiple monitor, driver crashes. Just a keyboard breaking mess.
> 
> 
> 
> When? I'm running my 7950 with Catalyst 13.1 on Chakra Linux right now; they're not quite as good as their Windows counterparts, but I'd say they're at least as good as the mid-2012 Win7 drivers (e.g. 12.6/12.7). Definitely usable, at least, for both gaming and compute.

That's good to hear. Like asustweaker, I had a hell of a time getting a Linux HTPC going with an AMD card. After research I saw that AMD wasn't getting much support so I scratched the idea.


----------



## jomama22

They have started internal testing of OpenMM 5.1, which will be included in a new core 17. That's the 2x PPD core 17 that was announced back in March.


----------



## runs2far

Quote:


> Originally Posted by *jomama22*
> 
> They have started internal testing of openmm 5.1 which will be included in a new core 17. That's the 2x PPD estimate core 17 that was announced back in march.


Looking good









Getting core 16 units on an AMD card just doesn't generate enough PPD to make it worth the power usage.


----------



## strych9

I'm folding on a 5770 with Cat 13.1 w/ SDK 2.7 on FAHClient 7.3.6, but haven't been able to get a core 17 work unit yet. Am I doing anything wrong?


----------



## martinhal

Quote:


> Originally Posted by *strych9*
> 
> I'm folding on a 5770 with Cat 13.1 w/ SDK 2.7 on FAHClient 7.3.6, but haven't been able to get a core 17 work unit yet. Am I doing anything wrong?


Core 17s are not in the wild at the moment. I miss my 45K PPD per card...


----------



## Durquavian

Yeah... I'm just gonna post here so I can keep up with the core 17 re-release. These 16s are slow


----------



## Krusher33

Use the GPU on Einstein in BOINC instead till the core 17s come back. Help out the OCN team in the Pentathlon.









http://www.overclock.net/t/1371812/4th-boinc-pentathlon-may-5th-18th-2013-signup-form-is-up/0_50


----------



## PimpSkyline

Quote:


> Originally Posted by *Krusher33*
> 
> Use the GPU on Einstein in BOINC instead till core 17's comes back. Help out the OCN team in the Pentathlon.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.overclock.net/t/1371812/4th-boinc-pentathlon-may-5th-18th-2013-signup-form-is-up/0_50


that BOINC WU sucks on GPU, it's using 40% of my 580 and 100% of my CPU??? Thought it was a GPU WU??


----------



## Krusher33

Quote:


> Originally Posted by *PimpSkyline*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> Use the GPU on Einstein in BOINC instead till core 17's comes back. Help out the OCN team in the Pentathlon.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> http://www.overclock.net/t/1371812/4th-boinc-pentathlon-may-5th-18th-2013-signup-form-is-up/0_50
> 
> 
> 
> that BOINC WU sucks on GPU, it's using 40% of my 580 and 100% of my CPU??? Thought it was a GPU WU??

They have tasks for both CPU's and GPU's.

Under the projects tab in the client, click the "Your Account" button. Then in the browser click the "Einstein@Home preferences" link, and click "Edit Einstein@Home preferences" at the bottom. Change it to NOT use the CPU and only use the GPU.

Go back to the client. You can either abort all the CPU tasks or let them complete. Personally I just aborted them, but that's up to you of course. The GPU ones are "Binary Radio Pulsar Search".

Lastly, on the "Edit Einstein@Home preferences" page, the "GPU utilization factor of BRP apps" determines how many GPU tasks run at a time: 1 = 1 task, .5 = 2 tasks, .33 = 3 tasks, .25 = 4 tasks. I only saw my GPU % go from 75% to 85%, to 90%, then to 93%, so I think the 7970s can handle 4 tasks at a time quite well; the time per task only went up by a couple of minutes. I don't know about other cards though.

Do remember that each of the GPU tasks also uses a bit of CPU, so you may need to reserve a core as well.
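The factor-to-task mapping listed above is just the reciprocal, rounded down. A quick sketch (the helper function is hypothetical, for illustration only, not part of any BOINC client):

```python
# Hypothetical helper illustrating the mapping above: Einstein@Home's
# "GPU utilization factor" is the fraction of the GPU one task claims,
# so the number of concurrent tasks is the reciprocal, rounded down.
def concurrent_gpu_tasks(utilization_factor: float) -> int:
    if not 0.0 < utilization_factor <= 1.0:
        raise ValueError("utilization factor must be in (0, 1]")
    return int(1.0 / utilization_factor)

for factor in (1.0, 0.5, 0.33, 0.25):
    print(f"{factor} -> {concurrent_gpu_tasks(factor)} task(s)")
```

So .33 gets you 3 tasks because 1/0.33 rounds down to 3, which is why the project uses values like .33 rather than exact thirds.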


----------



## goodtobeking

Quote:


> Originally Posted by *Krusher33*
> 
> They have tasks for both CPU's and GPU's.
> 
> Under the projects tab in the client, click the "Your Account" button. Then in the browser click the "Einstein@Home preferences". Click "Edit Einstein@Home preferences" at the bottom. Change it to NOT use the CPU and only use GPU.
> 
> Go back to client. You can either abort all the CPU tasks or let it complete them. Personally I just aborted them but that's up to you of course. The GPU ones are "Binary Radio Pulsar Search".
> 
> Lastly on the "Edit Einstein@Home preferences" page, the "GPU utilization factor of BRP apps" will determine how many GPU tasks to run at a time. 1 = 1 task, .5 = 2 tasks, .33 = 3 tasks, .25 = 4 tasks. I only saw my GPU % go from 75% to 85% to 90% then to 93%. I think the 7970's can handle the 4 tasks at 1 time quite well because I only saw the time it takes go up by a couple of minutes. I don't know about other cards though.
> 
> Do remember that each of the GPU tasks also uses a bit of CPU, *so you may need to reserve a core as well*.


This is all true. But remember Einstein@Home will automatically reserve 0.5 CPU per WU. So if you run 2x 6970s each running 2x WUs, like me, it will reserve two threads by itself.
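That reservation arithmetic can be sketched like so (assuming a flat 0.5 CPU per GPU work unit, as reported above; the function name is made up for illustration):

```python
import math

# Sketch of the reservation rule described above: Einstein@Home reserves
# roughly 0.5 CPU per GPU work unit, so the reserved thread count scales
# with the number of GPUs and WUs per GPU. Names are illustrative only.
def reserved_cpu_threads(gpus: int, wus_per_gpu: int, cpu_per_wu: float = 0.5) -> int:
    return math.ceil(gpus * wus_per_gpu * cpu_per_wu)

# Two 6970s, two WUs each: two threads set aside automatically.
print(reserved_cpu_threads(2, 2))
```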


----------



## ZDngrfld

New Core 17 units are out there now. My GTX 670 seems to be pulling ~63k PPD at stock clocks on a P7663 according to FAHControl.


----------



## Krusher33

Quote:


> Originally Posted by *ZDngrfld*
> 
> New Core 17 units are out there now. My GTX 670 seems to be pulling ~63k PPD at stock clocks on a P7663 according to FAHControl.


Thanks for the heads up!

Core "Zeta"?


----------



## Wheezo

How's the CPU usage on the new ones? Same as before? I grabbed a BETA but it's using a full thread. I downgraded drivers for BOINC so not sure if that would cause the problem. On modified 13.2 right now.


----------



## ZDngrfld

Quote:


> Originally Posted by *Wheezo*
> 
> How's the CPU usage on the new ones? Same as before? I grabbed a BETA but it's using a 1 thread. I downgraded drivers for BOINC so not sure if that would cause the problem. On modified 13.2 right now.


I'm seeing ~8% usage with my GTX 670. Guess we'll have to wait and see what the usage is like on AMD cards.


----------



## Anthony20022

Quote:


> Originally Posted by *ZDngrfld*
> 
> New Core 17 units are out there now. My GTX 670 seems to be pulling ~63k PPD at stock clocks on a P7663 according to FAHControl.


Just turned mine on and got one. My 7950 @ 1GHz is getting 87K according to FAHControl, 80K according to HFM.


----------



## nova4005

My 7970 @ 1100 MHz is at 104k according to FAHControl.


----------



## Gungnir

Quote:


> Originally Posted by *ZDngrfld*
> 
> I'm seeing ~8% usage with my GTX 670. Guess we'll have to wait and see what the usage is like on AMD cards.


I'm seeing ~1-2% CPU on my 7950 with 13.5b2. No hit in performance running these with 3 threads of SIMAP and one thread of WCG.

EDIT: Occasionally jumps up to ~22% for a couple seconds, but goes back down to practically nothing afterwards.


----------



## Krusher33

Quote:


> Originally Posted by *Wheezo*
> 
> How's the CPU usage on the new ones? Same as before? I grabbed a BETA but it's using a 1 thread. I downgraded drivers for BOINC so not sure if that would cause the problem. On modified 13.2 right now.


I'm on the 13.4 normal driver. I'm seeing CPU usage bouncing from 0 to 19%, and GPU usage at 99% with an intermittent drop to the 40s for a second.

I'm not sure how I'm going to run BOINC on the CPU at the same time. I had BOINC going, but the GPU was haywire till I turned it off.


----------



## nova4005

Quote:


> Originally Posted by *Krusher33*
> 
> I'm on 13.4 normal driver. I'm seeing CPU usage bouncing from 0 to 19%. GPU usage at 99% with a drop to the 40's for a second intermittently.
> 
> I'm not sure how I'm going to BOINC on the CPU at the same time. I had BOINC going but GPU was haywire till I turned it off.


I am running 6 threads of SIMAP on my 3770K while folding on my 7970 and 7950, which puts CPU usage at 90% load. This has my GPUs loaded at 95-99%.


----------



## Anthony20022

Quote:


> Originally Posted by *Krusher33*
> 
> I'm on 13.4 normal driver. I'm seeing CPU usage bouncing from 0 to 19%. GPU usage at 99% with a drop to the 40's for a second intermittently.
> 
> I'm not sure how I'm going to BOINC on the CPU at the same time. I had BOINC going but GPU was haywire till I turned it off.


It should work fine if you reduce the number of BOINC tasks so there are some spare cycles for the GPU. I'm running 7 BOINC tasks on my CPU and Folding on my GPU right now.


----------



## Krusher33

Quote:


> Originally Posted by *nova4005*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> I'm on 13.4 normal driver. I'm seeing CPU usage bouncing from 0 to 19%. GPU usage at 99% with a drop to the 40's for a second intermittently.
> 
> I'm not sure how I'm going to BOINC on the CPU at the same time. I had BOINC going but GPU was haywire till I turned it off.
> 
> 
> 
> I am running 6 threads of simap on my 3770k while folding on my 7970 and 7950, which puts cpu usage at 90% load. This has my gpus loaded from 95-99%.

Quote:


> Originally Posted by *Anthony20022*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> I'm on 13.4 normal driver. I'm seeing CPU usage bouncing from 0 to 19%. GPU usage at 99% with a drop to the 40's for a second intermittently.
> 
> I'm not sure how I'm going to BOINC on the CPU at the same time. I had BOINC going but GPU was haywire till I turned it off.
> 
> 
> 
> It should work fine if you reduce the number of BOINC tasks so there are some spare cycles for the GPU. I'm running 7 BOINC tasks on my CPU and Folding on my GPU right now.

I'm trying to figure out how the heck to do that.


----------



## arvidab

:eeek:


----------



## Anthony20022

Quote:


> Originally Posted by *Krusher33*
> 
> I'm trying to figure out how the heck to do that.


Go to Tools -> Computing Preferences and reduce the number in the box "On multiprocessor systems, use at most ___% of the processors."
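As a rough sketch of what that percentage box works out to in threads (assuming BOINC simply floors the product — the exact rounding is an assumption here, not taken from BOINC's source):

```python
import math

# Rough sketch of the "use at most N% of the processors" setting, assuming
# BOINC floors total_threads * percent / 100. The function is illustrative,
# not BOINC's actual implementation.
def boinc_cpu_tasks(total_threads: int, percent: float) -> int:
    return max(1, math.floor(total_threads * percent / 100.0))

# 7 of 8 threads corresponds to 87.5%, leaving one thread to feed the GPU.
print(boinc_cpu_tasks(8, 87.5))
```

On an 8-thread CPU, anything between 87.5% and 99% leaves exactly one thread free for the GPU's feeder process.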


----------



## Wheezo

Quote:


> Originally Posted by *ZDngrfld*
> 
> I'm seeing ~8% usage with my GTX 670. Guess we'll have to wait and see what the usage is like on AMD cards.


Yup, just clean-installed the 13.5 betas from AMD's website and I'm seeing about 3-6% usage on the process with my HD 7870.


----------



## Gungnir

I want to run Einstein, but >81k PPD is hard to pass up...


----------



## Krusher33

Quote:


> Originally Posted by *Anthony20022*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> I'm trying to figure out how the heck to do that.
> 
> 
> 
> Go to Tools -> Computing Preferences and reduce the number in the box "On multiprocessor systems, use at most ___% of the processors."

I just tried that. I set it to 87% and at first saw my GPU usage bouncing from 95-98% when it was a solid 98% before. Never mind; it just took time before it became solid again.


----------



## Wheezo

...

I'm waiting for it to get to "normal" values, but it hasn't yet...

7870:


----------



## Krusher33

My rig working pretty good. I'm excited! (slightly, will be stopping for Einstein focus in the BOINC Pent).


----------



## [CyGnus]

My 7870 is getting 60k PPD on this new ZETA core







P7663. Finally AMD gets some love


----------



## nova4005

Quote:


> Originally Posted by *Krusher33*
> 
> My rig working pretty good. I'm excited! (slightly, will be stopping for Einstein focus in the BOINC Pent).


Looking good Krusher!









I think once my 7950 finishes a core 16 and hopefully gets a core 17, I will be at 200k PPD! These core 17s are sweet!!


----------



## Krusher33

I'll be upping my clocks some more when I get my waterblock tomorrow.


----------



## Durquavian

HOLY CRAP. These new WUs really are new, aren't they? My crappy 7770 is getting 27k PPD


----------



## jomama22

3 7970s getting 120-130k PPD each @ 1250/1800 on two and 1320/1800 on one.

Knew these were coming soon. Too bad they weren't here for the Chimp Challenge.


----------



## dylwing23

Little confused here. Just started folding today and got a 7663 WU on my 7950. GPU usage is hovering at 98%, but the client shows no change after like 3 hours; it just says running and 0%. Here's a pic.


nvm, fixed; figured out the problem


----------



## PimpSkyline

Quote:


> Originally Posted by *Gungnir*
> 
> I want to run Einstein, but >81k PPD is hard to pass up...


Nuff Said...


----------



## bfromcolo

Wow, stock 7850 at 39k! Guess I'll knock a few of these out before BOINC starts doing Einstein@Home tomorrow.


----------



## DizZz

Did you guys see this?!? The future of folding


----------



## ZDngrfld

Quote:


> Originally Posted by *DizZz*
> 
> Did you guys see this?!? The future of folding


That's me


----------



## Krusher33

Holy freaking sh...


----------



## gboeds

Very underwhelmed with these on my GTX 460s, though. While they are slightly better than the previous core 17 version, PPD only bumped from about 15k to about 18.5k, still slightly less than they get folding 807* on core 15. Will fire up the 480s when I go to bed tonight and see what they do on those.

edit: any other NVIDIA folders having problems with the client crashing when trying to run the zeta core on 266.58 drivers? I had to update my drivers to get these to run.


----------



## valvehead

Quote:


> Originally Posted by *gboeds*
> 
> edit: any other NVIDIA folders having problems with the client crashing when trying to run zeta core on 266.58 drivers? Had to update my drivers to get these to run....


Yep. I don't have time to update my driver right now, so I had to drop the beta flag. Maybe tomorrow.


----------



## martinhal

How long will this last? Last round I was getting 40K PPD per card, 170-180K PPD for the rig, and got to 4 mil in no time; then they were gone and my poor 3770 had to go 24/7 to get 30-40K PPD.

I let the rig run when I left for work this morning and it was showing around 400K PPD; it's now messing with the big rigs. It seems kind of unfair: the 3770 will get 20K for 15 hours' work on a WU, and the 7970s will get 10K each for two hours' work.

Sadly my wife is going to shut it down as she needs to do some work...


----------



## ericeod

My 7970 OCed to 1125 MHz is getting ~110K PPD and my 3930K at 4.4GHz is getting ~60K PPD. This is a huge improvement from last week, when my combined points dropped below 35K PPD...


----------



## Hukkel

Guys, how do I add the beta flag in Folding@home on Windows 7?


----------



## ericeod

Quote:


> Originally Posted by *Hukkel*
> 
> Guys how do I add the Beta flag in Windows 7 [email protected]?


Here you go:
Quote:


> Originally Posted by *tictoc*
> 
> To enter the beta flag in the correct place follow this procedure:
> 
> Configure ---> Slots ---> gpu ---> Edit
> 
> Scroll down to the bottom of the folding slot configuration, and below "Extra slot options" click add, and then enter the beta flag.
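For reference, if the flag was applied correctly, that "Extra slot options" entry ends up in FAHClient's config.xml as roughly the following (a sketch from memory, with a made-up slot id; check your own config.xml rather than pasting this verbatim):

```xml
<slot id='1' type='GPU'>
  <client-type v='beta'/>
</slot>
```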


----------



## Hukkel

Thank you very much!!!


----------



## ericeod

Quote:


> Originally Posted by *Hukkel*
> 
> Thank you very much!!! +rep


tictoc deserves it, not me:
http://www.overclock.net/t/1323729/updated-amd-gpu-folding-on-12-11-beta-drivers-and-13-1-whql-drivers/330#post_19745005


----------



## Hukkel

I gave it to him then, but thanks for the link. My searches weren't working.


----------



## ericeod

Quote:


> Originally Posted by *Hukkel*
> 
> I gave it to him then, but thanks for the link. My searches weren't working.


You probably would not have found it by searching. I happened to have the same question a few weeks ago (during the Chimp Challenge), and he replied to my question.


----------



## Caleal

I guess I need to start looking at AMD cards.

My GTX 580 is only making ~56k PPD @ 1000 MHz, and I'm having to use Nvidia's newer drivers, which won't let me clock the card past 1000 MHz.


----------



## Hukkel

This is the situation right now. Who knows how it will be in 2 or 3 months time.


----------



## Doc_Gonzo

The CC and the Core 17 work units got my interest. Glad to hear that they are back!
I'll put a GPU on it after the Boinc Pentathlon finishes


----------



## labnjab

Decided to suspend Einstein@Home for a few hours on my main rig so I can run one unit of Core 17 before going back to Einstein@Home. I'm getting 41k PPD on each of my 570s at 875 MHz. That's around a 4k PPD increase per card over the last run of Core 17.

I'll let my 670 FTW run a Core 17 later this afternoon and see how it does.


----------



## Krusher33

Quote:


> Originally Posted by *Caleal*
> 
> I guess I need to start looking at AMD cards.
> 
> My GTX580 is only making ~56k ppd @1000 Mhz, and I'm having to use nVidia's newer drivers, that won't let me clock the card past 1000 Mhz.


Quote:


> Originally Posted by *labnjab*
> 
> Decided to suspend Einstein at home for a few hours on my main rig so I can run one unit of core 17 before going back to [email protected] I'm getting 41k ppd on each of my 570s at 875 mhz. That's around a 4k ppd increase per card over the last run of core 17.
> 
> Ill let my 670 ftw run a core 17 later this afternoon and see how it does.


Somewhere I saw someone say their Titans were getting a good 110k PPD. That is if you want to stick to the green team.


----------



## ericeod

Quote:


> Originally Posted by *Krusher33*
> 
> Somewhere I saw someone say their Titans were getting a good 110k PPD. That is if you want to stick to the green team.


That's about what the 7970 is getting.

http://s83.photobucket.com/user/ericeod/media/7970core17wu_zps6229ab0d.jpg.html


----------



## Krusher33

Yeah, mine too at 1100 MHz. I'm getting a waterblock tonight and hopefully will have it installed. I probably won't get around to pushing it till tomorrow though, and at that time I'll be crunching [email protected].


----------



## twerk

Guys, is the core 17 WU out of beta? I've been folding for a week or so at about 50K-55K PPD with my sig rig, but today my PPD is over 100K?


----------



## cam51037

Quote:


> Originally Posted by *AndyM95*
> 
> Guys, is the core 17 WU out of beta? I've been folding for a week or so at about 50K-55K PPD with my sig rig, but today my PPD is over 100K?


I'm not at my PC right now, but I don't think it is yet; my 670 is only dropping the 3.8k units from the 14k projects, not the 10k Core 17 ones, and it doesn't have the beta flag on.


----------



## Outcasst

Confused as to why I'm not getting the bonus points from these.

Do you need to have completed ten of these Core 17 WU's before you get them or is it linked to the 10 SMP WU's?


----------



## Hukkel

My gtx670 @ 1070mhz is doing 63k PPD with the new beta.


----------



## cam51037

Quote:


> Originally Posted by *Hukkel*
> 
> My gtx670 @ 1070mhz is doing 63k PPD with the new beta.












Hopefully my 670 @ 1250 MHz can get some serious points then.


----------



## twerk

Quote:


> Originally Posted by *Hukkel*
> 
> My gtx670 @ 1070mhz is doing 63k PPD with the new beta.


My 680 @ 1.3GHz was doing 85K before








Just started a new WU and I'm down to 65K now though


----------



## Bal3Wolf

These are monsters for 7xxx cards; my 7970s @ 1150 are getting 120k PPD each.


----------



## Krusher33

Quote:


> Originally Posted by *Bal3Wolf*
> 
> These are monsters for 7xxx cards my [email protected] are getting 120k ppd each.


Are you clocked at 1150mhz? I'm only getting 110k PPD at 1150mhz.


----------



## $ilent

do we need to set any flags to get these high ppd units?


----------



## ZDngrfld

Quote:


> Originally Posted by *$ilent*
> 
> do we need to set any flags to get these high ppd units?


client-type beta
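If you'd rather set it by hand than through the GUI, the same option can go in FAHClient's config.xml. A minimal sketch — the slot id here is a placeholder for whatever your GPU slot actually is:

```xml
<config>
  <!-- GPU folding slot with the beta flag set as an extra slot option -->
  <slot id='0' type='GPU'>
    <client-type v='beta'/>
  </slot>
</config>
```

Stop the client before editing, then restart it so the slot picks up the flag.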


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> Are you clocked at 1150mhz? I'm only getting 110k PPD at 1150mhz.


1155/1650 per card. Seeing a solid 98% usage on the 13.5 betas, and the funny thing is my rads are putting out cold air. I'm not even hitting 40°C on either card; they're at 37°C right now after folding for over 2 hours.


----------



## Krusher33

Yeah. Folding for me is 10°C colder than 4 tasks of [email protected]. I was a bit shocked.

I'm going to try upping my memory; I'm still at stock on that. What voltage are you running on the memory?


----------



## Krusher33

Quote:


> Originally Posted by *kyfire*
> 
> Found this on another forum. Thought I'd repost it here. BTW this is from 5 May 2013
> 
> 
> 
> 
> 
> 
> A live Q&A is available on reddit.
> 
> Some of the key highlights are:
> -Up to 120,000 PPD on GTX Titan, and 110,000 PPD on HD 7970
> -Support for more diverse simulations
> -Linux support on NVIDIA cards and 64bit OSes
> -FAHBench updated to use the latest OpenMM and display version information
> 
> Full Transcript of the Talk:
> 
> Hi, I'm Yutong, a GPU core developer here at Folding@home. Today I want to give you guys an update on what we've been working on over the past few months. Let's take a look at the three major components of GPU core development. First off, we have OpenMM, our open source library for MD simulations. It's used by both FAHBench and Core 17. FAHBench is our official benchmarking tool for GPUs, and it supports all OpenCL-compatible devices. We're very happy to tell you guys that it's been recently added to Anandtech's GPU test suite. And Core 17 is what your Folding@home clients use to do science. By the way, all those arrows just mean that the entire development process is interconnected.
> 
> So let's take a step back in time.
> 
> Last year in October, we conceived Core 17. We had three major goals in mind: we wanted a core that was going to be faster, more stable, and able to support more types of simulations than just implicit solvent. But because of how our old cores 15 and 16 were written, it was in fact easier for us to write the new core from scratch.
> 
> So in November, we started rewriting some of the key parts to replace some pre-existing functionality. Two months later, in January, things started to come together. Our work server, assignment server, and client were modified to support Core 17. We also started an internal test team, for the first time ever, using an IRC channel on freenode to provide real-time testing feedback.
> 
> In February, Core 17 had a public beta of over 1000 GPUs, and we learned a lot of valuable things. One of them was that the core wasn't all that much faster on NVIDIA, it seemed, though on AMD things certainly looked brighter. Things still crashed occasionally, and bugs were certainly still present. So we went back to the drawing board to improve the core.
> 
> In April, we added a lot of new optimizations and bug fixes to OpenMM. We tested a Linux core on GPUs for the first time ever, and our internal testing team had grown to over 30 people. And that brings us to today.
> 
> We now support many more types of simulations, ranging from explicit solvent to large systems of up to 100,000 atoms. We improved the stability of our cores. We now have a sustainable code base. We added support for Linux for the first time. It's also really fast, so I'm sure the burning question on your mind is: just how fast is it? Well, let's take a look. On the GTX Titan, we saw it go from 50,000 points per day to over 120,000 points per day. On the GTX 680, we saw it go from 30,000 points per day to over 80,000 points per day. On the AMD HD 7970, we saw it go from 10,000 points per day to over 110,000 points per day. On the AMD HD 7870, we saw it jump from 5,000 points per day to over 50,000 points per day.
> 
> We never want to rest on our laurels for too long. We are already planning support for more Intel devices in the future, such as the i7s, integrated graphics cards, and Xeon Phis. We plan to add more projects to Folding@home as time goes on, so researchers within our group can investigate more systems of interest. And as always, we want things to be faster.
> 
> Now let's go back to the beginning again, and here's how you guys can help us. If you're a programmer, we invite you to contribute to the open source OpenMM project (available at the end of the month on github.com/simtk/openmm). If you're an enthusiast and like to build state-of-the-art computers, we encourage you to run FAHBench and join our internal testing team on freenode. If you're a donor, we'd like you to help us spread the word about Folding@home and bring more people, and their machines of course. Now before I wrap things up, there are some people I'd like to thank. Our internal testers are on the right-hand side, and they've been instrumental in providing me with real-time feedback regarding our tests. We couldn't have done it this fast without them. On the left-hand side are people within the Pande Group: Joseph and Peter are also programmers like me, Diwakar and TJ helped set up many of our projects, and Christian and Robert have always been there for support and feedback.
> 
> But wait, one last thing. This week, I'll be doing a Questions and Answers session on reddit at reddit.com/r/folding. So if you've got questions, come drop by and hang out with us. Thanks, and bye-bye.


----------



## labnjab

My gtx 670 FTW at the factory boost of 1150 mhz is getting 68k ppd







My 2 570 classifieds at 875 are only getting 41k ppd each. Looks like it may be time to upgrade the 570s to 670s if the core stays like this


----------



## arvidab

Hmm, saw a ASUS 7970 Matrix for sale earlier...

Quote:


> Originally Posted by *AndyM95*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Hukkel*
> 
> My gtx670 @ 1070mhz is doing 63k PPD with the new beta.
> 
> 
> 
> My 680 @ 1.3GHz was doing 85K before
> 
> 
> 
> 
> 
> 
> 
> 
> Just started a new WU and I'm down to 65K now though
Click to expand...

Quote:


> Originally Posted by *Caleal*
> 
> I guess I need to start looking at AMD cards.
> 
> My GTX580 is only making ~56k ppd @1000 Mhz, and I'm having to use nVidia's newer drivers, that won't let me clock the card past 1000 Mhz.


Whad'ya know, Kepler is now doing better than Fermi again.









Quote:


> Originally Posted by *Krusher33*
> 
> Somewhere I saw someone say their Titans were getting a good 110k PPD. That is if you want to stick to the green team.


Too bad they are over two times the price of a 7970.









Quote:


> Originally Posted by *labnjab*
> 
> Decided to suspend Einstein at home for a few hours on my main rig so I can run one unit of core 17 before going back to [email protected] I'm getting 41k ppd on each of my 570s at 875 mhz. That's around a 4k ppd increase per card over the last run of core 17.
> 
> Ill let my 670 ftw run a core 17 later this afternoon and see how it does.


Tsk, tsk, tsk, as a current member of Laundromatic, I order you to quit folding and go back to BOINC immediately!


















Quote:


> Originally Posted by *Outcasst*
> 
> Confused as to why I'm not getting the bonus points from these.
> 
> Do you need to have completed ten of these Core 17 WU's before you get them or is it linked to the
> 10 SMP WU's?


Do you have a bonus eligible passkey in place?


----------



## labnjab

Quote:


> Originally Posted by *arvidab*
> 
> Hmm, saw a ASUS 7970 Matrix for sale earlier...
> 
> Whad'ya know, Kepler is now doing better than Fermi again.
> 
> 
> 
> 
> 
> 
> 
> 
> Too bad they are over two times the price of 7970.
> 
> 
> 
> 
> 
> 
> 
> 
> Tsk, tsk, tsk, as a current member of Laundromatic, I order you to quit folding and go back to BOINC immediately!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Do you have a bonus eligible passkey in place?










My 570s are already back to BOINC. My 670 will be back after it finishes its current unit. Come on, I just had to see what they would do on Core 17. I couldn't wait until after the 13th.

But now they will be on BOINC nonstop till after the 13th, then back to heavy folding.


----------



## DizZz

Both my 7970s are getting about 125k PPD each @ 1290mhz


----------



## Shogon

My 690 at 1150 core is netting just over 150k PPD, GTX 580 at 928 core is around 53k.


----------



## [CyGnus]

7870 @ 1200MHz 65k PPD


----------



## cam51037

Arg! I have to wait until tomorrow to get my 670 on the beta flag, and then set up my 7850 as well.









Hopefully I can clock my 670 to around 1270 MHz, I think it's stable around there, and my 7850 to an easy 1050, maybe I'll see if I can do some more ocing on it as well.


----------



## jomama22

Getting 130k-136k per 7970 now.

440k-450k PPD total.


----------



## Shogon

Hehe now I want a 7970 for folding!


----------



## Hukkel

Does anyone know how much PPD the 7870XT gets?


----------



## [CyGnus]

I would say something around 75-80k, since I got 65k out of my 7870.


----------



## Hukkel

Basically, a 200-euro 7870 XT will get the same PPD as a GTX 680 that costs at least twice as much?
Sounds like the best price/performance for folding. Or almost the same as an HD 7970, which costs 330 euros and does 120k. That's about in line: roughly a 50% increase in price for a 50% increase in PPD.
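The guess above can be sanity-checked in a couple of lines of Python; the prices and PPD figures are just the rough numbers quoted in this thread, not benchmarks:

```python
# PPD per euro using the figures quoted in this thread (forum estimates,
# not official benchmarks; the 7870 XT's 80k PPD is the guess above).
cards = {
    "7870 XT": (80_000, 200),   # (PPD, price in EUR)
    "HD 7970": (120_000, 330),
}
for name, (ppd, price) in cards.items():
    print(f"{name}: {ppd / price:.0f} PPD/EUR")
```

On these numbers the 7870 XT comes out ahead at ~400 PPD/EUR versus ~364 for the 7970.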


----------



## Avonosac

My titan is loving this, I'm getting anywhere from 150k-165k readings at 1150mhz!

SWEETNESS!


----------



## PandaSPUR

My 7970 at stock speeds getting 110-120k PPD









Glad I went with AMD during this past upgrade.. lol
Much better for bitcoin mining too >.>

Now... how to make my 3570k more useful... (15k PPD =.=)


----------



## Outcasst

Quote:


> Originally Posted by *arvidab*
> 
> Do you have a bonus eligible passkey in place?


I have a passkey, not sure how to tell if it's bonus eligible or not.


----------



## Krusher33

He meant did you fold 10 units on that specific passkey yet?


----------



## Outcasst

Not SMP units, no. Done about 30 GPU ones


----------



## strych9

Getting 42k ppd with a single 5770 yay!


----------



## goodtobeking

Quote:


> Originally Posted by *PandaSPUR*
> 
> My 7970 at stock speeds getting 110-120k PPD
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Glad I went with AMD during this past upgrade.. lol
> Much better for bitcoin mining too >.>
> 
> *Now... how to make my 3570k more useful...* (15k PPD =.=)


Pentathlon









After [email protected] is over, I'm going to switch over and try some of these out. The numbers sound impressive.

EDIT: I've got 2x 6970s and a 7970, so I should get some good points.


----------



## Avonosac

Quote:


> Originally Posted by *goodtobeking*
> 
> Pentathlon
> 
> 
> 
> 
> 
> 
> 
> 
> 
> After [email protected] is over, Im going to switch over try some of these out. The numbers sound impressive
> 
> EDIT: I got 2x6970s and a 7970, should get some good points


Seriously... I'm still trying to get some time to get home and set up BOINC. I haven't seen my PC in 9 days


----------



## PandaSPUR

Quote:


> Originally Posted by *goodtobeking*
> 
> Pentathlon
> 
> 
> 
> 
> 
> 
> 
> 
> 
> After [email protected] is over, Im going to switch over try some of these out. The numbers sound impressive
> 
> EDIT: I got 2x6970s and a 7970, should get some good points


Slightly off topic, but wouldn't I be in the same situation with BOINC?
Great performance and PPD from my GPU, but comparatively useless PPD from my CPU lol.


----------



## Starbomba

Quote:


> Originally Posted by *PandaSPUR*
> 
> Great performance and PPD from my GPU, but comparatively useless PPD from my CPU lol.


At least in F@H a powerful CPU like a 3930K, or any 2P/4P rig, is worthwhile; for BOINC it ain't so. Heck, even then the differences between CPU and GPU in BOINC are abysmal, more so than in Folding@home. My 7970 can make 1M+ PPD; my 2600K is lucky to make 20k PPD.









I need to get my 2M points; after the Pentathlon I've got to "cool down" my rigs on Folding.


----------



## Avonosac

Two million!


----------



## nova4005

Quote:


> Originally Posted by *Starbomba*
> 
> At least a powerful CPU like a 3930k, or any 2p/4p rig is worthwhile, for BOINC it ain't so. Heck, even then the differences between CPU and GPU in BOINC are abysmal, more so than [email protected] My 7970 can make 1m+ PPD, my 2600k is lucky to make 20k PPD
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I need to get my 2m points, after the Pentathlon i got to "cool down" my rigs on Folding


You are right about BOINC being geared more towards GPU power for bigger PPD production. I would love an Intel 2P to stick on CPU projects like PrimaBoinca, Rosetta, and a few others I like. I mean, 32 threads added to my arsenal would increase the work done dramatically, which is why I like the CPU projects anyway; they seem to be closer to the things I like donating time to. I think the best combination is several GPUs and a multiprocessor system!


----------



## goodtobeking

Quote:


> Originally Posted by *PandaSPUR*
> 
> Slightly off topic, but wouldn't I be in the same situation with BOINC?
> Great performance and PPD from my GPU, but comparatively useless PPD from my CPU lol.


Yeah, if your aim is just total points. Most projects are CPU only, and the only way to get a lot of points is to devote your CPU cycles. Like [email protected] home: it took me forever to reach 1 million points, but no one else can reach that unless they donate some insane amount of CPU cycles as well. No GPU shortcut.

But that's not really what I was talking about. You said you want to find a good use for your CPU; the BOINC Pentathlon would be a good use for it, at least during the event. OCN could use some help. We are 9th overall now.


----------



## strych9

So my 5770 was folding at around 42k PPD for about 30 minutes after it got a Core 17 WU, but the PPD later decreased to 7k, which is the normal PPD on all other work units. I tried uninstalling the client (including the data) and reinstalling it, but the PPD is stuck at 7k. I'm using Catalyst 13.1 (unmodified) and the GPU usage is 97-99% in Afterburner. Can anybody help?


----------



## gboeds

Quote:


> Originally Posted by *strych9*
> 
> So my 5770 was folding with around 42k ppd for about 30 mins after I got a core17 wu, but the ppd later decreased to 7k, which is the normal ppd with all other work units. I tried uninstalling the client (including the data) and reinstall it but the ppd is stuck at 7k. I'm using catalyst 13.1 (unmodified) and the gpu usage is 97-99% on afterburner. Can anybody help?


I doubt there is anything wrong. The Core 17 units only give a big boost to newer, more powerful GPUs; my GTX 460s do a little worse on them than on Core 15 units.

The 42k PPD you were seeing is probably just FAHControl being glitchy... earlier my GTX 480s were reading 384k PPD each. Wish that were real!


----------



## [CyGnus]

strych9, install Cat 13.5b2, delete the WU, let it get another one, and leave it alone to see the results.


----------



## strych9

Quote:


> Originally Posted by *[CyGnus]*
> 
> strych9 install cat 13.5b2 delete the wu and let it get another one and let it be to see the results


Tried but no luck


----------



## Durquavian

My 7770 gets 28k on Core 17, 4-6k on Core 16. I am now on 13.5b2.


----------



## Bal3Wolf

Nice PPD on these, but they don't seem as stable as the other betas. I've had them act up twice; just recently they said 99% and had stopped doing anything (the log said 51% though), so I had to kill folding and all the cores to get it to resume the work.


----------



## lacrossewacker

getting some 70,000ppd from my GTX 670 



EDIT: Getting 77k PPD now


----------



## TheBadBull

Quote:


> Originally Posted by *strych9*
> 
> Quote:
> 
> 
> 
> Originally Posted by *[CyGnus]*
> 
> strych9 install cat 13.5b2 delete the wu and let it get another one and let it be to see the results
> 
> 
> 
> Tried but no luck
Click to expand...

I'm sitting at ~7k as well.

When you posted the 40k pic I said "what the.... what"


----------



## labnjab

Quote:


> Originally Posted by *lacrossewacker*
> 
> getting some 70,000ppd from my GTX 670
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Getting 77k PPD now


What clock is it at? It's been a few days since I folded on my 670, but I was getting 67k-ish PPD at 1150 MHz.


----------



## Fieldsweeper

Quote:


> Originally Posted by *lacrossewacker*
> 
> getting some 70,000ppd from my GTX 670
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> EDIT: Getting 77k PPD now


My Titan's doubling that.


----------



## NBrock

Man these new core 17 units are great. I was originally getting around 55k ppd on the core 17 with my 7970 and now I am getting a solid 129k ppd. (last 2 and a half days). I need a second 7970 STAT!!!!


----------



## Fieldsweeper

My other computer keeps getting a FAILED result.

AMD HD 7450 card.

Can't figure out why; I configured it the same as the other computer, newest client.


----------



## Scorpion49

Quote:


> Originally Posted by *Fieldsweeper*
> 
> my titans doubling that


Are you running a 7663? I have one going on my Titan now and I'm getting 52K ppd at a TPF of 1:09. Something seems to be wrong.


----------



## Fieldsweeper

No lol, I'm running a GTX Titan and getting like 140-150k PPD.


----------



## Scorpion49

Quote:


> Originally Posted by *Fieldsweeper*
> 
> no lol im running a gtx titan and getting like 140-150K ppd


So yes you are running a 7663 and yes you are getting 2.5x the PPD I am at the same clock speed. We are both also on windows 8 so I'm wondering what the heck is up with my points.


----------



## Krusher33

Quote:


> Originally Posted by *NBrock*
> 
> Man these new core 17 units are great. I was originally getting around 55k ppd on the core 17 with my 7970 and now I am getting a solid 129k ppd. (last 2 and a half days). I need a second 7970 STAT!!!!


What clocks you on?


----------



## PR-Imagery

80k PPD total on my 580s on 7663, so 40k each; is that right?


----------



## Bal3Wolf

Quote:


> Originally Posted by *Scorpion49*
> 
> So yes you are running a 7663 and yes you are getting 2.5x the PPD I am at the same clock speed. We are both also on windows 8 so I'm wondering what the heck is up with my points.


Well, he is also counting the credit from his CPU (he has 12 cores), but your 55k does look low. Maybe drivers.


----------



## AndyE

Finished my system today. Moved the Titans to a dual-Xeon system.

After enabling the -beta flag, all GPUs got 7663 as their next WU. With a decent OC (to keep the temperature below 65°C), the current estimate is 130k-140k PPD per GPU; 150k PPD per GPU seems possible. Total PPD is currently at 640k. I didn't have time today to start and configure a VM in Hyper-V to move the CPU cores into the VM and enable -bigadv there. That might get another 100-200k PPD, but I have to set up the VM first and then migrate the CPU WUs to it. If I have time, I'll do it over the weekend.










I still have the ATI 7970s. Due to their suboptimal cooling, I can't put 4 of them in one case. I'm thinking about splitting them into 2 smaller systems with entry-level CPUs just to feed them, and with 2 PCIe slots far enough apart that each of the 2 Gigabyte 7970 GEs can get enough fresh air. In tight arrangements they collapsed under heavy load.

The "Green" system


----------



## DizZz

Quote:


> Originally Posted by *PR-Imagery*
> 
> 80k ppd total on my 580s on 7663; so 40k a each, is that right?


Seems a little low. I'll throw my 580s in tomorrow and see what they get and let you know


----------



## NBrock

Quote:


> Originally Posted by *Krusher33*
> 
> What clocks you on?


Right now 1225 core and 1500 mem. It didn't really seem to change too much from 1100 core. The peak I have seen so far since I turned the core back up to 1225 was 145k PPD, and before that at 1100 it was 110-125k.


----------



## NBrock

Quote:


> Originally Posted by *AndyE*
> 
> Finished my system today. Moved the Titan's to a DualXeon system.
> 
> After enabling the -beta flag, all GPUs got as their next WU 7663. With decent OC (to keep the temperature below 65 C), current estimate is 130k-140k PPD / GPU. 150k PPD / GPU seems possible. Total PPD is currently at 640k. I didn't have time today to start and configure a VM in Hyper-V to move the CPU cores into the VM and enable -bigadv there. Might get another 100-200k PPD with this but have to setup the VM first and migrate then the CPU WUs to the VM. If I have time, I'll do it over the weekend.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I still have the ATI 7970. Due their suboptimal cooling structure, can't put 4 of them in one case. Thinking about to split them into 2 smaller systems with entry level CPUs just to feed them and 2 PCI slots so far apart, that each of the 2 Gigabyte 7970 GE can get enough fresh air. In tight arrangements they collapsed under heavy load.
> 
> The "Green" system


That is SICK!!!


----------



## DizZz

Quote:


> Originally Posted by *AndyE*
> 
> Finished my system today. Moved the Titan's to a DualXeon system.
> 
> 
> Spoiler: Warning: Spoiler!
> 
> 
> 
> The "Green" system


That's so nice! You should get those titans wet


----------



## Krusher33

Quote:


> Originally Posted by *NBrock*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Krusher33*
> 
> What clocks you on?
> 
> 
> 
> Right now 1225 core and 1500 mem. It didn't really seem to change too much from 1100 core. The peak I have seen so far since i turned the core back up to 1225 was 145k ppd and before that at 1100 it was 110-125k
Click to expand...

Ok, thanks. As soon as the Einstein portion is done in the BOINC event, I'm switching over to Folding@home and upping the clocks. Apparently Einstein doesn't like overclocks much, so I'm sitting at stock clocks. Very frustrating.


----------



## NBrock

Quote:


> Originally Posted by *Krusher33*
> 
> Ok thanks. As soon as the Einstein portion is done in the BOINC event, I'm switching over to [email protected] and upping the clocks. Apparently Einstein don't like overclocks much and so I'm sitting at stock clocks. Very frustrating.


The highest I have been folding stable 24/7 was 1300 core and 1625 mem. I would like to turn it up more but I will wait til this unit is done...just in case. I don't want to get an error.


----------



## lacrossewacker

Quote:


> Originally Posted by *labnjab*
> 
> What clock is it at? Its been a few days since I folded on my 670 but I was getting 67k ish ppd at 1150 mhz


It's at 1267 MHz but only around 84% usage. When it's at 100% usage I can't keep the heat down enough to keep it from downclocking another 13 MHz, because of my ambient temps. Oh well.


----------



## Avonosac

Quote:


> Originally Posted by *Fieldsweeper*
> 
> my titans doubling that


I'm rocking 160k on my titan at 1150mhz.. It is pretty damn nice


----------



## labnjab

Quote:


> Originally Posted by *PR-Imagery*
> 
> 80k ppd total on my 580s on 7663; so 40k a each, is that right?


That seems low for 580s. My gtx 570 classifieds at 875 get 41k ppd each

Quote:


> Originally Posted by *lacrossewacker*
> 
> It's at 1267mhz but only around 84% usage. When it's at 100% usage I can't keep the heat down enough to keep it from downclocking another 13 mhz because of my ambient temps. Oh well


I just got my 670 FTW Monday and haven't had a chance to play around with overclocking it. It's at 1150 now, which is the factory boost clock. At what temps do 670s usually throttle back? I'm running [email protected] on it now, which brings it up to 90-95% and only 50°C temps.


----------



## Shogon

Quote:


> Originally Posted by *PR-Imagery*
> 
> 80k ppd total on my 580s on 7663; so 40k a each, is that right?


Are you stock or overclocked? My 580 at 927 MHz is around 54k PPD.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Avonosac*
> 
> I'm rocking 160k on my titan at 1150mhz.. It is pretty damn nice


nice but 2 7970s can rock 250K







j/k these new cores really are nice.


----------



## martinhal

Quote:


> Originally Posted by *Bal3Wolf*
> 
> nice but 2 7970s can rock 250K
> 
> 
> 
> 
> 
> 
> 
> j/k these new cores really are nice.


And my three can do 375k PPD for the price of a Titan, but I guess the Titan will murder me in benches and gaming.

I'm now a top-13 producer thanks to Core 17.


----------



## PR-Imagery

Hmm, guess that's right then; 824 MHz is putting out 44k on one card.


----------



## Avonosac

Hehe, yeah... the Titan is still pretty bad in the price/performance category, but this new WU seems to be good for everyone.


----------



## lacrossewacker

Quote:


> Originally Posted by *labnjab*
> 
> That seems low for 580s. My gtx 570 classifieds at 875 get 41k ppd each
> I just got my 670 FTW Monday and haven't had a chance to play around with overclocking it. Its at 1150 now which is the factory boost clock. At what temps do 670s usually throttle back? I'm running [email protected] on it now which brings it up to 90-95% and only 50C temps


AFAIK, all the Keplers throttle 13 MHz at 70°C (I think the Titan may have raised this limit, though).


----------



## Fieldsweeper

Quote:


> Originally Posted by *Scorpion49*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fieldsweeper*
> 
> no lol im running a gtx titan and getting like 140-150K ppd
> 
> 
> 
> So yes you are running a 7663 and yes you are getting 2.5x the PPD I am at the same clock speed. We are both also on windows 8 so I'm wondering what the heck is up with my points.
Click to expand...

lol, I thought you meant an AMD card ahahah, not the WU number lmao.

But yeah, that total includes my CPU, so it's my total overall PPD.

What proc do you have?

I have everything running at full blast: CPU at 100%, GPU at 100%.

Also, the GPU is at stock settings with the driver that was either on the disc or auto-updated (314.22).


----------



## Fieldsweeper

Quote:


> Originally Posted by *AndyE*
> 
> Finished my system today. Moved the Titan's to a DualXeon system.
> 
> After enabling the -beta flag, all GPUs got as their next WU 7663. With decent OC (to keep the temperature below 65 C), current estimate is 130k-140k PPD / GPU. 150k PPD / GPU seems possible. Total PPD is currently at 640k. I didn't have time today to start and configure a VM in Hyper-V to move the CPU cores into the VM and enable -bigadv there. Might get another 100-200k PPD with this but have to setup the VM first and migrate then the CPU WUs to the VM. If I have time, I'll do it over the weekend.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I still have the ATI 7970. Due their suboptimal cooling structure, can't put 4 of them in one case. Thinking about to split them into 2 smaller systems with entry level CPUs just to feed them and 2 PCI slots so far apart, that each of the 2 Gigabyte 7970 GE can get enough fresh air. In tight arrangements they collapsed under heavy load.
> 
> The "Green" system


Wow, that's nice. How much was that setup, aside from the 4k in vid cards? lol

What are the specs?

Also, what is -bigadv?


----------



## lacrossewacker

Quote:


> Originally Posted by *AndyE*
> 
> Finished my system today. Moved the Titan's to a DualXeon system.
> 
> After enabling the -beta flag, all GPUs got as their next WU 7663. With decent OC (to keep the temperature below 65 C), current estimate is 130k-140k PPD / GPU. 150k PPD / GPU seems possible. Total PPD is currently at 640k. I didn't have time today to start and configure a VM in Hyper-V to move the CPU cores into the VM and enable -bigadv there. Might get another 100-200k PPD with this but have to setup the VM first and migrate then the CPU WUs to the VM. If I have time, I'll do it over the weekend.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I still have the ATI 7970. Due their suboptimal cooling structure, can't put 4 of them in one case. Thinking about to split them into 2 smaller systems with entry level CPUs just to feed them and 2 PCI slots so far apart, that each of the 2 Gigabyte 7970 GE can get enough fresh air. In tight arrangements they collapsed under heavy load.
> 
> The "Green" system


Sweet jesus! My computer is a TI-83 in comparison!


----------



## Scorpion49

Quote:


> Originally Posted by *Fieldsweeper*
> 
> lol I thought you meant and AMD card ahahah, not the WU number lmao.
> 
> but ya, that total includes my cpu, so its my total overall ppd.
> 
> what proc you have?
> 
> i have everything running at full blast. cpu at 100% gpu at 100%
> 
> also the gpu is stock settings with the driver that was either on the disc or auto updated. (314.22)


CPU is 3570k but I was not folding that. GPU was running at 98-99% the whole time, just PPD was low. I've been seeing stats like this reported though for that WU:



Spoiler: Warning: Spoiler!



Titan @ Stock

Project ID: 7663 (R0, C15, G0)
Core: ZETA
Credit: 1600
Frames: 100

Name: Titan Slot 00
Path: 172.16.1.38-36330
Number of Frames Observed: 40

Min. Time / Frame : 00:01:09 - 143,988.8 PPD
Avg. Time / Frame : 00:01:11 - 137,947.9 PPD
Cur. Time / Frame : 00:01:12 - 114,380.5 PPD
R3F. Time / Frame : 00:01:11 - 116,310.5 PPD
All Time / Frame : 00:01:11 - 116,310.5 PPD
Eff. Time / Frame : 00:02:15 - 52,599.7 PPD


----------



## AndyE

Quote:


> Originally Posted by *Fieldsweeper*
> 
> whats the specs?


MB:Asus Z9PE-D8
CPU: 2 x E5-2687W (2x 8 core, plus HT; 3.1GHz normal frequency, 3.4GHZ turbo for all 8 cores, 3.8GHz with 1-2 cores)
Cooling: 2x Corsair H90
RAM: 128 GB ECC DDR3 1600MHz
Case: Cooler Master HAF-X NVidia Edition
PSU: Enermax Platimax 1500W (90%)
SSD: only for OS, Samsung 830 256GB
Data: All data is stored in a pretty fast homeserver
OS: Win8 Pro with Hyper-V enabled

Temp:
The CPUs stay below 60°C when fully loaded (e.g. with Linpack or y-cruncher)
The GPUs need a bit higher fan speed to stay below 65°C with the new FAHCore. I'm currently running the fans at 75% (the 2 cards in the center) and 70% for the ones at the edge

Power - measured at the wall outlet:
idle: 160 watt
CPU maxed out: 500 watt
All 4 GPUs and CPUs loaded with the current FAH workload: 1000-1100 watt
The current x17 workload needs about 70% of TDP. I expect that over time with more optimizations by the FAH team and NVidia's OpenCL driver team this number will go up. A 1200 watt PSU is not sufficient for a quad setup (tested).
Quote:


> also what is -bigadv


This FAH option requests the big and advanced work units, which earn higher bonus awards. It only works in the 64-bit Linux client, since Stanford hasn't released a 64-bit version for Windows yet. But the GPU drivers are better on Windows. To get the best of both worlds, I've learned that people run their system on Windows (for the GPUs) and let the Linux client run in a VM. With this approach, higher PPD is possible. The minimum number of cores is 16, and there are some minimum speed thresholds. I think my CPUs are fast enough to get over that minimum. If they are, the same amount of work will probably earn 200-300k PPD for the CPU workloads. So it is better to stop the CPU work in the Windows client and assign all capacity to the VM (all = all minus the 4 cores needed to drive the GPUs)

Andy


----------



## Fieldsweeper

Quote:


> Originally Posted by *AndyE*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fieldsweeper*
> 
> whats the specs?
> 
> 
> 
> MB:Asus Z9PE-D8
> CPU: 2 x E5-2687W (2x 8 core, plus HT; 3.1GHz normal frequency, 3.4GHZ turbo for all 8 cores, 3.8GHz with 1-2 cores)
> Cooling: 2x Corsair H90
> RAM: 128 GB ECC DDR3 1600MHz
> Case: Cooler Master HAF-X NVidia Edition
> PSU: Enermax Platimax 1500W (90%)
> SSD: only for OS, Samsung 830 256GB
> Data: All data is stored in a pretty fast homeserver
> OS: Win8 Pro with Hyper-V enabled
> 
> Temp:
> The CPUs are below 60 C when fully loaded (like with Linpack or y-crunch)
> The GPUs need a bit higher fan speed to stay below 65C with the new FAHcore. Im currently running the fans at 75% (2 cards in the center), respective 70% at the edge
> 
> Power - measured at the wall outlet:
> idle: 160 watt
> CPU maxed out:: 500 watt
> All 4 GPUs and CPUs loaded with the current FAH workload: 1000-1100 watt
> The current x17 workload needs about 70% of TDP. I expect that over time with more optimizations by the FAH team and NVidia's OpenCL driver team this number will go up. A 1200 watt PSU is not sufficient for a quad setup (tested).
> Quote:
> 
> 
> 
> also what is -bigadv
> 
> Click to expand...
> 
> This FAH option triggers to get the big and advanced work units which get higher bonus awards. This option only works in the Linux client (64bit) while Stanford hasn't released a 64bit version for Windows yet. But the driver for the GPUs is better in the Windows version. To get the best from both worlds, I've learned that people run their system in Windows (for the GPUs) and let the Unix version run in a VM. With this apoproach, higher PPDs are possible. Min # of cores need to be 16 and there are some minimum speed thresholds. Think my CPUs are fast enough to get over this minimum level. If they are the same amount of work will probably get 2-300k PPD for the CPU workloads. So it is better to stop the CPU work in the WIndows version and assign all capacity to the VM (all = all minus 4 cores needed to drive the GPUs)
> 
> Andy
Click to expand...

damn man you need more rep


----------



## Fieldsweeper

can someone tell me what changed since yesterday?

how do you go from 145K PPD to half that?

the ONLY thing that changed was a computer reboot like a few hours ago


----------



## Hukkel

Perhaps it stopped folding for a few hours which made you lose a big chunk of your bonus?


----------



## martinhal

Quote:


> Originally Posted by *Fieldsweeper*
> 
> 
> 
> can someone tell me what changed since yesterday?
> 
> how do you go from 145K PPD to half that?
> 
> the ONLY thing that changed was a computer reboot like a few hours ago


Look at your CPU - you are folding with 12 cores. Set it to fold with 11 instead.

Configure > Slots > click on CPU > and change the number of threads: -1 lets the client auto-select, or set it manually to 11.


----------



## Fieldsweeper

Quote:


> Originally Posted by *martinhal*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fieldsweeper*
> 
> 
> 
> can someone tell me what changed since yesterday?
> 
> how do you go from 145K PPD to half that?
> 
> the ONLY thing that changed was a computer reboot like a few hours ago
> 
> 
> 
> Look at you cpu you are folding with 12 cores. Set it to fold with 11 cores.
> 
> Configure > Slots>click on cpu> and change the number of treads to - 1 of the client to auto select or manually to 11.
Click to expand...

I have always had it at -1 and it's always been at 12 - see below:



nvm just checked it, all seems good.

idk why it would have done that


----------



## AndyE

Strange.

I just assembled a small system with an i5-3470 CPU and 2 ATI 7970 cards.

Configured everything like the other system, and both GPUs got 7663 work. So far so good.
The 7970s do a TPF of 85 to 90 seconds; the Titans do 70-75 seconds - 10-15% apart.

Why the Titans get 140k PPD while the 7970s get only 30k PPD is not obvious. Is there a threshold in TPF where the bonus kicks in - i.e. when the card is faster than 80 seconds?

What are the TPF times of those with ATI 7970 cards which manage to get >100k PPD?

This is the current picture


----------



## Gungnir

Quote:


> Originally Posted by *AndyE*
> 
> Strange.
> 
> I just assembled a small system with an i5-3470 CPU and 2 ATI 7970 cards.
> 
> Configured all like with other system. got 7663 work for both CPU. So far so good.
> The 7970 do a TPF of 85 to 90 seconds, the Titans do 70-75 seconds. 10-15% apart.
> 
> Why the Titans get 140k PPD and the ATI 7970 only 30k PPD is not obvious. Is there a threshold in TPF were the bonus kicks in - i.e. when the card is faster than 80 seconds?
> 
> What are the TPF times of those with ATI 7970 cards which manage to get >100k PPD?
> 
> This is the current picture


Those 7970s should be getting way more than 30k PPD. Are they both at 99% usage? Also, what drivers are you using?


----------



## AndyE

Quote:


> Originally Posted by *Gungnir*
> 
> Those 7970s should be getting way more than 30k PPD. Are they both at 99% usage? Also, what drivers are you using?


This is what I expected as well.
Driver is 13.4

Performance according to TPF times seems to be OK. Being just 20% slower than the Titan should bring in 20% less PPD - unless there is a threshold in the core that says: if the time to finish is below 80 sec TPF (8000 sec total), another bonus level kicks in.

My question to those who achieved more than 100k PPD with the ATI cards: What kind of TPF times did you get?

thanks,
Andy


----------



## martinhal

Getting 1 min 14 to 1 min 17 TPF, around 125 to 127k PPD, on a 7970


----------



## AndyE

Quote:


> Originally Posted by *martinhal*
> 
> Getting 1 min 14 to 1 min 17 around 125 to 127 K ppd on 7970


Thanks.
Is this OC or default speed?


----------



## Fieldsweeper

I want to see someone break 1 million PPD.

I have seen one guy here with damn near 650k PPD, so SOMEone COULD probably do it.

If you could somehow get 4 actual 690s (8 GPUs) to work, THAT would probably easily do it,

especially on a dual-CPU server board with massive OCing on the CPUs and GPUs









Oh yeah, if he had two of those, sure, easy - I mean on ONE system



Spoiler: Warning: Spoiler!


----------



## labnjab

arvidab can do close to 1,500,000 with all his rigs running, but he's been running BOINC on them for the Pentathlon, so his daily folding average has dropped


----------



## Gungnir

Quote:


> Originally Posted by *Fieldsweeper*
> 
> i wanto to see someone who breaks 1 mill ppd,
> 
> i have seen one guy here with damn near 650K ppd, so SOME one COULD probably do that.
> 
> if you could get 4 actual 690s (8 gpus) somehow to work THAT would probably eaisly do it.
> 
> especially on a server dual cpu board and massive OCing on cpu and gpu
> 
> 
> 
> 
> 
> 
> 
> 
> 
> oh ya if he had two of those sure easy, I mean on ONE system
> 
> 
> 
> Spoiler: Warning: Spoiler!


4 7990s could break 1m PPD with overclocking (110+k per GPU, 8 GPUs), 4 690s would probably have more trouble (they do ~160k PPD each, I believe). Of course, that's assuming you could _find_ four 7990s, which wouldn't be easy.


----------



## arvidab

Quote:


> Originally Posted by *AndyE*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Gungnir*
> 
> Those 7970s should be getting way more than 30k PPD. Are they both at 99% usage? Also, what drivers are you using?
> 
> 
> 
> This is what I expected as well.
> Driver is 13.4
> 
> Performance according to TPF times seems to be ok. Just 20% slower than the Titan should bring in 20% less PPD - unless there is a threshold in the core which says: if Time to finish is below 80 sec TPF (8000 sec total), then another bonus level kicks.
> 
> My question to those who achieved more than 100k PPD with the ATI cards: What kind of TPF times did you get?
> 
> thanks,
> Andy
Click to expand...

With the way QRB works, 20% slower won't be 20% less PPD; it will be more than 20% less. I don't know how much exactly, though, as I am unfamiliar with how Stanford calculates it.

That said, with a true TPF of 90 sec you should be just shy of 100k PPD (96.6k). The PPD prediction in HFM is much better than the one in FAHControl.
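For the curious, the commonly cited QRB shape is final credit = base × max(1, √(k × timeout / elapsed)), which is why PPD falls faster than TPF rises. A rough sketch - the base credit, k-factor, and timeout below are made-up illustration values, not project 7663's real ones, and the exact server-side formula may differ:

```python
import math

def qrb_ppd(base_credit, k_factor, timeout_days, tpf_seconds, frames=100):
    """Estimated PPD under the commonly cited QRB shape:
    credit = base * max(1, sqrt(k * timeout / elapsed))."""
    wu_days = tpf_seconds * frames / 86400.0             # days to finish one WU
    bonus = max(1.0, math.sqrt(k_factor * timeout_days / wu_days))
    return base_credit * bonus / wu_days                 # credit per WU / days per WU

fast = qrb_ppd(1600, 2.0, 3.0, 72)        # 72 s TPF
slow = qrb_ppd(1600, 2.0, 3.0, 72 * 1.2)  # 20% slower TPF
# PPD scales roughly with TPF^-1.5, so 20% slower costs about 24% of the PPD
```

Under this shape, halving the TPF nearly triples the PPD, which matches how sensitive core 17 PPD is to small TPF changes.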

Quote:


> Originally Posted by *Fieldsweeper*
> 
> i wanto to see someone who breaks 1 mill ppd,
> 
> i have seen one guy here with damn near 650K ppd, so SOME one COULD probably do that.
> 
> if you could get 4 actual 690s (8 gpus) somehow to work THAT would probably eaisly do it.
> 
> especially on a server dual cpu board and massive OCing on cpu and gpu
> 
> 
> 
> 
> 
> 
> 
> 
> 
> oh ya if he had two of those sure easy, I mean on ONE system
> 
> 
> 
> Spoiler: Warning: Spoiler!


Yeah, someone donate a couple of Titans - I might get away with one - and when they're foldable in Linux, sure. I only need ~150k more on one rig to break the 1 mil marker.

I was at about 1.5mil PPD with a few machines going though.


----------



## AndyE

Quote:


> Originally Posted by *arvidab*
> 
> With the way QRB works, 20% slower won't be 20% less PPD, it will be more than 20% less. I don't know how much though as I am unfamiliar with how Stanford calculates exactly.
> 
> That said, with a true TPF of 90sec you should be just shy of 100k PPD (96.6k). The PPD prediction on HFM is much, better than the one in FahControl.


Thanks.

Is there a way to check how many points each finished WU I submitted actually got? Not forecasted PPD, but after-the-fact points?


----------



## Wheezo

Quote:


> Originally Posted by *AndyE*
> 
> Thanks.
> 
> Is there a way to check how many points each submitted and finished WU I submitted really got? Not forecasted PPD, but after-the fact PPDs?


http://folding.extremeoverclocking.com/individual_list.php?s=

Just search your name on the left-hand side.


----------



## AndyE

Quote:


> Originally Posted by *Wheezo*
> 
> http://folding.extremeoverclocking.com/individual_list.php?s=
> 
> Just search your name on the left-hand side.


Thank you.
Unfortunately, it doesn't work - it wouldn't find my folding name.

Looking at the table, it looks like it's based on aggregated numbers (24 hr, week, month, ...).

What I would like to see is the list of WUs I submitted (WU number, points received, etc.)

rgds,
Andy


----------



## ZDngrfld

Quote:


> Originally Posted by *AndyE*
> 
> Thank you.
> Unfortunately, it doesn't work. Wouldn't find my folding name.
> 
> Looking at the table, it looks like its based on 24hr aggregated numbers (and week, month, ..) are shown.
> 
> What I would like to see is the list of WUs i submitted with (WU number, points received, etc ....)
> 
> rgds,
> Andy


HFM has a WU History Viewer built in that will show that kind of stuff


----------



## arvidab

Quote:


> Originally Posted by *ZDngrfld*
> 
> Quote:
> 
> 
> 
> Originally Posted by *AndyE*
> 
> Thank you.
> Unfortunately, it doesn't work. Wouldn't find my folding name.
> 
> Looking at the table, it looks like its based on 24hr aggregated numbers (and week, month, ..) are shown.
> 
> What I would like to see is the list of WUs i submitted with (WU number, points received, etc ....)
> 
> rgds,
> Andy
> 
> 
> 
> HFM has a WU History Viewer built in that will show that kind of stuff
Click to expand...

Yes, but afaik it's still estimates, though they are probably pretty close to what you actually get.

It may take a day before you show up on the stats, Andy.


----------



## PandaSPUR

Any way to fix HFM so it actually shows PPD using these new WUs?

Or I'll just stick to using FAHControl for everything. no big deal.


----------



## Wheezo

Quote:


> Originally Posted by *PandaSPUR*
> 
> Any way to fix HFM so it actually shows PPD using these new WUs?
> 
> Or I'll just stick to using FAHControl for everything. no big deal.


Edit > Preferences > Options tab > Change "calculate PPD based on" to "Effective rate"



Then add a "C" to the "Project Download URL" space in the "Web Settings" tab.



Then Download the new project values- Tools > "Download Projects from Stanford"

Should now show the proper values in HFM.

Hope that makes sense lol.


----------



## ZDngrfld

You shouldn't have to do the effective rate thing any longer. I'm running last frame and it works fine. You will have to do the Project Download URL, though


----------



## Wheezo

Quote:


> Originally Posted by *ZDngrfld*
> 
> You shouldn't have to do the effective rate thing any longer. I'm running last frame and it works fine. You will have to do the Project Download URL, though


I had no idea that part isn't necessary any more, thanks for the clarification


----------



## DizZz

Quote:


> Originally Posted by *labnjab*
> 
> Arvidab can do close to 1,500,000 with all his rigs running, but hes been running Boinc on his rigs for the pentathlon so his daily folding average has dropped


After scubadiver builds his second 4P system this weekend, he'll be pumping out well over 1 mil as well


----------



## PandaSPUR

Quote:


> Originally Posted by *Wheezo*
> 
> Edit > Preferences > Options tab > Change "calculate PPD based on" to "Effective rate"
> 
> 
> 
> Then add a "C" to the "Project Download URL" space in the "Web Settings" tab.
> 
> 
> 
> Then Download the new project values- Tools > "Download Projects from Stanford"
> 
> Should now show the proper values in HFM.
> 
> Hope that makes sense lol.


Quote:


> Originally Posted by *ZDngrfld*
> 
> You shouldn't have to do the effective rate thing any longer. I'm running last frame and it works fine. You will have to do the Project Download URL, though


Ahah, that second part is what I couldn't find info on. Thanks, rep to you both.


----------



## rollingdice

Just got my core_17. It yields 24k PPD on my 560 Ti... Going to revert to normal WUs.


----------



## aas88keyz

Just received my other GTX 560 Ti 448 in the mail today. I don't know that it matters that they are set to SLI, but they are, and both are averaging 32k PPD apiece. That, with a bump from my FX-8120 (folding on 6 cores) and my other folding rig (Phenom II X4 965 BE), puts me at 94 to 96k PPD total. This changes a whole lot for me. Too bad my SLI setup does nothing for my game performance - though I am first and foremost a folder/cruncher for special events and barely game a handful of times a month.

Keep on foldin'!


----------



## Fieldsweeper

I woke up this morning and noticed these temps.

Why would the mobo keep reporting -1 degrees?

I paused all the folding a few seconds before the picture was taken, and then the proc dropped to the low 20s - I think that's as low as I have ever seen it. However, HWMonitor shows different temps; which should I trust?

What could cause that? Usually there's a 5-10 degree spread between the temps shown in AI Suite and HWMonitor Pro.


----------



## [CyGnus]

Core 17 powah! HD 7870 @ 1200/1375 pumping out 65-66k with only 161W (Kill A Watt)


----------



## Fieldsweeper

Quote:


> Originally Posted by *Fieldsweeper*
> 
> I woke up this morning and noticed these temps?
> 
> why would the mobo continue to say -1 degrees?
> 
> i paused all the foldging a few seconds before the picture was take, but then the proc dropped to the low 20s i think thats as low as I have ever seen it, however hardware monitor shows different temps, which should I trust?
> 
> what could cause that, usually there is a 5-10 degree or so spread sometimes in the temp shown on the Ai suite and hWmonitor pro.


Bumping,

not sure if you folders have ever had a similar issue, or where I should go with this


----------



## Durquavian

Quote:


> Originally Posted by *Fieldsweeper*
> 
> bumping,
> 
> not sure if you folders ever had a similar issue or where I should go


Actually, a lot of people have issues with CPUID. Our advice is always to get HWiNFO64 (or 32) - it's way better than the others and seems to have fewer issues.


----------



## Fieldsweeper

Quote:


> Originally Posted by *Durquavian*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fieldsweeper*
> 
> bumping,
> 
> not sure if you folders ever had a similar issue or where I should go
> 
> 
> 
> Actually alot of people have issues with CPUiD. Our advice is always get HWiNFO64 or 32 it is way better than all others and seems to have less issues.
Click to expand...

CPUID? You mean the brand or something?

I'm more worried about AI Suite, as the mobo is showing -1°C lol. I wish it was that cold.

Idk why, but I tend to think that sticking with the hardware maker's software (lol) is ideal - just me though.

I mean, is it safe to assume that AI Suite is better for monitoring a RIVE mobo?

I can't help but believe it was specifically designed for it.


----------



## Fieldsweeper

A side note question though:

I KNOW a GTX 690 is a dual-GPU card (a single PHYSICAL card that is technically running in SLI internally), so the max is 2 physical cards for 4-way SLI, which is the max number of GPUs allowed in SLI.

My question is: after seeing that you can technically mix and match GPUs, just not in SLI (for example using one as a PhysX processor), would it be possible

to put in 4 GTX 690s (each in its own internal 2-way SLI), or 2 sets of 4-way SLI, and use each of those CARDS separately in the FAH client? Or run one 4-way SLI set and another 4-way SLI set in another client on the same computer? I can't help but think it should work.

The benefits would be great if you could run x16/x16/x8/x8, or full x16 with a ROG Xpander or the ASRock Extreme11, or maybe the RIVE at x16/x8/x8/x8. Imagine what PPD you could get with 4 single GTX 690s, each as a separate unit (no SLI other than the card's internal dual-GPU link), or as 2 4-way SLI setups. NOT 8-way SLI - I know that's not possible - but something along these lines has to be doable; how else are they using tonnes of them in supercomputers and render farms?


----------



## cam51037

Quote:


> Originally Posted by *Fieldsweeper*
> 
> A side note question though:
> 
> i KNOW A GTX 690 is a dual GPU card (a single PHYSICAL card that is running in SLI mode technically speaking, SO the max is 2 physical cards for 4 way SLI, which is the max number of GPU's allowqed in sli
> 
> my question is, after seeing that you can technically mix and match gpu's just not in sli, for exaple using one as a physX processor.
> 
> would it be possible
> 
> to put 4 gtx 690's (each in their own 2 way sli) or 2 sets of 4 way sli and use each of those CARDS separatly on fah core, or use a 4 way sli, and another 4 way sli on another clien in the same computer, I can;t help but think it should work.
> 
> the benefits would be great if you could run in x16 x16 8 8 or full x16 with ROX Xpander or the asrock extreme11, or maybe the RIVE and just go x16,x8,x8,x8, imagine what you could get PPD with 4 single gt690s each as a separate unit (no sli other than the internal sli with the dual gpu aspect) or as 2 - 4way sli set ups. NOT 8way sli, i know thats not possible, but something along this lines has to be doable, how else are they using tonnes of them in super computers and render farms.


Yeah that would work. Not sure how much power they pull, you'd need a big PSU though.


----------



## Fieldsweeper

Oh, I'm sure of it, but an extra 850+ watt PSU just for the extra 2 video cards / W/C would be fine, I'm sure lol


----------



## Hemi177

Just got some of the new WUs on my 7950, and at stock I was getting 71k PPD. But when it got to the end, the work unit stopped at 99.99% and hasn't moved since. Tried a reboot, and tried the waiting game for 2 hrs, but no dice. Anybody have any tips? Should I just remove the slot and re-add it? Also, despite the high PPD I was getting rubbish credit returns, although this was my first beta WU, so I assume the bonus has to build up?


----------



## AndyE

Quote:


> Originally Posted by *Fieldsweeper*
> 
> would it be possible
> 
> to put 4 gtx 690's (each in their own 2 way sli) or 2 sets of 4 way sli and use each of those CARDS separatly on fah core, or use a 4 way sli, and another 4 way sli on another clien in the same computer, I can;t help but think it should work.
> 
> the benefits would be great if you could run in x16 x16 8 8 or full x16 with ROX Xpander or the asrock extreme11, or maybe the RIVE and just go x16,x8,x8,x8, imagine what you could get PPD with 4 single gt690s each as a separate unit (no sli other than the internal sli with the dual gpu aspect) or as 2 - 4way sli set ups. NOT 8way sli, i know thats not possible, but something along this lines has to be doable, how else are they using tonnes of them in super computers and render farms.


Sure,
the data over the SLI connector is used to control AFR (Alternate Frame Rendering). For compute-oriented jobs, SLI is not needed, but you run into other dependencies.

Depending on the communication pattern of the application, there might be other scaling barriers, like memory bandwidth of the host. If the application is embarrassingly parallel, there is not much interaction between the GPUs, each individually churning happily along.

WRT PCIe, my experience so far with FAH (only doing it a few days now) is that it is quite insensitive to PCIe speed.
To cut a long story short: in CFD codes (computational fluid dynamics) your system with 4x 690s might be a bit out of balance, but for FAH, which isn't using double-precision FP (a design limitation of the 680 and 690 made by NVidia), you might get very good results.

On power:
You need 75 watts delivered per socket by the motherboard. If you have a second PSU in mind, you need to get the second PSU to start in sync with the first one. There are solutions out there; it just doesn't work out of the box.
If my experience with the Titans is an indication for such a setup, a 1500 watt PSU might suffice for the time being. The current code drives the Kepler GPUs at approx. 70-75% of their TDP.
375 watt TDP per card * 75% * 4 = 1125 watts. If you use the CPU only for feeding the GPUs, the energy budget with a 1500 watt PSU is OK.
Caveat: when the OpenCL drivers get better over time and FAH improves its computational density, you will have to change to a 2-PSU setup.
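The budget arithmetic above can be sketched as a quick check. The 150 W CPU/platform allowance below is my own assumption, not a measurement from this thread:

```python
def psu_headroom_w(card_tdp_w, gpu_load_fraction, n_cards, platform_w, psu_w):
    """Remaining PSU capacity after GPU draw (TDP * observed load fraction)
    and a flat CPU/platform allowance. All values in watts."""
    gpu_draw = card_tdp_w * gpu_load_fraction * n_cards
    return psu_w - (gpu_draw + platform_w)

# 375 W TDP per GTX 690, ~75% of TDP under the current core 17 code,
# four cards, assumed ~150 W for an entry-level CPU and platform:
headroom = psu_headroom_w(375, 0.75, 4, 150, 1500)  # 1500 - (1125 + 150) = 225
```

225 W of headroom is thin; per the caveat above, better drivers would push the GPU draw toward full TDP (1500 W for the cards alone) and force a two-PSU setup.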


----------



## Fieldsweeper

Well, here is the system - it's a 4P (64-thread) board running 6 GTX 690s independently:



Details:


Spoiler: Warning: Spoiler!



I wish


----------



## AndyE

Quote:


> Originally Posted by *Fieldsweeper*
> 
> Well here is the system its a 4P (64 threads) board and running 6 gtx 690's independantly:


Congrats









1.35 PPPD (peta points per day)!
Basically you've managed to do the global historic output of millions of folders in a few seconds... That's something to celebrate... you've done your share, buddy...


----------



## AndyE

Can't compete with the ultra fast rig of fieldsweeper, but I found the root of the issue with my low PPD.

I built 2 identical systems. Each:
CPU: Celeron G1610 (not folding, just feeding the GPUs)
GPU: 2 x AMD 7970

Power (measured at the wall outlet):
idle, no GPUs: 30 watt
idle, 2 GPUs: 80 watt
running FAH: 380 watt

So far, so good. Same software, same configuration.

Both systems had quite stable TPFs of approx. 1 min 26 sec on project 7663.
Yet the PPD reported by FAHControl diverged wildly between the 2 systems:

System 1:


System 2:


By "accident" I realized that one of the 2 newly set up systems had a wrong date set in CMOS. It was one day ahead of the real date.

Changing the date back to the correct one did 2 things:
1) It killed the running FAH client
2) After a reboot, the PPD went up to be in line with system 2.

Lesson learned:
If your PPD seems unnaturally low, check your time and date setting .....

Here are the twins: Twin1 and Twin2


The entry-level CPU is fast enough to feed two GPUs. Adding a faster CPU to fold on would make the efficiency worse (cost, energy).



Andy


----------



## Fieldsweeper

lol, mine was a joke









BUT that's so weird that the date being off made it report way fewer points. I wonder if it was a conflict between server and client, and whether you actually got the points it showed, or the server gave you the points you should have had


----------



## DizZz

Quote:


> Originally Posted by *AndyE*
> 
> Can't compete with the ultra fast rig of fieldsweeper, but I found the root of the issue with my low PPD.
> 
> Andy


Why aren't you folding for OCN? Nice rigs btw


----------



## Asustweaker

The PPD was off because of the date, due to the frame time/download time calculation. The client calculates from when the WU was downloaded vs. when it is projected to finish. If the date is off by a day, it figures you downloaded the unit a day earlier and calculates the points from that time.
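That squares with the date bug: if the bonus estimate divides by the elapsed time since download, a clock a day off inflates the apparent elapsed time and crushes the estimated bonus. A toy sketch - the k-factor, timeout, and QRB shape here are illustrative assumptions, not Stanford's actual values:

```python
import math

def bonus_estimate(k_factor, timeout_days, apparent_elapsed_days):
    """Client-side bonus multiplier estimate under the commonly cited
    QRB shape; it shrinks as the apparent elapsed time grows."""
    return max(1.0, math.sqrt(k_factor * timeout_days / apparent_elapsed_days))

true_elapsed = 0.1                    # ~2.4 hours of real folding time
skewed_elapsed = true_elapsed + 1.0   # system date reads one day ahead

normal = bonus_estimate(2.0, 3.0, true_elapsed)    # ~7.7x multiplier
skewed = bonus_estimate(2.0, 3.0, skewed_elapsed)  # ~2.3x multiplier
```

Presumably the server credits the finished WU from its own timestamps, which would explain why only the forecast was off - though, as noted above, the effect on the actual award is unconfirmed.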


----------



## Fieldsweeper

Hmmm, try setting your clock back a few hours or days then, hehehe lol


----------



## DizZz

Quote:


> Originally Posted by *Fieldsweeper*
> 
> hmmm try setting you clock back a few hours or days then Hehehehe lol


Would that actually work?! I kind of want to test that out..


----------



## Fieldsweeper

Quote:


> Originally Posted by *DizZz*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fieldsweeper*
> 
> hmmm try setting you clock back a few hours or days then Hehehehe lol
> 
> 
> 
> Would that actually work?! I kind of want to test that out..
Click to expand...

That's what I'm saying lol - if it were that simple I'd laugh

also here is my next folding machine project:


Spoiler: Warning: Spoiler!






Spoiler: Warning: Spoiler!



LOL, Jk again


----------



## arvidab

Quote:


> Originally Posted by *DizZz*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Fieldsweeper*
> 
> hmmm try setting you clock back a few hours or days then Hehehehe lol
> 
> 
> 
> Would that actually work?! I kind of want to test that out..
Click to expand...

Yea, sure go ahead and do it!









Nope, it won't work, as it goes off the times Stanford's servers are running at. However, if you were able to change those...


----------



## Asustweaker

I really wonder if setting the clock back would cause you to lose points. Would the server timestamp your download internally, or read off the system clock?

So in theory, if your clock was off when you downloaded the unit, but you fixed it before it finished, you could lose points??? I don't think that's right. If he fixed his clock and the PPD went to where it should be, then it only works off the server clock.

I don't know... now I'm all confused


----------



## DizZz

Quote:


> Originally Posted by *arvidab*
> 
> Yea, sure go ahead and do it!
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Nope, it won't work as it goes of the times Stanfords servers are running at. However, if you were to be able to change those...


Lol but if that is the case, why did resetting the computer's clock fix AndyE's ppd problem?


----------



## AndyE

Quote:


> Originally Posted by *DizZz*
> 
> Lol but if that is the case, why did resetting the computer's clock fix AndyE's ppd problem?


It fixed the PPD forecasting issue I had. I don't know whether it had, or will have, any impact on the real points assigned to finished work units


----------



## goodtobeking

Hey guys, I just switched my 7970 over to do some folding on these monstrous WUs. I set client-type beta in the slot configuration, but I have yet to get any core 17 WUs. I failed out of the first 4 WUs I got because of an unstable OC, but they were all core 16 WUs. Is there anything else I need to do, or is it just the luck of the draw?


----------



## Krusher33

Unless they've stopped them again, you should be getting them from the get-go - after you complete whatever unit you're on, anyway.


----------



## goodtobeking

Well, I just checked on the computer and it's running fine now without any errors. Still folding a core 16 WU. Not sure how to abort it, so I'll just let it run overnight and see what it does.


----------



## Paramount

How much would I get from the beta version on a GTX 670?


----------



## DizZz

Quote:


> Originally Posted by *Paramount*
> 
> how much i get from beta vesion for gtx 670 ??


Depending on your overclock, anywhere from 45-60k ppd


----------



## cam51037

Quote:


> Originally Posted by *DizZz*
> 
> Depending on your overclock, anywhere from 45-60k ppd


I'm getting 75k+, sometimes 80-83k PPD with my 670 at 1.3GHz.


----------



## DizZz

Quote:


> Originally Posted by *cam51037*
> 
> I'm getting 75k+, sometimes 80-83k PPD with my 670 at 1.3GHz.


Oh wow i thought that was what 680s were getting. That's awesome


----------



## cam51037

Quote:


> Originally Posted by *DizZz*
> 
> Oh wow i thought that was what 680s were getting. That's awesome


Well, mine is technically faster than a stock 680, so yeah.

Not sure what a stock 670 gets though - aren't they clocked around 900MHz? If so, my 400MHz OC would get way more PPD.


----------



## goodtobeking

Quote:


> Originally Posted by *goodtobeking*
> 
> Well I just checked on the computer and its running fine now without any errors. Still folding a core16 WU. Not sure how to abort it so I will just let it run overnight to see what it does.


Well, I left it folding last night and it picked up a new core 17 WU after it finished the other one.

Sucks, as I was only getting 50k PPD. I think it has something to do with my drivers - I have the ones optimized for DiRT for BOINC


----------



## Doc_Gonzo

My GPUs were idle after I stopped work on the GPU project in the BOINC Pentathlon. Still running the drivers optimized for DiRT, I started the folding client and immediately picked up a core 17 work unit on each card








These are great as they barely impact CPU performance and I'm looking at 73K PPD per card


----------



## nova4005

Quote:


> Originally Posted by *Doc_Gonzo*
> 
> My GPU's are idle after I stopped work on the GPU project in the Boinc Pentathlon and running the drivers optimized for Dirt, I started the folding client and immediately picked up a core 17 work unit on each card
> 
> 
> 
> 
> 
> 
> 
> 
> These are great as they barely impact CPU performance and I'm looking at 73K PPD per card


What are your 7950's clocked at? Mine is on the 13.4 drivers and clocked at 1000 on the core and it is getting 86-88k on core 17. If you just started folding on them then it may go up when it averages out. These beta 17's are awesome on 79xx cards, where my 7970 lightning is at 111k!


----------



## [CyGnus]

nova4005, not only 79xx; I would say the whole 7K series shines with these core 17 units. Let's hope they are here to stay


----------



## Doc_Gonzo

Quote:


> Originally Posted by *nova4005*
> 
> What are your 7950's clocked at? Mine is on the 13.4 drivers and clocked at 1000 on the core and it is getting 86-88k on core 17. If you just started folding on them then it may go up when it averages out. These beta 17's are awesome on 79xx cards, where my 7970 lightning is at 111k!


Mine are both clocked the same as yours - 1Ghz. I have them clocked higher for Boinc but remembered that they didn't play nice at the same overclock when Folding. I don't have time to go through system crashes when the Pentathlon is running but I dare say they will clock higher when I have the time to tinker








Yes, I've just started folding on them again after a little break, so hopefully my PPD will go up








Edit to add, I'm still on the 13.2 drivers if that makes any difference


----------



## AndyE

Quote:


> Originally Posted by *[CyGnus]*
> 
> nova4005 not only 79xx i would say all the 7K series shine with these core 17 units lets hope they are here to stay


After a couple of x17 cores, one of my 7970 picked up a x16 WU again.
Back to 6k PPD ......


----------



## Asustweaker

I've been crunching the core17's for a while now. I have a strange issue though. The drivers I have are not the 266.58's that were required before to keep the CPU usage down.
I'm getting some weird core usage. I assigned the process for the core_17's to their own core in task manager. One core_17 uses the entire CPU thread, the other uses about 10%.

What drivers use the least amount of CPU???


----------



## gboeds

Quote:


> Originally Posted by *Asustweaker*
> 
> I've been crunching the core17's for a while now. I have a strange issue though. The drivers I have are not the 266.58's that were required before to keep the CPU usage down.
> I'm getting some weird core usage. I assigned the process for the core_17's to their own core in task manager. One core_17 uses the entire CPU thread, the other uses about 10%.
> 
> What drivers use the least amount of CPU???


when the latest batch of core 17 came out, I had to change drivers because the client would crash immediately on my machines running the 266.58 drivers.

I just went to the latest drivers, though, which use about 12% CPU per GPU, have not experimented with other drivers...any NVIDIA folders out there found a better driver?


----------



## labnjab

I fired up all 3 GPUs folding again last night and I'm not doing too bad. Each 570 is doing 41k PPD and my 670 is doing 71,500 PPD, so I'm sitting at 153k PPD without any CPUs folding


----------



## Doc_Gonzo

Does PCI-E bandwidth play any part in PPD? In the rig with 2 x 7950s, estimated PPD is 65-66k per card. In my other rig with a single 7950, estimated PPD is 83k








(all cards at same speed)


----------



## Paramount

Ohhh, I get 83k @ 1293MHz... awesome compared to the previous WUs, which only got 29k


----------



## nova4005

Quote:


> Originally Posted by *[CyGnus]*
> 
> nova4005 not only 79xx i would say all the 7K series shine with these core 17 units lets hope they are here to stay


Yes CyGnus, I have seen your 7870 pulling some good numbers as well. I would have to say that core 17 really likes 7xxx cards in general.








Quote:


> Originally Posted by *Doc_Gonzo*
> 
> Does PCI-E bandwidth play any part in PPD? In the rig with 2 x 7950's. estimated PPD is 65 - 66K PPD per card. In my other rig with a single 7950, estimated PPD is 83K
> 
> 
> 
> 
> 
> 
> 
> 
> (all cards at same speed)


I don't know for sure but both of my cards are running in PCI-e 3.0 slots, and my 7950 right now is at 88k ppd.


----------



## Doc_Gonzo

Quote:


> Originally Posted by *nova4005*
> 
> I don't know for sure but both of my cards are running in PCI-e 3.0 slots, and my 7950 right now is at 88k ppd.


Hmmmm, both my computers are running PCI-E 3.0 slots, but the two cards are @ x8 and the single is @ x16. The card in the x16 slot is close to your 88k PPD, while the other two are almost 20k lower. I'll see if it evens out tomorrow, after it's been running for a while!


----------



## nova4005

Quote:


> Originally Posted by *Doc_Gonzo*
> 
> Hmmmm, Both my computers are running PCI-E 3.0 slots but the two cards are @ x 8 and the single is @ x 16. The card in the x 16 slot is close to your 88K PPD, while the other two are almost 20K lower. I'll see if it evens out tomorrow, after I've been running it for a while!


Then it should be the same, because mine are running at 8x as well. Maybe they will even out.


----------



## Doc_Gonzo

Quote:


> Originally Posted by *nova4005*
> 
> Then it should be the same, because mine are running at 8x as well. Maybe they will even out.


I just noticed, but for some reason I'm getting 85% GPU usage on the dual cards and 98% on the single card - strange


----------



## nova4005

Quote:


> Originally Posted by *Doc_Gonzo*
> 
> I just noticed, but for some reason I'm getting 85% GPU usage on the dual cards and 98% on the single card - strange


I was trying to BOINC on all my threads and was getting low usage on my GPUs folding. I freed up a thread for them to use, and they both stay at 98+% usage now. Do you have a thread free for them?


----------



## Krusher33

You all still getting core 17's? I just fired up my client and I get a core 16 instead.


----------



## nova4005

Quote:


> Originally Posted by *Krusher33*
> 
> You all still getting core 17's? I just fired up my client and I get a core 16 instead.


4 out of 5 gpus are on core 17, and the 5th just snagged a core 16. They must be running low on units.


----------



## Krusher33

Figures...


----------



## ASSSETS

AMD 5870 stock 850MHz
WU 7663
TPF 4:47
PPD 16900


----------



## Krusher33

Is that recent?


----------



## goodtobeking

What drivers are best for these? Now onto my second WU at only 50k PPD with a 7970. Going to DL the "AMD 13.1 w/ 12.8 OpenCL & SDK for folding" from Darkryder's page unless anyone can point me to more optimized drivers.

Also, would these run on my 6970s? Or would those be stuck with the core 16 WUs at like 5k apiece?


----------



## ericeod

Quote:


> Originally Posted by *goodtobeking*
> 
> What drivers are best for these?? Now onto my second WU with only 50k PPD with a 7970. Going to DL the "Amd 13.1 w/ 12.8 opencl & sdk for folding" from Darkryder's page unless anyone else can point me to more optimized drivers.
> 
> Also, would these run on my 6970s?? Or would those be stuck with the core 16 WUs at like 5k a piece??


I'm running the 13.4 stock AMD drivers and am getting approx. 120k PPD. My HTPC with a 7770 is also running the stock 13.4 AMD drivers and it is getting approx. 28k PPD.


----------



## DizZz

Yeah I'm running stock 13.4 and getting about 125k ppd per 7970


----------



## Hawk777th

Are these back up? I wanna try them on my Titans if they are.


----------



## goodtobeking

Canceled current DL and switching to 13.4 driver DL. Thanks guys.


----------



## DizZz

Quote:


> Originally Posted by *Hawk777th*
> 
> Are these back up? I wanna try them on my Titans if they are.


Yes they are! Titan's are getting about 135k ppd


----------



## Hawk777th

Apiece? I read initial post but I guess I am lost on how to set it up. This is what I have. Is this correct?


----------



## DizZz

Quote:


> Originally Posted by *Hawk777th*
> 
> Apiece? I read initial post but I guess I am lost on how to set it up. This is what I have. Is this correct?


Yes! That looks good. Make sure you set those settings for both of your cards


----------



## Hawk777th

Awesome! Hope I pick a few up! During CC my rig only got 70K a day lol. Cant wait to see 6 digits of folding power!

Is there any way, when you launch the folding app, to have it not automatically pull WUs? Drives me nuts when it loads everything up; sometimes I only wanna use my GPUs!


----------



## Hawk777th

Thanks so much for helping me out guys! This is so crazy! I am pulling 280K PPD! When I bought my Titans they pulled the Core 17s out and I was stuck on 15s! I was so disappointed since I partially bought them for folding! These Core 17s have made the purchase so worth it!

Thanks again!


----------



## Doc_Gonzo

Quote:


> Originally Posted by *nova4005*
> 
> I was trying to boinc on all my threads and I was getting low usage on my gpus folding. I freed up a thread for them to use and they both stay at 98+% usage now. Do you have a thread free for them?


No, I don't have a thread free for them, but I don't have a thread free for the other card either. You're right though - if I stop BOINC, usage shoots up to 97%. Oh well, I just didn't want the cards sitting idle while the Pentathlon was running. I'm folding instead of running Dirt so that I don't have to take a thread away from BOINC, and I'll just have to suffer the slightly lower usage


----------



## nova4005

Quote:


> Originally Posted by *Doc_Gonzo*
> 
> No, I don't have a thread free for them, but I don't have a thread free for the other card either. You're right though - If I stop Boinc, usage shoots up to 97%. Oh well, I just didn't want the cards sitting idle while the Pentathlon was running. I'm folding instead of running Dirt so that I don't have to take a thread away from Boinc and I'll just have to suffer the slightly lower usage


Even without the thread free you are still getting good ppd from those cards. I am also folding on all my cards until the pentathlon is finished, and then when its finished I am heading back to WCG help conquer cancer on my 79xx cards. My 2 nvidia cards will stay folding and my 6970 will go back and forth.


----------



## Doc_Gonzo

Quote:


> Originally Posted by *nova4005*
> 
> Even without the thread free you are still getting good ppd from those cards. I am also folding on all my cards until the pentathlon is finished, and then when its finished I am heading back to WCG help conquer cancer on my 79xx cards. My 2 nvidia cards will stay folding and my 6970 will go back and forth.


Yep, not bad at all








I'll be going back to Dirt after the Pentathlon and making a run for 1 Billion points. After that, I intend on diversifying a bit and going after projects that I am interested in helping - not just the big point projects. I quite like folding and am interested in researching a 2P or 4P system. Electricity use is becoming a big factor though


----------



## nova4005

Quote:


> Originally Posted by *Doc_Gonzo*
> 
> Yep, not bad at all
> 
> 
> 
> 
> 
> 
> 
> 
> I'll be going back to Dirt after the Pentathlon and making a run for 1 Billion points. After that, I intend on diversifying a bit and going after projects that I am interested in helping - not just the big point projects. I quite like folding and am interested in researching a 2P or 4P system. Electricity use is becoming a big factor though


You and me both. I have been wanting an Intel 2P for a while now; if I can ever get the money together I will have one. I would have one already, but I couldn't talk my girlfriend into skipping vacation this year.







Electricity is a big deal, I have three systems that run 24/7 and it raises the power bill considerably, but if I ever build a 2p I would consolidate into just two systems running. I think it would be about the same power wise if I did that.


----------



## Avonosac

Quote:


> Originally Posted by *DizZz*
> 
> Yes they are! Titan's are getting about 135k ppd


I was getting 165k+.

Gotta figure out what happened with my rig as soon as I have a second... Got a BSOD on Friday and it hasn't wanted to post since


----------



## lacrossewacker

When I use the beta tag on my 670, it keeps saying "received bad unit" and tries to download another. Says I received another bad unit. Redownloads like 3-4 times, then just says "failed"

I was wondering if just doing a new windows reformat would help.


----------



## lacrossewacker

Not sure why it keeps giving me the "Bad_work_unit" 

using "client-type / beta" and that "-extra-core-args / -gpu-vendor=nvidia" tag. 

Darn you! my 670 was doing so well on this WU


----------



## jomama22

Quote:


> Originally Posted by *lacrossewacker*
> 
> 
> 
> 
> Not sure why it keeps giving me the "Bad_work_unit"
> 
> 
> 
> 
> 
> 
> 
> 
> 
> using "client-type / beta" and that "-extra-core-args / -gpu-vendor=nvidia" tag.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Darn you! my 670 was doing so well on this WU


More voltage or lower the oc by a smidge.


----------



## Shogon

Quote:


> Originally Posted by *Hawk777th*
> 
> Thanks so much for helping me out guys! This is so crazy! I am pulling 280K PPD! When I bought my Titans they pulled the Core 17s out and I was stuck on 15s! I was so disappointed since I partially bought them for folding! These Core 17s have made the purchase so worth it!
> 
> Thanks again!


Have you overclocked your titans at all? Mine at 1176 MHz says around 165k PPD.


----------



## Hawk777th

Nah I actually leave them stock clocks. And set a 64C Target for temp when I am folding since that is what I am comfortable with. At night my house cools down and I do about 1050MHZ at this temp target per card. During the day its been hot so I have been doing around 850-950.

When gaming though they go to 1150MHZ with a 75C Target 106% Power.


----------



## Caleal

Quote:


> Originally Posted by *lacrossewacker*
> 
> 
> 
> 
> Not sure why it keeps giving me the "Bad_work_unit"
> 
> 
> 
> 
> 
> 
> 
> 
> 
> using "client-type / beta" and that "-extra-core-args / -gpu-vendor=nvidia" tag.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Darn you! my 670 was doing so well on this WU


Probably drivers, you need the 3xx series drivers to run the new WUs.
Make sure you click the box to do a clean driver installation.


----------



## arvidab

Finally got my 6970 folding some 7663; the 13.5b2 drivers work and give it full utilization, or so I thought. It sometimes goes down in usage, sometimes for 10 sec and occasionally up to a minute where it goes to 0%. Is anyone else seeing this too? I had a modded 13.2 driver that wouldn't work before; I received the bad WU message.

Graph of usage, looks like this:


Anyway, PPD on stock is about 15-16k (fastest TPF is 4:50), on _16 I was seeing 8-9k on it, so it's an improvement on this card too.

*Anyone with a single 7970, 680 or Titan have any power consumption figures when folding on just the one GPU?* I'd love to see how power hungry they are; my 6970 draws 220W for reference.

Quote:


> Originally Posted by *lacrossewacker*
> 
> 
> 
> 
> Not sure why it keeps giving me the "Bad_work_unit"
> 
> 
> 
> 
> 
> 
> 
> 
> 
> using "client-type / beta" and that "-extra-core-args / -gpu-vendor=nvidia" tag.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Darn you! my 670 was doing so well on this WU


My 6970 did the same, installed the latest drivers I could find for that and it seem to have worked.


----------



## Hukkel

I got two core 15s this afternoon instead of core 17s... mehhh


----------



## Bal3Wolf

Anyone seeing a lot of crashes of the FAH core or client (forget which it said)? I ran the old 7662 without an issue through the Chimp Challenge, but I loaded it up today for the FaT and it has crashed twice and just sat at 0%, using 13.5b2 on my 2x 7970s.


----------



## Krusher33

Here I was thinking it was my overclocks giving me issues...


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> Here I was thinking it was my overclocks giving me issues...


lol, so I'm not alone then. I am running the older version of FAH, 7.2.9; I don't like the newer ones that use the website to control it.


----------



## DizZz

Quote:


> Originally Posted by *Bal3Wolf*
> 
> lol so im not alone then i am running the older ver of fah 7.2.9 i dont like the newer ones that use the website to control it.


7.3.6 (which is the newest) doesn't use a browser to control anything. You still have FAHControl.


----------



## Bal3Wolf

Quote:


> Originally Posted by *DizZz*
> 
> 7.3.6 (which is the newest) doesn't use a browser to control anything. You still have FAHControl.


Kool, will give that a try and see if it stops crashing and just doing nothing lol.

It still uses the web browser on 7.3.6, but you can click on advanced control, I see.


----------



## bfromcolo

Quote:


> Originally Posted by *Bal3Wolf*
> 
> any one seeing alot of crashes of fah core or client forgot which it said i ran the old 7662 without a issue thru the chimp but i loaded it up today for the fat and has had it crash twice and just sit at 0% using the 13.5b2 on my 2x 7970s.


I've had a pretty steady diet of 17s on my 7850, getting 47k at 1050/1250 running the 13.4 WHQL drivers at stock voltages. I was getting some errors at 1300 memory clock, didn't abort the unit but reworked some stuff. But I think I have had all 7663s.


----------



## DizZz

Quote:


> Originally Posted by *Bal3Wolf*
> 
> kool will give that a try and see if it will stop crashing and just doing nothing lol.
> 
> still uses web brower on 7.3.6 but you can click on adv control i see.


Yeah I guess you have to use web control to setup an identity but after that, just use FAHControl


----------



## Bal3Wolf

7.3.6 seems to run better; hasn't crashed at all.


----------



## Krusher33

Ever since I went from 13.4 to 13.5b2, instead of crashes I'm getting failed units. I'm just going to follow behind and upgrade my client as well.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Krusher33*
> 
> Ever since I went from 13.4 to 13.5b2, instead of crashes I'm getting failed units. I'm just going to follow behind and upgrade my client as well.


Yeah, I was getting a popup window saying the core or client had crashed; then it would sit idle till I end-tasked everything and reloaded folding.


----------



## [CyGnus]

Never got a failed unit yet, and I bumped the core +25MHz. These Zeta cores sure are amazing


----------



## Anthony20022

I was barely able to get 1GHz stable on 7662, but on 7663 I can increase it to 1050MHz


----------



## Krusher33

Quote:


> Originally Posted by *Bal3Wolf*
> 
> Quote:
> 
> 
> 
> Originally Posted by *DizZz*
> 
> 7.3.6 (which is the newest) doesn't use a browser to control anything. You still have FAHControl.
> 
> 
> 
> kool will give that a try and see if it will stop crashing and just doing nothing lol.
> 
> still uses web brower on 7.3.6 but you can click on adv control i see.
Click to expand...

OMG it's driving me nuts!


----------



## Bal3Wolf

I don't like 7.3.6 either, but it's not crashing now; been running problem-free since I upgraded to it.


----------



## Hemi177

I was getting failed units on 7.3.6 on my 7950. So I grabbed an exe I had of 7.1.52, and it's been going a lot better so far. Cross my fingers


----------



## scubadiver59

Quote:


> Originally Posted by *HemiRoR*
> 
> I was getting failed units on 7.3.6 on my 7950. So I grabbed an exe I had of 7.1.52, and it's been going a lot better so far. Cross my fingers


I also got a few failed units...three on stock clocked 7950s (two on one card and one on the other).

I think that two of the failures were due to the P11292's that I got--it looks like they aborted--but I'm not sure of the third failure.

I have the up-to-date beta drivers and am running the v7 FAH client.

My cards are currently doing 77.8k PPD each on P7663s.









I won't even go into the stock clocked AMD 8350 that's only pumping out 7k PPD (6 cores) on a P8566...that's a whole other issue


----------



## [CyGnus]

7.2.9 here all good


----------



## tictoc

I am running the latest beta driver and FAHControl 7.2.9. I have been back folding for the last two days.

No issues here, and I was able to bump my OC to 1200/1650.


----------



## DizZz

I'm running 7.3.6 and my 7970s are at 1280mhz and are getting ~135k ppd


----------



## cam51037

Quote:


> Originally Posted by *scubadiver59*
> 
> I also got a few failed units...three on stock clocked 7950s (two on one card and one on the other).
> 
> I think that two of the failures were due to the P11292's that I got--it looks like they aborted--but I'm not sure of the third failure.
> 
> I have the up-to-date beta drivers and am running the v7 FAH client.
> 
> My cards are currently doing 77.8k PPD each on P7663s.
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I won't even go into the stock clocked AMD 8350 that's only pumping out 7k PPD (6 cores) on a P8566...that's a whole other issue


That's about exactly what my 3570k does @ 4.4 GHz. 10k is the most I've seen from it folding 3 cores (about equal to 6 8350 cores IMO).

Maybe if you want to try Ubuntu folding on it? But that's a whole other can of worms you might not want to get into with GPU folding.


----------



## ericeod

I spent a week dialing in an OC on my 7970. I've been running the 13.4 drivers with the latest 7.3.6 FAH client at a 1180/1650 OC, and I've only seen one core 16 over the past few weeks. I've been getting 90% core 17 WUs. The only time I really see core 16s is when I've just installed the client; once I add the beta tag and the first core 16 completes, I get 17s after that.

Been going strong folding 24/7 with main rig and the 7770 in my HTPC. Surprisingly, the 7770 gets me 24k ppd.


----------



## Asustweaker

Any word out there on the linux native gpu core?


----------



## proteneer

Linux core should be working


----------



## scubadiver59

Are we finally saying "good-bye" to WinDoze???!!!

WooHoo!!!


----------



## gboeds

Quote:


> Originally Posted by *Asustweaker*
> 
> Any word out there on the linux native gpu core?


Quote:


> Originally Posted by *proteneer*
> 
> Linux core should be working


Quote:


> Originally Posted by *scubadiver59*
> 
> Are we finally saying "good-bye" to WinDoze???!!!
> 
> WooHoo!!!


so, who has a link to good linux GPU overclocking SW?


----------



## bfromcolo

Wah! Core 16 strikes again, from 47k to 6k PPD. I hope this is not a sign of things to come. I'm just a few days away from a million with 17s, or a few weeks...


----------



## scubadiver59

The FIRST post in this thread... make sure you have the arguments set right to get the 17's!


Spoiler: Warning: Spoiler!



Quote:


> Originally Posted by *mmonnin*
> 
> Subject: New GPU Core17 Project 7661
> AMD cards do not work on XP.
> 
> If you are using v7.2.x you will need to add this flag along with beta. Replace ati with nvidia depending on your card.
> 
> Code:
> 
> <extra-core-args>-gpu-vendor=ati</extra-core-args>
> 
> This is not needed for 7.3.6
> 
> This is how it will look when adding each flag on the left and together in the GPU slot on the right:
> 
> 
> I also show 1650 points in HFM; it's actually 1600. FAHControl is correct.
> 
> Edit: Also expect 1 CPU core usage for nvidia users. See quote for AMD driver suggestions for lower CPU usage:
> Expect dips in GPU utilization around frames.
> 
> Edit2: If you were running beta with 7.2.9 there was a period that if you didn't have the extra core args entered every WU would fail. 7.3.6 is ok.
> 
> Edit3: Oh, and there is some QRB
> 
> 
> 
> 
> 
> 
> 
> Doesn't seem like the extent of the last GPU QRB. Just puts it in line with other nvidia cores for me.
> 
> Edit4: HFM won't show PPD as the % goes by 2% in the log file. It will only show progress and be yellow.
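
For anyone editing the config file by hand instead of using FAHControl, the flags from the quoted post end up in FAHClient's config.xml roughly like this (a sketch only; the slot id, vendor value, and identity fields are placeholders for your own setup):

```xml
<config>
  <!-- Identity - replace with your own name/team/passkey -->
  <user v="YourName"/>
  <team v="0"/>

  <!-- GPU slot requesting beta (core 17) work.
       extra-core-args is only needed on v7.2.x; swap ati for nvidia
       depending on your card. -->
  <slot id="0" type="GPU">
    <client-type v="beta"/>
    <extra-core-args v="-gpu-vendor=ati"/>
  </slot>
</config>
```

Stop the client before editing, then restart it so the slot picks up the new flags.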


----------



## tictoc

Quote:


> Originally Posted by *bfromcolo*
> 
> 
> 
> 
> 
> 
> 
> 
> Wah! Core 16 strikes again, from 47k to 6k PPD. I hope this is not a sign of things to come. I'm just a few days away from a million with 17s, or a few weeks...


I still get about one core_16 WU per day. It is annoying to see my card running at 60% for 7 hours to complete 1 WU.

Hopefully core_17 goes live soon. The performance of the core_16 WU's is atrocious on the latest drivers.


----------



## gofwar

Will this improve the FPS in gaming?


----------



## tictoc

Quote:


> Originally Posted by *gboeds*
> 
> so, who has a link to good linux GPU overclocking SW?


I have not tried this tool out, but it seems to be the only working AMD overclocking utility around: AMDOverdriveCtrl


----------



## bfromcolo

Quote:


> Originally Posted by *tictoc*
> 
> I have not tried this tool out, but it seems to be the only working AMD overclocking utility around: AMDOverdriveCtrl


Isn't the Linux GPU folding limited to NVIDIA currently?


----------



## DizZz

Quote:


> Originally Posted by *bfromcolo*
> 
> Isn't the Linux GPU folding limited to NVIDIA currently?


Yes, at the moment, but AMD support should be released once AMD fixes their Linux drivers


----------



## tictoc

Quote:


> Originally Posted by *bfromcolo*
> 
> Isn't the Linux GPU folding limited to NVIDIA currently?


Yes it is. I guess the AMD Linux OpenCL drivers are still lacking, and it looks like AMD Linux folding won't be happening in the immediate future.



It is rather ironic, since AMD is always touting open platforms, that their drivers for Linux are less stable than NVIDIA's.


----------



## Krusher33

The demand is not there. I'm sure if everyone starts sending polite emails asking for linux drivers it'll happen sooner. Especially since there's steam for linux now.


----------



## Asustweaker

OK, so it's clear the Nvidia core works in Linux. Is it limited to core 17 beta WUs?

My second rig is at my shop now, so I can fold on it for free. But it only has a GTX 460 in it, so I would only wanna fold in Linux via core 15s


----------



## labnjab

How do you install the Nvidia drivers in Linux? I downloaded the file, but it asks for a program to run it


----------



## tictoc

Quote:


> Originally Posted by *labnjab*
> 
> How do you install the Nvidia drivers in Linux? I downloaded the file, but it asks for a program to run it


This seems to be the best guide out there.

How to install Nvidia drivers in Ubuntu

I do not have a working Nvidia GPU to test with, so hopefully someone on Linux will post their results.
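
The "asks for a program to run it" symptom usually just means the downloaded .run file isn't marked executable yet. A minimal sketch of the manual install (the driver version and display manager are examples; use whatever you downloaded and whatever DM your distro runs):

```shell
# Stop X first - the installer refuses to run while a display server is up.
sudo service lightdm stop

# Make the installer executable, then run it as root.
chmod +x NVIDIA-Linux-x86_64-319.23.run
sudo ./NVIDIA-Linux-x86_64-319.23.run

# Bring the desktop back up.
sudo service lightdm start
```

The guide linked above covers the same steps in more detail, including blacklisting nouveau if the installer complains about it.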


----------



## Krusher33

Are we out of units? Mine isn't downloading a new one.


----------



## tictoc

Quote:


> Originally Posted by *Krusher33*
> 
> Are we out of units? Mine isn't downloading a new one.


I just fired up my 6870 and 5770's to see what PPD I could get with core_17, and I got nothing but core_17 WU's.


----------



## Anthony20022

Quote:


> Originally Posted by *Krusher33*
> 
> Are we out of units? Mine isn't downloading a new one.


I seem to still be getting them..


----------



## Krusher33

Well crap o la. I had taken off the beta flag while troubleshooting and it downloaded a core 16 right away.







Guess I'll have to wait till it's done before I play with it some more.


----------



## bfromcolo

Just got my first 8900 x17 work unit. Dropped my PPD to 42K, compared to 47-48K with the 7663s. Are these new?


----------



## cam51037

Quote:


> Originally Posted by *bfromcolo*
> 
> Just got my first 8900 x17 work unit. Dropped my PPD to 42K, compared to 47-48K with the 7663s. Are these new?


Yeah they are, and they favor NVIDIA over AMD it seems.


----------



## Anthony20022

Quote:


> Originally Posted by *bfromcolo*
> 
> Just got my first 8900 x17 work unit. Dropped my PPD to 42K, compared to 47-48K with the 7663s. Are these new?


Yes, they seem to have just released. My PPD dropped also with these, see this thread: http://www.overclock.net/t/1395369/project-8900-core-17-work-unit


----------



## tictoc

*Anthony beat me to it.*









They are a new larger WU. Project 8900 core 17

From the limited data so far it looks like a decrease in PPD compared to the 7663 units, but it is a bigger unit with longer frame times and higher base points.
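
The trade-off between a bigger WU and the quick-return bonus can be sketched with FAH's published bonus formula. This is a rough estimator only; the k-factor, base credit, and timeout values you pass in are illustrative, not the actual 8900 project constants:

```python
import math

def estimate_ppd(tpf_seconds, base_credit, k_factor, timeout_days):
    """Rough PPD estimate including the quick-return bonus (QRB).

    FAH's bonus: final_credit = base_credit * max(1, sqrt(k * timeout / wu_time)),
    with times measured in days. tpf_seconds is the time per 1% frame,
    and a work unit is 100 frames.
    """
    wu_seconds = tpf_seconds * 100           # total time per work unit
    wu_days = wu_seconds / 86400.0
    bonus = max(1.0, math.sqrt(k_factor * timeout_days / wu_days))
    wus_per_day = 86400.0 / wu_seconds
    return base_credit * bonus * wus_per_day
```

Because the bonus scales with the square root of return speed, a longer-TPF unit needs proportionally higher base credit just to break even on PPD, which matches the dip people are reporting on the 8900s.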


----------



## Asustweaker

So, any word on running native Linux GPU for Nvidia core 15s?


----------



## tictoc

I do not think they are going to port the core_15 to Linux. I think the plan is to move all GPUs to the core_17 WU.

http://proteneer.com/blog/?p=1860

There were also some posts about Linux GPU folding a few days ago in this thread.


----------



## Asustweaker

Quote:


> Originally Posted by *tictoc*
> 
> I do not think they are going to port the core_15 to Linux. I think the plan is to move all GPUs to the core_17 WU.
> 
> http://proteneer.com/blog/?p=1860
> 
> There were also some posts about Linux GPU folding a few days ago in this thread.


OK, cool. I'm just curious whether the GTX 460 will be able to handle core 17 in Linux much better.

In Windows they took a good 15% PPD hit versus the core 15s.

Anyone got this running on a similar card yet in native Linux?


----------



## Asustweaker

Quote:


> Originally Posted by *gboeds*
> 
> so, who has a link to good linux GPU overclocking SW?


I don't know about the Kepler and Titan overclocking software. I do know that "Coolbits" in the xorg.conf file does not allow for Fermi OC'ing. I personally used a Fermi BIOS editor to apply a BIOS-level overclock, but that's something I would not recommend for the novice user.


----------



## joker927

Quote:


> Originally Posted by *AndyE*
> 
> The "Green" system


Wow! This system deserves props until the day that props end! (...or until GPUs get faster.)


----------



## nagle3092




----------



## aas88keyz

Quote:


> Originally Posted by *nagle3092*


Yeah, one time FAH estimated 300k PPD on one GTX 560 Ti 448 until it had folded a few frames and the PPD settled back down to realistic numbers for the card. Happens.

Keep on foldin'!


----------



## nagle3092

Quote:


> Originally Posted by *aas88keyz*
> 
> Yeah one time FAH estimated 300k ppd on one GTX 560 Ti 448 until FAH was able to fold a few segments and the ppd settled back down the realistic numbers for this card. Happens.
> 
> Keep on foldin'!


Two Titans, been folding overnight, and it still reads 408,412 PPD.


----------



## tictoc

Quote:


> Originally Posted by *aas88keyz*
> 
> Yeah one time FAH estimated 300k ppd on one GTX 560 Ti 448 until FAH was able to fold a few segments and the ppd settled back down the realistic numbers for this card. Happens.
> 
> Keep on foldin'!


Those are realistic numbers for 2x Titans on the 8900 WUs. NVIDIA cards get better PPD on the 8900 WUs than AMD cards. Project 8900


----------



## aas88keyz

Quote:


> Originally Posted by *nagle3092*
> 
> > Originally Posted by *aas88keyz*
> > 
> > Yeah one time FAH estimated 300k ppd on one GTX 560 Ti 448 until FAH was able to fold a few segments and the ppd settled back down to realistic numbers for this card. Happens.
> > 
> > Keep on foldin'!
> 
> Two titans, been folding overnight and still reads 408412ppd.

Quote:


> Originally Posted by *tictoc*
> 
> > Originally Posted by *aas88keyz*
> > 
> > Yeah one time FAH estimated 300k ppd on one GTX 560 Ti 448 until FAH was able to fold a few segments and the ppd settled back down to realistic numbers for this card. Happens.
> > 
> > Keep on foldin'!
> 
> Those are realistic numbers for 2x Titans on the 8900 WUs. NVIDIA cards get better PPD on the 8900 WUs than AMD cards. Project 8900

Ahh, that makes sense. I thought I was looking at the sig rig but couldn't find it; I must have been looking at a different signature from another post. My apologies. Looking good!


----------



## bfromcolo

Are we done with 7663s? They got better points than the 8900 for me, which is what I have been getting lately.


----------



## DizZz

Quote:


> Originally Posted by *bfromcolo*
> 
> Are we done with 7663s? They got better points than the 8900 for me, which is what I have been getting lately.


I haven't gotten a 7663 in 3 days, so I'm not sure. I got about 10k more PPD on them with my 7970s than with these dumb, long 8900s. On a side note, does anyone have a 690 they're folding on? I'm curious whether it gets more or less PPD than a Titan on these new WUs.


----------



## aas88keyz

Same amount of PPD for my two cards as I got from 7663s; just a larger WU for me.


----------



## InsideJob

My client is hung at ready and work status says update core at the start of a new 8900 unit... Help please.


----------



## STW1911

Hi everybody. I recently started folding on my newish rig for the OCN team; it's been a little over a month now. I started folding just before the core 17 WUs came out, and have been folding almost 24/7 on CPU and GPU ever since. If it wasn't for the core 17 WUs, I wouldn't have the little over 1.5 million points that I have now. My question is this: after I accumulated enough points for the team, I got my FAH postbit, which I think is pretty cool. Now I look at my profile and at my recent posts, and the postbit is gone. Does anybody know why it is gone, or who I can talk to to get this fixed? Your help would be appreciated.


----------



## anubis1127

Quote:


> Originally Posted by *InsideJob*
> 
> My client is hung at ready and work status says update core at the start of a new 8900 unit... Help please.


I haven't heard of that one yet, but haven't been around, or folding on GPUs lately, sorry. Maybe try closing the client, clear out the work folder, and start it back up.
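That "clear out the work folder" step can be scripted. A cautious sketch in Python: `FAH_DATA` is an assumption (the default v7 data path on Windows; adjust for your install), and note this throws away any in-progress WUs.

```python
import os
import shutil

# Assumption: default v7 data location on Windows; adjust for your install.
FAH_DATA = os.path.expandvars(r"%AppData%\FAHClient")

def clear_work_folder(data_dir):
    """Wipe everything under <data_dir>/work so the client re-downloads the core.

    Stop FAHClient first; this discards any in-progress WUs.
    """
    work = os.path.join(data_dir, "work")
    if os.path.isdir(work):
        for name in os.listdir(work):
            path = os.path.join(work, name)
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
    return work
```

Run it with the client stopped, e.g. `clear_work_folder(FAH_DATA)`, then start the client back up.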

Quote:


> Originally Posted by *STW1911*
> 
> Hi everybody. I recently just started folding on my newish rig for OCN team. It's been a little over a month now. I started folding just before the core 17 wu's came out, and have been folding almost 24/7 on CPU and GPU ever since. If it wasn't for the core 17 wu's, I wouldn't have the little over 1.5 million points that I have now. My question is this, after I accumulated enough points for the team, I got my FAH postbit wich I think is pretty cool. Now I look at my profile and at my recent posts, and the postbit is gone. Does anybody know why it is gone, or who I can talk to to get this fixed. Your help would be appreciated.


I'll check it out, I know they disappear after a while of folding inactivity, but if you've been active it should stay there.


----------



## DizZz

Yeah anubis it doesn't look like they've updated in about 3 or 4 days


----------



## anubis1127

Quote:


> Originally Posted by *DizZz*
> 
> Yeah anubis it doesn't look like they've updated in about 3 or 4 days


Ah, points and team rank aren't going up either? I just noticed about six guys (maybe more) who are active folders were changed to lapsed status on June 1st, so that's why some postbits disappeared.

I'm going to guess these issues are related.


----------



## DizZz

Quote:


> Originally Posted by *anubis1127*
> 
> Ah, points, and team rank aren't going up either? I just noticed about six guys (maybe more) that are active folders were changed to lapsed status on June 1st, so that's why some postbits disappeared.


Yeah, it seems like it stopped updating on May 31st, because it says I already have 2.5m this month.


----------



## nagle3092

Still going strong, at least these gpu blocks are justified now...


----------



## WLL77

Quote:


> Originally Posted by *DizZz*
> 
> Yeah it seems like it stopped updating on the May 31st because it says I already have 2.5m this month


Just to piggyback on this: my stats on OCN still say I am at 475 when I am actually at 425, so yeah, no movement.

On to core_17: the 7870 has been running flawlessly with a mild overclock, averaging about 55k on the 8900 WUs. Kinda miss ole 7331, simply because it finished in like 4 hours.


----------



## Rebelord

Sup guys. Been out of the OCN world for a few months. Have had the rig folding recently.

Still on 13.1 drivers for my 7950. Any word on which drivers to use for the core 17 WUs? I do have the flags set as in the first post. F@H 7.3.6

Thanks!


----------



## tictoc

I was running core_17 on 13.4 and 13.5 beta 2 with no problems. The official recommendation is the latest WHQL (13.4) driver from AMD.
I haven't tried folding with the newest beta (13.6), but it should work fine, since the 13.6 beta just adds new features and fixes bugs relative to 13.5 beta 2/13.4.


----------






## WiSK

Quote:


> Originally Posted by *InsideJob*
> 
> My client is hung at ready and work status says update core at the start of a new 8900 unit... Help please.


I have that sometimes when my anti-virus doesn't trust a new update of a FAH core. Not sure why the Pande group likes to put executable files in the AppData folder.


----------



## Bal3Wolf

Just started core 17 up after a few weeks off and got an 8900 work unit: base credit 6000 points, estimated credit 30k, 6 hrs left.


----------



## nagle3092


Just made the top 20, should be able to make the top 10. Not sure about the top 5 though...


----------



## anubis1127

Quote:


> Originally Posted by *nagle3092*
> 
> Just made the top 20, should be able to make the top 10. Not sure about the top 5 though...


Congrats!


----------



## InsideJob

Quote:


> Originally Posted by *WiSK*
> 
> I have that sometimes when my anti-virus don't trust a new update of a FaH core. Not sure why the Pande group likes to put executable files in the App*Data* folder.


Yeah, I did everything I could think of and nothing worked. Then I came back to my computer after being out and about the other day, and it was just randomly up and running fine on the 8900 unit... Strange; I hope it doesn't happen again. It sat in that "update core" state for almost 48 hours.


----------



## Bal3Wolf

Anyone else notice these new 8900 work units seem to generate more heat?


----------



## InsideJob

Not much more heat: my 7970 with the reference blower-style cooler would sit around 55-60°C while folding the old units, and it's sitting at 61°C on the 8900 unit.


----------



## NBrock

Quote:


> Originally Posted by *nagle3092*
> 
> Just made the top 20, should be able to make the top 10. Not sure about the top 5 though...


SWEET I AM # 18 and didn't even know it!!


----------



## scubadiver59

NVM


----------



## neurotix

Sapphire 7970 Vapor-X 1200/1600mhz


----------



## snipekill2445

I'm getting roughly 90k PPD from my reference 7970 on stock settings.

Once I get the rest of my W/C gear, I'll be able to overclock this beast.


----------



## scubadiver59

Quote:


> Originally Posted by *snipekill2445*
> 
> *I'm getting roughly 90k PPD from my reference 7970 on stock settings*.
> 
> Once I get the rest of my W/C gear, I'll be able to overclock this beast.


Hmmm...I was getting 83.5K PPD on my stock 7950s


----------



## snipekill2445




----------



## Gungnir

Quote:


> Originally Posted by *snipekill2445*
> 
> (Image)


That explains it; project 8900 isn't as good on AMD cards as 7663 is. On a project 7663 WU, you should get ~110k PPD, IIRC.


----------



## Bal3Wolf

Quote:


> Originally Posted by *Gungnir*
> 
> That explains it; project 8900 isn't as good on AMD cards as 7663 is. On a project 7663 WU, you should get ~110k PPD, IIRC.


I'd disagree; I'm seeing 115k from my [email protected] on 8900, around the same for 7663, maybe 5k more at best, but I also ran my cards at 1150MHz when I was doing the 7663s. Both my [email protected] were showing 30k+ for each work unit, around 6 hrs and 5 mins each.


----------



## Anthony20022

Quote:


> Originally Posted by *Bal3Wolf*
> 
> id disagree im seeing 115k from my [email protected] on 8900 around the same for 7663 maybe 5k more at best but i also ran my cards at 1150mhz when i was doing the 7663. Both my [email protected] were showing 30k+ for each work unit around 6hrs and 5 mins each.


That's strange; my 7950 @ 1050 is getting at least 5k PPD less on 8900 than 7663.


----------



## gboeds

7970 @ 1200/1500

7663: Avg. Time / Frame : 00:01:16 - 124,561.0 PPD

8900: Avg. Time / Frame : 00:03:38 - 121,621.2 PPD
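Numbers like these are easy to sanity-check: PPD is just per-WU credit scaled to a day from the time per frame. A quick sketch (the credit figure in the example is hypothetical, since actual core 17 credit varies with the QRB):

```python
def ppd(credit, tpf_seconds, frames=100):
    """Points per day from per-WU credit and time-per-frame (WUs have 100 frames)."""
    wu_seconds = tpf_seconds * frames
    return credit * 86400.0 / wu_seconds

# Hypothetical example: a 30,000-credit WU at a 4:00 TPF
print(ppd(30000, 240))  # → 108000.0
```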


----------



## snipekill2445

I just got a call, and I'm getting more work.

WATERLOOP! Here I come!

Can't wait to get this thang overclocked!


----------



## [CyGnus]

I am getting 127-129k PPD on these P8900s with my 7970 @ 1200/1600 on 13.6b.


----------



## labnjab

We have some cooler weather this weekend due to the tropical storm, so I decided to fire my GPUs up again until it gets hot again. All 3 cards picked up 8900s.

My 670 FTW loves it and is getting almost 75k PPD, but I'm getting mixed results from my two 570 Classifieds in my main rig. Both Classifieds are clocked the same and both have a CPU core free, but one card is getting much less PPD: one is at 37k while the other is only at 26k. Any idea what would cause this point spread between the two cards?


----------



## anubis1127

Quote:


> Originally Posted by *labnjab*
> 
> We have some cooler weather this weekend due to the tropical storm, so I decided to fire my gpus up again until it gets hot again. All 3 cards picked up 8900.
> 
> My 670 Ftw loves it and is getting almost 75k ppd, but i'm getting mixed results from my 2 570 classifieds in my main rig. Both classified are clocked the same and both have a cpu core free each, but one card is getting much less ppd. One is at 37k ppd but the other is only at 26k ppd. *Any idea as to what would cause this point spread between the 2 cards?*


Possibly different Run, Clone, Gen on same WU could have varying results. Not sure though.


----------



## Agent_kenshin

Quote:


> Originally Posted by *labnjab*
> 
> My 670 Ftw loves it and is getting almost 75k ppd, but i'm getting mixed results from my 2 570 classifieds in my main rig. Both classified are clocked the same and both have a cpu core free each, but one card is getting much less ppd. One is at 37k ppd but the other is only at 26k ppd. Any idea as to what would cause this point spread between the 2 cards?


Got my first 8900 last week and have been having issues just getting it going on my GTX 570, which runs at 920 for folding. It would not even start, and errors would come back as a bad core after about 30 minutes. I had this problem even when I returned my card to stock clocks, and I went as far as deleting the core and having the client re-download it; nothing worked. I gave up on it and went back to my console client, which gives core 15s, and all is well.

Today I decided to give these core 17s another try, hoping I would get those wonderful 7663s, which were giving me around 45-48K, but guess what: another 8900.

This time it got going (took a while), but the first two got to 1% then failed with "bad core". I then scaled my clocks to 860, and it took about 20 minutes to get to 1%, but when I checked on it a couple hours later the TPF was at 7:48. As I am posting it is at 96% complete, and I looked in the log for any errors; there have been a couple of bad states detected throughout the run.

I would like to hear what other Fermi users are experiencing with this WU; 8900 seems to be unstable on Fermi compared to 7660/7663. My GPU usage tends to bounce between 96-99%. The ForceWare version I am using is 314.22 and the V7 client is 7.2.9.

HFM is reading the PPD as 38K for me.


----------



## labnjab

One of my 570s runs great on 8900 but the other doesn't. I stopped folding on my second 570 last night; it kept failing 8900s no matter what clock speed, so after the fourth fail I just shut it down. All the errors were bad unit errors, so I'm not sure what's going on. The weather is warming up again, so today is my last day folding on my GPUs until the next foldathon anyway.


----------



## nagle3092

11th place; should have 8th by the end of today.


----------



## anubis1127

I'm finally in the top 20, LOL.


----------



## Bal3Wolf

Quote:


> Originally Posted by *anubis1127*
> 
> I'm finally in the top 20, LOL.


I could be top 10, but trying to split power between BOINC, folding, and mining is hard, lol, while keeping the power bill low and the heat down in my room.


----------



## anubis1127

Quote:


> Originally Posted by *Bal3Wolf*
> 
> i could be top 10 but trying to split up power between boinc,folding and mining is hard lol *and keep power bill low* and less heat in my room.


That sounds like a nearly impossible task! I just fold on my Intel 2P to try to keep the power bill low; I sold all my GPUs, so no BOINC or mining for now.


----------



## martinhal

Quote:


> Originally Posted by *labnjab*
> 
> One of my 570s runs great on 8900 but the other doesn't. I stopped folding on my 2nd 570 last night. It kept failing 8900s no mater what clock speed so after the 4th fail I just shut it down. All the errors were bad unit errors so I'm not sure whats going on. The weather is warming up again so today is my last day folding on my gpus until the next fat anyways


I had that too; tried to fold on a GTX 550 Ti and kept getting errors.


----------



## Agent_kenshin

After I completed my first 8900 on my GTX 570 @ 920, I decided to see if I could pull off another one, but I woke up to this:

12:40:01:WU00:FS00:0x17:Completed 375000 out of 2500000 steps (15%)
12:47:48:WU00:FS00:0x17:Completed 400000 out of 2500000 steps (16%)
12:55:52:WU00:FS00:0x17:Completed 425000 out of 2500000 steps (17%)
12:56:42:WU00:FS00:0x17:ERROR:exception: Error downloading array energyBuffer: clEnqueueReadBuffer (-36)
12:56:42:WU00:FS00:0x17:Saving result file logfile_01.txt
12:56:42:WU00:FS00:0x17:Saving result file log.txt
12:56:42:WU00:FS00:0x17:Folding@home Core Shutdown: BAD_WORK_UNIT
12:56:42:WARNING:WU00:FS00:FahCore returned: BAD_WORK_UNIT (114 = 0x72)
12:56:42:WU00:FS00:Sending unit results: id:00 state:SEND error:FAULTY project:8900 run:697 clone:0 gen:18 core:0x17 unit:0x00000015028c126651a6c1a65fec4877

I will try going back to stock clocks, though I had issues getting this WU past 1% before it would crap out; maybe I'll try a different driver.

Are people still getting those 7663s, or are they all done?


----------



## rubixcube101

Hey guys, how do I get my GPU usage to stay constant? It's varying between 30-50%, and I'm not sure why. Cheers


----------



## snipekill2445

How hot does it run under load?


----------



## rubixcube101

Quote:


> Originally Posted by *snipekill2445*
> 
> How hot does it run under load?


During gaming it runs just fine at 99% usage; that's why I'm a bit confused. It just started occurring after I downgraded to 7.2.9 to use the beta core 17s, which seem to be working, because I'm getting more PPD even with half the GPU usage. I'm not too concerned, because I'm on air with a reference 7970, so at least I can have it going 24/7 now without having to put up with a loud fan to keep it cool. I would like to know how to control it, though, so I can max it out if I feel like it.


----------



## anubis1127

Quote:


> Originally Posted by *rubixcube101*
> 
> During gaming it runs just fine. 99% usage. Thats why im a bit confused. And it just started occuring after i downgraded to 7.2.9 to use the beta 17 cores, which seem to be working cause im getting more ppd even with half the gpu usage. Not to concerned because im on air with a reference 7970 so atleast i can have it going 24/7 now without having to put up with a loud fan to keep it cool. Would like to know though how to control it so then i can max it if i feel like it.


You don't need to downgrade to 7.2.9. Which work unit is it? Can you post the F@H log?


----------



## rubixcube101

I downgraded because the beta wasn't working on 7.3.6, but it's working now on 7.2.9. Is the work unit the project? 11292

Here's the log. (I had to remove the "<" from the first few lines because it wouldn't show in the post otherwise.)


Spoiler: Warning: Spoiler!



Saving configuration to config.xml
13:40:35:Saving configuration to config.xml
13:40:35:config>
13:40:35: !-- Folding Slot Configuration -->
13:40:35: client-type v='beta'/>
13:40:35: extra-core-args v='-gpu-vendor=ati'/>
13:40:35:
13:40:35: !-- Network -->
13:40:35: proxy v=':8080'/>
13:40:35:
13:40:35: !-- User Information -->
13:40:35: passkey v='********************************'/>
13:40:35: team v='37726'/>
13:40:35: user v='rubixcube101'/>
13:40:35:
13:40:35: !-- Folding Slots -->
13:40:35: slot id='0' type='GPU'/>
13:40:35: slot id='1' type='SMP'/>
13:40:35:/config>
13:40:50:WU01:FS01:0xa4:Completed 35000 out of 250000 steps (14%)
13:42:46:WU01:FS01:0xa4:Completed 37500 out of 250000 steps (15%)
13:43:54:WU00:FS00:0x16:Completed 1800000 out of 60000000 steps (3%).
13:44:45:WU01:FS01:0xa4:Completed 40000 out of 250000 steps (16%)
13:46:32:WU01:FS01:0xa4:Completed 42500 out of 250000 steps (17%)
13:48:12:WU01:FS01:0xa4:Completed 45000 out of 250000 steps (18%)
13:49:51:WU01:FS01:0xa4:Completed 47500 out of 250000 steps (19%)
13:51:32:WU01:FS01:0xa4:Completed 50000 out of 250000 steps (20%)
13:53:12:WU01:FS01:0xa4:Completed 52500 out of 250000 steps (21%)
13:54:54:WU01:FS01:0xa4:Completed 55000 out of 250000 steps (22%)
13:56:35:WU01:FS01:0xa4:Completed 57500 out of 250000 steps (23%)
13:58:17:WU01:FS01:0xa4:Completed 60000 out of 250000 steps (24%)
13:59:57:WU01:FS01:0xa4:Completed 62500 out of 250000 steps (25%)
14:00:33:WU00:FS00:0x16:Completed 2400000 out of 60000000 steps (4%).
14:01:38:WU01:FS01:0xa4:Completed 65000 out of 250000 steps (26%)
14:03:18:WU01:FS01:0xa4:Completed 67500 out of 250000 steps (27%)
14:04:58:WU01:FS01:0xa4:Completed 70000 out of 250000 steps (28%)
14:06:40:WU01:FS01:0xa4:Completed 72500 out of 250000 steps (29%)
14:08:30:WU01:FS01:0xa4:Completed 75000 out of 250000 steps (30%)
14:10:16:WU01:FS01:0xa4:Completed 77500 out of 250000 steps (31%)
14:12:01:WU01:FS01:0xa4:Completed 80000 out of 250000 steps (32%)
14:13:54:WU01:FS01:0xa4:Completed 82500 out of 250000 steps (33%)
14:14:45:WU00:FS00:0x16:Completed 3000000 out of 60000000 steps (5%).
14:15:39:WU01:FS01:0xa4:Completed 85000 out of 250000 steps (34%)
14:17:28:WU01:FS01:0xa4:Completed 87500 out of 250000 steps (35%)
14:19:25:WU01:FS01:0xa4:Completed 90000 out of 250000 steps (36%)
14:21:17:WU01:FS01:0xa4:Completed 92500 out of 250000 steps (37%)
14:23:00:WU01:FS01:0xa4:Completed 95000 out of 250000 steps (38%)
14:23:19:Server connection id=5 on 0.0.0.0:36330 from 127.0.0.1
14:23:30:Server connection id=6 on 0.0.0.0:36330 from 127.0.0.1
14:23:30:Server connection id=5 ended
14:23:39:Server connection id=6 ended
14:24:48:WU01:FS01:0xa4:Completed 97500 out of 250000 steps (39%)
14:26:33:WU01:FS01:0xa4:Completed 100000 out of 250000 steps (40%)
14:28:10:WU01:FS01:0xa4:Completed 102500 out of 250000 steps (41%)
14:28:35:WU00:FS00:0x16:Completed 3600000 out of 60000000 steps (6%).
14:29:47:WU01:FS01:0xa4:Completed 105000 out of 250000 steps (42%)
14:31:28:WU01:FS01:0xa4:Completed 107500 out of 250000 steps (43%)
14:33:06:WU01:FS01:0xa4:Completed 110000 out of 250000 steps (44%)
14:34:43:WU01:FS01:0xa4:Completed 112500 out of 250000 steps (45%)
14:36:20:WU01:FS01:0xa4:Completed 115000 out of 250000 steps (46%)
14:37:56:WU01:FS01:0xa4:Completed 117500 out of 250000 steps (47%)
14:39:33:WU01:FS01:0xa4:Completed 120000 out of 250000 steps (48%)
14:41:24:WU01:FS01:0xa4:Completed 122500 out of 250000 steps (49%)
14:43:10:WU01:FS01:0xa4:Completed 125000 out of 250000 steps (50%)
14:44:57:WU01:FS01:0xa4:Completed 127500 out of 250000 steps (51%)
14:45:42:WU00:FS00:0x16:Completed 4200000 out of 60000000 steps (7%).
14:46:45:WU01:FS01:0xa4:Completed 130000 out of 250000 steps (52%)


----------



## ZDngrfld

Quote:


> Originally Posted by *rubixcube101*
> 
> I downgraded because the beta wasn't working on 7.3.6, but its working now on 7.2.9. Is the work unit the project?:11292
> 
> Heres the log. (Had to remove the "<" from the first few lines because it wouldnt show in the post otherwise.


11292 is Core 16 and not a beta WU. P8900 is the beta WU.


----------



## anubis1127

Quote:


> Originally Posted by *rubixcube101*
> 
> I downgraded because the beta wasn't working on 7.3.6, but its working now on 7.2.9. Is the work unit the project?:11292
> 
> Heres the log. (Had to remove the "<" from the first few lines because it wouldnt show in the post otherwise.
> 
> 
> *(full log snipped; quoted in the post above)*


Yes, the project is the work unit. That one is not a core 17 WU; it's the old core 16. To fold core 16 you need either really old AMD drivers, like 12.4, or new ones that have been modded with the old SDK.

I suspect that after you downgraded to 7.2.9, the client started up before you could add the beta flag and GPU vendor to your config, so it downloaded the core 16 WU, and that's why you're seeing the low GPU usage.

At this point you'll just have to wait for the 11292 to finish; after that you will likely pick up a beta unit and be at near 100% utilization.


----------



## rubixcube101

Oh ok then, ill see how it goes. Thanks guys.


----------



## Ribozyme

So I am going to buy a new graphics card in the coming weeks. With the beta core 17s, which cards are now best in terms of PPD, AMD or NVIDIA? I am debating between a 670/680 and a 7950.


----------



## [CyGnus]

Why do you consider a 670/680 and not a 770? With core 17 I think the 7950 would come out better than NVIDIA, and you would have 3GB/384-bit vs 2GB/256-bit.


----------



## Ribozyme

Quote:


> Originally Posted by *[CyGnus]*
> 
> Why do you consider 670/680 and not a 770? With core 17 i think the 7950 would come out better then nvidia and you would have 3Gb / 384 bit vs 2Gb / 256bit


The 770 has a TDP of 230W against 180W for the 680 and 170W for the 670, and I only have a 400W PSU; that's why. Could you downclock the 770 to consume exactly as much power as a 680? I guess you could.


----------



## anubis1127

Quote:


> Originally Posted by *Ribozyme*
> 
> 770 has a TDP of 230W against 180W for 680 and 170W for 670 *and I only have a 400w PSU* that's why. Could you downclock the 770 to consume exactly as much power as a 680? I guess you could.


In that case, I would go for the lowest TDP of them.


----------



## Ribozyme

Quote:


> Originally Posted by *anubis1127*
> 
> In that case, I would go for the lowest TDP of them.


I'm searching for the cheapest dual-blower 670 as we speak. I missed a deal on a GTX 670 MSI PE for 280 euro, though. It is now 290 euro, so still not bad; I might go for it if I don't find anything cheaper. My biggest gripe with buying a 670 now is its resale value next year before the 800 series launch. So now I am contemplating buying a 770 and downclocking it, running it for a year with some occasional folding, and selling it before the 800 launch, then hoping the 800 series has serious reductions in power consumption and buying the flagship to fold away 24/7. What do you think of that plan?


----------



## nagle3092

How are you guys pulling over 100k PPD with a 7970 on 8900? A guy at work has 3 PCs built for a media project, and they each have a Lightning; he's letting me fold on them until they get installed. The only thing is I'm not even pulling 30k PPD on them, running 7.3.6 and the 13.6 beta.


----------



## anubis1127

Quote:


> Originally Posted by *nagle3092*
> 
> How are you guys pulling over 100k ppd with a 7970 on 8900? A guy at work has 3 pcs built for a media project and they each have a lightening, hes letting me fold on them until they get installed. The only thing is Im not even pulling 30K ppd on them, running 7.3.6 and the 13.6 beta.


I'd say double-check the project. The core 16 WUs got ~9k PPD; times 3 that would be almost 30k PPD.


----------



## gboeds

Quote:


> Originally Posted by *nagle3092*
> 
> How are you guys pulling over 100k ppd with a 7970 on 8900? A guy at work has 3 pcs built for a media project and they each have a lightening, hes letting me fold on them until they get installed. The only thing is Im not even pulling 30K ppd on them, running 7.3.6 and the 13.6 beta.


Read wrong, nvm


----------



## nagle3092

Quote:


> Originally Posted by *anubis1127*
> 
> I'd say double check the project. The core 16 WUs got ~9k PPD * 3 would be almost 30k ppd.


It's definitely 8900; PPD as of now is 29362.


----------



## [CyGnus]

add the beta flag and do some core 17's


----------



## nagle3092

Quote:


> Originally Posted by *[CyGnus]*
> 
> add the beta flag and do some core 17's


Already did; FahCore 0x17, project 8900. My main rig had a catastrophic failure yesterday, so I'm down 500k PPD until I get it back up. I'm trying to get these going in the meantime.


----------



## anubis1127

Quote:


> Originally Posted by *nagle3092*
> 
> Already did, FahCore 0x17 project 8900. My main rig had a catastrophic failure yesterday so Im down 500K ppd until I get it back up. So Im trying to get these going in the meantime.


Are they OC'd? I'm not sure; you shouldn't really have to do anything special. Maybe just let them sit and fold for a while, then check HFM if you have that set up. FAHControl is terribly inaccurate for PPD estimates.


----------



## nagle3092

Quote:


> Originally Posted by *anubis1127*
> 
> Are they OC'd? I'm not sure, you shouldn't really have to do anything special. Maybe just let them sit, and fold for a while, and then check HFM if you have that setup. FAHControl is terribly inaccurate for PPD estimates.


Not OC'd; I wasn't sure about the setup since it's AMD. At this point I'm gonna just let them sit and hope that when I come in tomorrow they'll be reporting over 100k.


----------



## jmrios82

Quote:


> Originally Posted by *nagle3092*
> 
> Not oc'd, wasnt sure about the setup since its amd. At this point im gonna just let them sit and hope that when I come in tomorrow they will be reporting over 100k.


Do you have a passkey? You need a passkey to get the QRB with core 17, and as far as I know you have to fold some WUs before the QRB kicks in.
Here's a guide, where you can find how to get a passkey if you don't have one: http://www.overclock.net/t/977412/windows-7-complete-client-v7-guide
The client has had some changes since that guide was published; if you need more help, just post here, and I, or other users, can post screenshots showing how to add a passkey to your client.


----------



## nagle3092

Quote:


> Originally Posted by *jmrios82*
> 
> Do you have a passkey? You need a passkey to get the QRB with core17. And fold some WU's before get the QRB as far as I know.
> Here's a guide, there you can find how to get a passkey if you don't have one: http://www.overclock.net/t/977412/windows-7-complete-client-v7-guide
> The client had some changes since that guide was published, if you need more help, just post it here, and I, or other users, could post some screenshots about how to add a passkey to your client.


Yeah I put my passkey in, at home now so I'll see how it goes when I get back to work tomorrow.


----------



## DizZz

Is the cpu folding? I've found freeing up one thread per gpu increases the ppd as well.


----------



## anubis1127

Also, http://foldingforum.org/viewtopic.php?f=24&t=14714&sid=dd977740dc2b6f7b0e7b9e8968539a79&start=180#p244414

8900 is out of beta.


----------



## DEW21689

Just tried the beta cores again and wow I'm impressed.... I have a system with an I7 3770S (turbo disabled locked at 3.1GHz) and 2x7850s (overclocked to 1GHz).... My system is currently only pulling 230W and I'm getting JUST short of 94k ppd..... I'm just speechless at how these work units have turned my system (which was designed to be as energy efficient as I could possibly manage) into a folding monstrosity..

KEEP UP THE GOOD WORK!!!


----------



## anubis1127

Quote:


> Originally Posted by *DEW21689*
> 
> Just tried the beta cores again and wow I'm impressed.... I have a system with an I7 3770S (turbo disabled locked at 3.1GHz) and 2x7850s (overclocked to 1GHz).... My system is currently only pulling 230W and I'm getting JUST short of 94k ppd..... I'm just speechless at how these work units have turned my system (which was designed to be as energy efficient as I could possibly manage) into a folding monstrosity..
> 
> KEEP UP THE GOOD WORK!!!


That's awesome.


----------



## Hemi177

How is this even possible?


----------



## WiSK

Anyone got a project 8901 yet?


----------



## jmrios82

Quote:


> Originally Posted by *WiSK*
> 
> Anyone got a project 8901 yet?


I just received a 7660 (core 15) with the beta flag. Worst PPD on my GTX 780 since I got it.


----------



## WiSK

Yeah, I got a p7626 on my 660ti which is like half the normal ppd, and I got a p7625 on my 560ti. The 560ti got a couple of unstable machine errors, but after going back to stock clocks it's now getting higher ppd than I would have had with p8900.

Anyway, I guess they are releasing p8901 soon because it was just added to the project list.


----------



## Hukkel

I am now also on a P7625. It is a core 15 giving less than half the PPD I was getting with core 17.

Who of you is stealing my core 17s?


----------



## WiSK

Quote:


> Originally Posted by *Hukkel*
> 
> Who of you is stealing my core 17s?




I'm not normally the type of person who uses meme generator, but I just couldn't help myself


----------



## anubis1127

xD


----------



## Hukkel




----------



## WiSK

Reading foldingforum and they are talking about core 17 now being "advanced" - does this mean we need to change settings from "client-type=beta" to "client-type=advanced"?

I will try it, but my p7626 is not finished for another 4 hours.


----------



## jmrios82

Quote:


> Originally Posted by *WiSK*
> 
> Reading foldingforum and they are talking about core 17 now being "advanced" - does this mean we need to change settings from "client-type=beta" to "client-type=advanced"?
> 
> I will try it, but my p7626 is not finished for another 4 hours.


I was looking for info on this, and yes, you need to change to "advanced" according to this:
http://folding.typepad.com/
I just set my client to advanced, and the core 15 WU that I've got, should be finished in 25 min, so, I should get a core 17 now. I will confirm the results.


----------



## Hukkel

Ok then. I also changed to advanced. Hope you are correct. My WU still needs another 2 hours for a whopping 14k


----------



## jmrios82

Quote:


> Originally Posted by *Hukkel*
> 
> Ok then. I also changed to advanced. Hope you are correct. My WU still needs another 2 hours for a whopping 14k


Another Core 15 for me under the advanced flag, this time a 7624. Oh well..


----------



## Hukkel

POO!


----------



## WiSK

Maybe there is more than just p8900 in the pool of projects that are marked advanced. I'm going to keep an eye on the beta announcements thread on foldingforum to know when p8901 goes live.


----------



## bfromcolo

Just looking at these two topics on the folding forum, it would appear that 8900 is done and 8901 has not started, and there is no date for when this will change. Note the moderators closed both topics.

http://foldingforum.org/viewtopic.php?f=66&t=24488
http://foldingforum.org/viewtopic.php?f=66&t=24337&start=135

You can also go to the stats site and see that 8901 is not listed as an active project and the ZETA server that was handling out the 8900s has no work units.

Hopefully proteneer will come along and tell us when we will see core 17 advanced work units.


----------



## scubadiver59

Quote:


> Originally Posted by *jmrios82*
> 
> Another Core 15 for me under the advanced flag, this time a 7624. Oh well..


Don't feel so bad...I have two 7950's folding two P11292s--Core 16s, and my two GTX-580s are folding P76xx's--Core 15s!


----------



## cam51037

Quote:


> Originally Posted by *WiSK*
> 
> 
> 
> I'm not normally the type of person who uses meme generator, but I just couldn't help myself


OMG. NEW. AVATAR. INCOMING.

eta 30 seconds!


----------



## scubadiver59

Quote:


> Originally Posted by *jmrios82*
> 
> I was looking for info on this, and yes, you need to change to "advanced" according to this:
> http://folding.typepad.com/
> I just set my client to advanced, and the core 15 WU that I've got, should be finished in 25 min, so, I should get a core 17 now. I will confirm the results.


TYVM!!!


----------



## RushiMP

Phew, just checked on my farm and it's the same story there. Thought I had done something wrong and got banned from the beta WUs.


----------



## [CyGnus]

This morning (9 am Lisbon/Portugal) I fired up the client and got a P8900 under the advanced flag, so I guess they are back


----------



## Hukkel

Quote:


> Originally Posted by *[CyGnus]*
> 
> This morning (9 am Lisbon/Portugal) I fired up the client and got a P8900 under the advanced flag, so I guess they are back


Just wanted to post this as well









Core 17 8900 under the advanced flag.


----------



## snipekill2445

Quote:


> Originally Posted by *Hukkel*
> 
> Just wanted to post this as well
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Core 17 8900 under the advanced flag.


Same story on my client, set it to advanced after the last (cruddy) WU finished, and I got another 8900 <3


----------



## scubadiver59

Quote:


> Originally Posted by *snipekill2445*
> 
> Same story on my client, set it to advanced after the last (cruddy) WU finished, and I got another 8900 <3


After my two Core 15s and two Core 16s, I got three Core 17 8900s (>20k credit) and another Core 15 (~14k credit).

My GTX-680, for TC folding, will be in tomorrow, so I can see what that one does tomorrow night.


----------



## DEW21689

I've been getting some serious lag on my system when folding the core 17 project 8900 work units. When I move windows around it almost looks like I don't have my video drivers installed, anyone else experiencing similar symptoms? I'm running the 13.4 drivers for my cards (see sig rig)


----------



## [CyGnus]

install the 13.x http://www.guru3d.com/files_details/amd_catalyst_13_x_(13_150_1_june_21)_download.html


----------



## WiSK

Quote:


> Originally Posted by *DEW21689*
> 
> I've been getting some serious lag on my system when folding the core 17 project 8900 work units. When I move windows around it almost looks like I don't have my video drivers installed, anyone else experiencing similar symptoms? I'm running the 13.4 drivers for my cards (see sig rig)


Also make sure you don't have all cores folding CPU, try with 6 (configuration, slots, CPU, edit, cores=6, save) so there's a core free for sending instructions to each GPU.
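For reference, the same change can be seen directly in the v7 client's config.xml (FAHControl normally writes this file for you). A rough sketch, assuming an 8-thread CPU and two GPU slots; the slot ids and the `cpus` count are illustrative, not from anyone's actual config:

```xml
<config>
  <!-- leave two threads free, one per GPU slot -->
  <slot id='0' type='CPU'>
    <cpus v='6'/>
  </slot>
  <slot id='1' type='GPU'/>
  <slot id='2' type='GPU'/>
</config>
```

Editing through FAHControl (Configure, Slots) is the safer route; hand-editing config.xml should only be done with the client stopped.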


----------



## DEW21689

Quote:


> Originally Posted by *[CyGnus]*
> 
> install the 13.x http://www.guru3d.com/files_details/amd_catalyst_13_x_(13_150_1_june_21)_download.html


I'll give it a try
Quote:


> Originally Posted by *WiSK*
> 
> Also make sure you don't have all cores folding CPU, try with 6 (configuration, slots, CPU, edit, cores=6, save) so there's a core free for sending instructions to each GPU.


This isn't the cause sadly. Even if I'm not folding on my CPU at all I still get the lag. Also, if I'm not folding on my CPU, my GPUs don't appear to be using the CPU at all; it sits at 1-2% and idles down to 1.6GHz


----------



## anubis1127

Updated the thread title to reflect recent changes.


----------



## [CyGnus]

Quote:


> Originally Posted by *anubis1127*
> 
> Updated the thread title to reflect recent changes.


----------



## martinhal

For some reason they seem to be giving me better ppd


----------



## amang

OK I need some help on how to make good use of my 2 GTX Titan here.

I've got FAH client v7.3.6 up and running. I've set both my GPU slots to these flags:

Code:

<extra-core-args>-gpu-vendor=ati</extra-core-args>
<client-type>advanced</client-type>

Is there anything else that I have to add or tweak apart from the above?

Do I have to do anything with my CPU setting? Do I need to relieve 1 core for each GPU?

Sorry for all these questions, this FAH client v7 is way too advanced for me!


----------



## WLL77

Ok, have left my client on beta flag, picked up two openmm 11292wu's yesterday that killed my ppd. Got a zeta8900 this morning hopefully they are back.


----------



## WiSK

I'm trying to understand how you can pick up 8900s with both beta and advanced. Maybe it's because there is no other beta project available right now, so the client falls back to picking up advanced units?


----------



## WLL77

Amang





Quote:


> Originally Posted by *amang*
> 
> OK I need some help on how to make good use of my 2 GTX Titan here.
> 
> I've got FAH client v7.3.6 up and running. I've set both my GPU slots to these flags:
> 
> Code:
> 
> <extra-core-args>-gpu-vendor=ati</extra-core-args>
> <client-type>advanced</client-type>
> 
> Is there anything else that I have to add or tweak apart from the above?
> 
> Do I have to do anything with my CPU setting? Do I need to relieve 1 core for each GPU?
> 
> Sorry for all these questions, this FAH client v7 is way too advanced for me!





I believe in 7.3 all you have to do is put "client-type" "beta" under the extra options in the slot config. And as far as I have read, you may want to set aside one CPU core per GPU to ensure they run smoothly.

Wisk:
I dunno, saw the post about switching to advanced, but I figured I would just see what happens staying on beta. It has happened before where units have temporarily run out and then come back. Perhaps they still have a few 8900s on the beta server?


----------



## martinhal

I got core 16 on beta but got 4 core 17 with advanced flag. My gpu's are happy.


----------



## [CyGnus]

Quote:


> Originally Posted by *amang*
> 
> OK I need some help on how to make good use of my 2 GTX Titan here.
> 
> 
> 
> 
> 
> I've got FAH client v7.3.6 up and running. I've set both my GPU slots to these flags:
> 
> Code:
> 
> <extra-core-args>-gpu-vendor=ati</extra-core-args>
> <client-type>advanced</client-type>
> 
> Is there anything else that I have to add or tweak apart from the above?
> 
> Do I have to do anything with my CPU setting? Do I need to relieve 1 core for each GPU?
> 
> Sorry for all these questions, this FAH client v7 is way too advanced for me!


I find client *7.2.9* better overall, and these 13.x drivers are really good; my 7970 @ 1200/1600 is doing 133K


----------



## gboeds

Quote:


> Originally Posted by *[CyGnus]*
> 
> 
> I find client *7.2.9* better overall, and these 13.x drivers are really good; my 7970 @ 1200/1600 is doing 133K


I would like that ppd....is there somewhere to get older versions of V7? All I can find is v6 or the latest v7....


----------



## DEW21689

Quote:


> Originally Posted by *gboeds*
> 
> I would like that ppd....is there somewhere to get older versions of V7? All I can find is v6 or the latest v7....


I have a copy of the 7.2.9 client installer; if you know of a site that doesn't require me to create an account signing over my soul and firstborn child, I will gladly upload it.


----------



## [CyGnus]

gboeds try here: https://fah-web.stanford.edu/file-releases/beta/release/fah-installer/windows-2008-64bit/v7.2/


----------



## gboeds

Quote:


> Originally Posted by *[CyGnus]*
> 
> gboeds try here: https://fah-web.stanford.edu/file-releases/beta/release/fah-installer/windows-2008-64bit/v7.2/


thanks!


----------



## [CyGnus]




----------



## drnilly007

Quote:


> Originally Posted by *[CyGnus]*
> 
> 
> I find client *7.2.9* better overall, and these 13.x drivers are really good; my 7970 @ 1200/1600 is doing 133K


Since when can a 3570k and a 7970 get 133k ppd?


----------



## [CyGnus]

drnilly007 it's only the 7970 that does those 133K; I don't fold on the CPU since the Core 17 release


----------



## mrwesth

So I tried to run fahcore 17 on my 560ti 448 but haven't had success.

I added the flag client-type advanced and the core downloads just fine, but the WU progress hangs at 0 percent. I updated drivers and tried different v7 client versions with no success. fahcore 17 is using ~8 percent CPU whilst no progress shows in the v7 client.

Core 15 will run fine.



Any suggestions?


----------



## martinhal

How long are you letting it run? I found I had to leave it about 5 or so minutes the first time before things started to look and work like normal.


----------



## mrwesth

Quote:


> Originally Posted by *martinhal*
> 
> How long are you letting it run? I found I had to leave it about 5 or so minutes the first time before things started to look and work like normal.


Not very long--a few minutes.

That was pretty much the response I was hoping for before I get some zzz's. It's 3:30 AM here.

I'll update in the morning if there isn't any progress.


----------



## martinhal

Good luck. Hope you wake up with some good points. Also, just set the GPU OC to stock for now to make sure it's not an OC issue. I had to reduce a BF3-stable OC by 50MHz, as core 17 is an OC killer.


----------



## mrwesth

Hard to sleep with this on my mind. Checked one more time -- 15min after wu downloaded and nothing has updated.


----------



## snipekill2445

Quote:


> Originally Posted by *martinhal*
> 
> How long are you letting it run? I found I had to leave it about 5 or so minutes the first time before things started to look and work like normal.


Yep, same thing here. Showed 0% for about 10 minutes, then starting showing stats normally.


----------



## mrwesth

Hmm... well here's to hoping yall are right and I dun got all worked up over nothin'.
Maybe this WU just take a long time to complete/update.



YAY!

But... why is my gt 520 est ppd on same wu as high as the 560 and better than the [email protected]???
Sleepy time


----------



## anubis1127

Quote:


> Originally Posted by *mrwesth*
> 
> Hmm... well here's to hoping yall are right and I dun got all worked up over nothin'.
> Maybe this WU just take a long time to complete/update.
> 
> 
> 
> YAY!
> 
> But... why is my gt 520 est ppd on same wu as high as the 560 and better than the [email protected]???
> Sleepy time


Your PPD shouldn't be the same, that looks strange. I found on my 560ti 448 core that there wasn't much difference in PPD between core 15 and core 17; on either one I'd get right around ~30k PPD.


----------



## martinhal

Quote:


> Originally Posted by *anubis1127*
> 
> Your PPD shouldn't be the same, that looks strange. I found on my 560ti 448 core that there wasn't much difference in PPD between core 15, and core 17. On either one I'd get right around ~30k PPD.


This is normal. My 7770 and 7970 reported the same ppd at first. It will settle down after a while.


----------



## mrwesth

Quote:


> Originally Posted by *martinhal*
> 
> This is normal. My 7770 and 7970 reported the same ppd at first. It will settle down after a while.


Correct you are. All seems well now!

I was seeing ~22k ppd on the 560ti 448; now it is reporting 30k, will see if it holds over a week or so.


----------



## martinhal

Good to hear







Hope the WU's last.


----------



## El-Fuego

The 8900 is running on my CPU, and now my GPU is running a 7809 with the advanced tag


----------



## scubadiver59

Quote:


> Originally Posted by *El-Fuego*
> 
> The 8900 is running on my CPU, and now my GPU is running a 7809 with the advanced tag


Ouch!


----------



## $ilent

Is 8900 unit only for GPU folding or can CPU do it too? What flag is best for Linux folding 3770k? Thanks


----------



## El-Fuego

Quote:


> Originally Posted by *scubadiver59*
> 
> Ouch!


I know, right? I don't know why my PC pulled the heavy one onto the CPU.

Also, my CPU slot is set to -1; should I set it to something else to free a thread to help the GPU? I heard that will help.


----------



## arvidab

Quote:


> Originally Posted by *$ilent*
> 
> Is 8900 unit only for GPU folding or can CPU do it too? What flag is best for Linux folding 3770k? Thanks


GPU only.

Running _client-type=beta_, _max-packet-size=small_ and _next-unit-percentage=100_ on mine. Has resulted in ~50k PPD according to TC stats.
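For anyone wanting to copy that setup: in the v7 client's config.xml those three options look roughly like the sketch below (same values as in the post above; normally you would add them via FAHControl's Expert tab rather than hand-editing, and the `v='...'` syntax is my reading of how v7 stores options):

```xml
<!-- global client options; small packets, fetch next WU at 100% -->
<client-type v='beta'/>
<max-packet-size v='small'/>
<next-unit-percentage v='100'/>
```

`next-unit-percentage=100` stops the client downloading the next WU early, which matters on slow or metered connections; `max-packet-size=small` avoids the larger uploads.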


----------



## $ilent

Do they still do those juicy 90,000ppd work units?


----------



## arvidab

Don't know, I had a 68k (P10083) in early June, and an 80k one in late March. Last couple of days, the best I've had was [email protected] PPD.


----------



## $ilent

I see, is there any plan to bring bigadv back? Can you do GPU folding in Linux yet either?


----------



## anubis1127

Quote:


> Originally Posted by *$ilent*
> 
> I see, is there any plan to bring bigadv back? Can you do GPU folding in Linux yet either?


bigadv never went anywhere. The deadlines were just changed so that single CPUs could no longer complete them on time, and thread count increased from 12 to 16. (You can still complete them on a heavily OC'd, 4.8ghz+, 3930k/3960x in native Linux, or so I've heard).

You can GPU fold on NV cards in Linux.


----------



## $ilent

oo any guide on how to setup nv gpu folding on linux please?


----------



## anubis1127

I could probably do up a guide this weekend after I get my new GPU.


----------



## $ilent

Cheers, I may as well fold on my GTX 570 whilst folding on my 3770k in Linux.


----------



## [CyGnus]

Guys, I just got a 7870 on another rig and I can't make HFM show PPD... I have the psummaryC URL set, effective rate selected, nothing works... but on my main rig with the 7970 all is fine. What could it be?


----------



## arvidab

Quote:


> Originally Posted by *$ilent*
> 
> Cheers, I may as well fold on my GTX 570 whilst folding on my 3770k in Linux.


Just be prepared that the GPU takes a core due to Nv's OpenCL drivers.

Quote:


> Originally Posted by *[CyGnus]*
> 
> Guys, I just got a 7870 on another rig and I can't make HFM show PPD... I have the psummaryC URL set, effective rate selected, nothing works... but on my main rig with the 7970 all is fine. What could it be?


How long did you wait? Can take a while before it starts showing.


----------



## [CyGnus]

The thing is, it shows the TPF/ETA fine but not the PPD/points; in FAHControl all is fine though


----------



## $ilent

Can anyone tell me how to run an NVIDIA GPU on Linux? I'm interested in getting it set up tonight. Anyone know the PPD of a GTX 570?


----------



## anubis1127

Quote:


> Originally Posted by *$ilent*
> 
> Can anyone tell me how to run an NVIDIA GPU on Linux? I'm interested in getting it set up tonight. Anyone know the PPD of a GTX 570?


I believe you just have to install the NV drivers and then fire up your client. Probably around 38-40k PPD if I had to guess, my gtx 560ti 448 core does 32k PPD, and that card is almost a 570.


----------



## $ilent

How would a Linux noob do that?









I could do with a 'type -download xyv' in terminal sort of guide


----------



## anubis1127

Quote:


> Originally Posted by *$ilent*
> 
> How would a Linux noob do that?
> 
> 
> 
> 
> 
> 
> 
> 
> 
> I could do with a 'type -download xyv' in terminal sort of guide


I haven't tried installing official NV drivers recently, so I'm not sure what to expect yet. After the WU I'm folding on my CPU finishes I'm going to install Ubuntu on my desktop, and see if I can't figure it out, and take some screenies for a possible guide.


----------



## $ilent

Thanks anu


----------



## anubis1127

Ok, got xubuntu installed, now to try folding.

Well that was even easier than I thought. I just used one of the x.org NV drivers supplied by Ubuntu, then added a GPU slot in FAHControl with the 'advanced' client-type flag on. Fired right up.


----------



## $ilent

What do I type into a terminal, anubis, to do that? Sorry, I'm crap with Ubuntu.


----------



## [CyGnus]

HFM is acting up on me; it has all the info but the PPD... FAHControl is working OK though, guess I will stick to that only


----------



## anubis1127

Quote:


> Originally Posted by *$ilent*
> 
> What do I type into a terminal, anubis, to do that? Sorry, I'm crap with Ubuntu.


I didn't type anything into terminal, just found where the "additional drivers" were, and selected the latest NV one they had listed:



Then I rebooted, which I'm not sure I had to, but I did for good measure. Then I just added a GPU slot in FAHControl the same way you would in Windows.

Seems to be working well, 34k PPD stock clocks, which is slightly higher than what I was seeing in Windows.
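For reference, the GPU slot added above ends up in config.xml as something like this sketch (slot id is illustrative and the `v='...'` option syntax is my assumption about how v7 stores it; FAHControl writes it for you):

```xml
<config>
  <client-type v='advanced'/>
  <!-- one GPU slot; the v7 client auto-detects the card -->
  <slot id='0' type='GPU'/>
</config>
```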


----------



## $ilent

Guys, how much PPD could a 7850 2GB card get from the best WUs? I might get a 7850; they are relatively cheap at £160 with the AMD mega bundle.


----------



## bfromcolo

Quote:


> Originally Posted by *$ilent*
> 
> Guys, how much PPD could a 7850 2GB card get from the best WUs? I might get a 7850; they are relatively cheap at £160 with the AMD mega bundle.


My 7850 at 1050/1250 does about 42k PPD with the 8900; it was around 48k with the previous core 17 WU. It's a 1G card, but I don't think memory capacity is an issue for folding.


----------



## Akula

*Currently folding with 3 x GTX 680's*
Advanced - Flag

They are currently running in SLI - slightly overclocked and all pulling the same WU & PPD,
although the PPD is wrong: estimated 28k per WU and receiving 10k.

Just wondering if there is anything I can do to optimize my setup for 3-Way SLI?


----------



## anubis1127

Quote:


> Originally Posted by *Akula*
> 
> *Currently folding with 3 x GTX 680's*
> Advanced - Flag
> 
> They are currently running in SLI - Slightly overclocked and all pulling the same WU & PPD
> Although the PPD is wrong, estimated 28k per WU and receiving 10k.
> 
> Just wondering if there is anything I can do to optimize my setup for 3-Way SLI?


Disable SLI before you start your client up.


----------



## scubadiver59

You should be getting ~90k PPD on a stock 680...and in your case, combined 270k PPD.

I myself never fold in SLI.

Are you running the "advanced" flag? If not, load it and get one of the 8900's (though you can get those with the "beta" flag as well).


----------



## cam51037

Quote:


> Originally Posted by *Akula*
> 
> *Currently folding with 3 x GTX 680's*
> Advanced - Flag
> 
> They are currently running in SLI - Slightly overclocked and all pulling the same WU & PPD
> Although the PPD is wrong, estimated 28k per WU and receiving 10k.
> 
> Just wondering if there is anything I can do to optimize my setup for 3-Way SLI?


Turn SLI off and let them fold on their own, they should each be doing around 100k PPD on their own, so that's an easy 300k PPD.

Edit: ninjaed
Edit: ninjaed x2


----------



## [CyGnus]

Is core 17 in shortage or something?


----------



## WiSK

Quote:


> Originally Posted by *[CyGnus]*
> 
> Is core 17 in shortage or something?


I got a core15 just now as well if that's what you mean?
Quote:


> Originally Posted by *diwaker*
> New Core17 Project 8901 released for beta testing.


----------



## bfromcolo

From the folding forum announcement regarding 8901 work unit:

http://foldingforum.org/viewtopic.php?f=66&t=24514
Quote:


> This project will run on both ATI (HD 5000 or higher) and NVIDIA (Fermi or higher) clients.
> Please let me know if there are any issues. I would like beta testers to report the GPU temperature as well because these WUs have ~75000 atoms as compared to ~45000 atoms in p8900.


Does that mean that if it takes 12 hours for me to process an 8900, it's going to take 20 hours to work an 8901? Is the scaling linear?

Will the advanced flag pick these up or do we have to go back to beta?


----------



## [CyGnus]

change to beta


----------



## $ilent

How about ppd on a 7870 tahiti LE guys?


----------



## [CyGnus]

Quote:


> Originally Posted by *$ilent*
> 
> How about ppd on a 7870 tahiti LE guys?


Also curious; I have a 7870 (70K) and a 7970 (133K), so the LE should be in the 80K range maybe


----------



## gboeds

Quote:


> Originally Posted by *bfromcolo*
> 
> From the folding forum announcement regarding 8901 work unit:
> 
> http://foldingforum.org/viewtopic.php?f=66&t=24514
> Does that mean that if it takes 12 hours for me to process an 8900, it's going to take 20 hours to work an 8901? Is the scaling linear?
> 
> Will the advanced flag pick these up or do we have to go back to beta?


Quote:


> Originally Posted by *[CyGnus]*
> 
> change to beta


Anyone picking up these 8901s yet? All my GPUs are on the beta flag; AMD is picking up core 16 and NVIDIA is picking up core 15


----------



## scubadiver59

Quote:


> Originally Posted by *gboeds*
> 
> Anyone picking up these 8901s yet? All my GPUs are on the beta flag; AMD is picking up core 16 and NVIDIA is picking up core 15


Was on straight 8900's until this last 7660


----------



## martinhal

Just picked up two 8901 WUs. They look like heavy work.


----------



## $ilent

What's the PPD martin?


----------



## martinhal

Around 110 K ppd


----------



## $ilent

Which GPU?

Are these 8900 units permanent? I,e not temporary for a few week.


----------



## snipekill2445

Quote:


> Originally Posted by *martinhal*
> 
> Just picked up two 8901 WUs. They look like heavy work.


Did you get it set to advanced or Beta?


----------



## martinhal

Quote:


> Originally Posted by *snipekill2445*
> 
> Did you get it set to advanced or Beta?


On the beta flag


----------



## WiSK

Quote:


> Originally Posted by *martinhal*
> 
> Around 110 K ppd


I see you in my threat list!


----------



## martinhal

Quote:


> Originally Posted by *WiSK*
> 
> I see you in my threat list!


Thanks to GPU folding







Fire up some more hardware, Im a week away


----------



## WiSK

Quote:


> Originally Posted by *martinhal*
> 
> Thanks to GPU folding
> 
> 
> 
> 
> 
> 
> 
> Fire up some more hardware, Im a week away


I would go out right now and buy another GPU, but my two cases are both mITX so all PCIe slots are filled...









(but it's a good excuse to maybe swap the GTX 560 Ti for a Kepler card)


----------



## Hukkel

Guys any reference times and PPD for a popular card running the new 8901?


----------



## WiSK

Quote:


> Originally Posted by *Hukkel*
> 
> Guys any reference times and PPD for a popular card running the new 8901?


Are you folding with your 680 or your 670 (or both)? I have a 660ti @ 1175MHz - is that a good reference for you? I will post TPF in a couple of hours when I hopefully pick up a p8901.


----------



## martinhal

Quote:


> Originally Posted by *Hukkel*
> 
> Guys any reference times and PPD for a popular card running the new 8901?


7970 @ 1250/1675, 97% load, 4:45 TPF, 110K PPD


----------



## Hukkel

Hmm I just received one myself on my GTX680 @1188 Mhz. It will run a good 9 hours and my PPD dropped from 90K to 82K compared to the 8900 WU. I guess I am moving back to advanced


----------



## martinhal

Quote:


> Originally Posted by *Hukkel*
> 
> Hmm I just received one myself on my GTX680 @1188 Mhz. It will run a good 9 hours and my PPD dropped from 90K to 82K compared to the 8900 WU. I guess I am moving back to advanced


Are you getting core 17 on advanced ? I only got core 16


----------



## gboeds

Picked up 3 8901s this morning, not liking the trend. Saw a slight drop in PPD from 7663 to 8900, now a more significant drop to 8901....

And here's hoping HFM is just confused, because it does not agree with V7!

HD7970 @ 1200/1600: 104,588 PPD (4:55 TPF) (HFM has it at 75k PPD....) This is down from solid 120kPPD on 8900

GTX 480 @ 800: 31,452 PPD (10:56 TPF)(HFM has it at 23,168 PPD...) My other 480 on same clock is running a 8900 for 35.6k PPD, V7 and HFM agree

GTX 460 @ 850: 15.4k PPD (17:37 TPF)(HFM says 11.4k PPD...) This is down from 17.2k PPD on 8900s
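V7 vs HFM disagreements aside, the basic relation between TPF and PPD is easy to sanity-check: a WU has 100 frames, so WUs/day = 86400 / (TPF in seconds × 100), and PPD = credit per WU × WUs/day. A quick sketch using the ~35k-credit, 4:50-TPF 8901 figures reported in this thread (QRB makes real credit speed-dependent, so this is only a cross-check, not how the client computes it):

```python
def ppd(tpf_seconds, wu_credit, frames=100):
    """Points per day from time-per-frame and per-WU credit."""
    wus_per_day = 86_400 / (tpf_seconds * frames)
    return wu_credit * wus_per_day

# 4:50 TPF at ~35k credit, as reported for p8901 on a 7970
print(round(ppd(4 * 60 + 50, 35_000)))  # -> 104276
```

That reproduces the ~104k PPD figure, so when HFM shows 75k for the same TPF, it is most likely using stale (non-QRB) credit data for the project rather than measuring the card differently.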


----------



## bfromcolo

I fired mine up with the advanced flag and got an 8900.

Quick question: the client lets you add both client-type advanced and beta at the same time if you want. What's the effect of having both? Does the order matter?


----------



## WiSK

Not happy with p8901. My 660Ti was giving me 75kppd on p8900, now down to 45kppd on p8901. I'll set client-type advanced again









Edit: FAHcontrol says 64kppd, HFM says 45kppd


----------



## 47 Knucklehead

Quote:


> Originally Posted by *anubis1127*
> 
> I didn't type anything into terminal, just found where the "additional drivers" were, and selected the latest NV one they had listed:
> 
> 
> 
> 
> Then I rebooted, which I'm not sure I had to, but I did for good measure. Then I just added a GPU slot in FAHControl the same way you would in Windows.
> 
> Seems to be working well, 34k PPD stock clocks, which is slightly higher than what I was seeing in Windows.


Hmmm, I might have to whip up a couple of 4GB Flash drives and install Linux on them and use them as "plug in folding" boot drives for my machines with GTX 560Ti's. I pretty much stopped Folding on them under Windows because they have become so worthless.


----------



## WiSK

Quote:


> Originally Posted by *47 Knucklehead*
> 
> Hmmm, I might have to whip up a couple of 4GB Flash drives and install Linux on them and use them as "plug in folding" boot drives for my machines with GTX 560Ti's. I pretty much stopped Folding on them under Windows because they have become so worthless.


If you aren't gaming on those machines, you can get the PC to boot up with the iGPU instead of the GTX560ti's and get a little extra. That way the OS won't bother your GPUs at all and folding can use the full capacity of your cards.


----------



## thegreener

Quote:


> Guys any reference times and PPD for a popular card running the new 8901?


Got an 8901 today.

HD5870, driver 13.4

PPD: 11,600
TPF: 21 min

GPU: 99% load

Greetings


----------



## [CyGnus]

This weather (40ºc) is killing me... my poor 7970 is at 72ºc


----------



## arvidab

Quote:


> Originally Posted by *bfromcolo*
> 
> I fired mine up with the advanced flag and got an 8900.
> 
> Quick question: the client lets you add both client-type advanced and beta at the same time if you want. What's the effect of having both? Does the order matter?


You should only have one or the other; it will probably disregard one if you have both (don't know which one) or just refuse to work (but that doesn't seem to be the case). v6 didn't work if you had _advmethods_ and _beta_ flags at the same time, iirc.

The order of the flags/options does not matter.


----------



## DEW21689

Quote:


> Originally Posted by *[CyGnus]*
> 
> install the 13.x http://www.guru3d.com/files_details/amd_catalyst_13_x_(13_150_1_june_21)_download.html


Figured I'd get back to you on this... I tried using this and holy mother of dysfunctional drivers! I was almost convinced it was a virus or designed to kill my system. I had like 50 errors during installation, and when I restarted, it flat out broke my Crossfire and threw another 300 errors saying blah blah blah can't start. The system hard locked and I had to do a forced shutdown. After turning it back on I again had a ton of errors on screen but I couldn't do anything: mouse wouldn't work, no sound, keyboard wouldn't respond, nothing. I legit had to restart with the Windows disc in my DVD drive and perform a system restore before I could log into Windows.

If this works for others fantastic, but use at your own risk!


----------



## [CyGnus]

I did not have any problems with them and they give me +3k compared to 13.6b2. I just use one card, though.


----------



## neurotix




----------



## error-id10t

Quote:


> Originally Posted by *Hukkel*
> 
> Guys any reference times and PPD for a popular card running the new 8901?


fwiw, 8901 on mine is just about to finish (nice, it picked up another 8901 straight after):

PPD: 248422
TPF: 2 mins 43 secs

The 2nd one is now 5.5% into its run and showing:

PPD: 252181
TPF: 2 mins 42 secs

The 2nd one has now changed, no errors:
PPD: 64732
TPF: 6 mins 47 secs

Compared to a 8900 the other card is running:

PPD: 73341
TPF: 5 mins 02 secs

I should switch the flag on that card too. Not that I understand everything, but 8901 looks much better. Both core 17s are using ~13% of CPU each.


----------



## snipekill2445

Quote:


> Originally Posted by *[CyGnus]*
> 
> This weather (40ºc) is killing me... my poor 7970 is at 72ºc


72c you say?


----------



## gboeds

um, which is the real 8901? The one I finished earlier with 4:50 TPF for 35k points (104k ppd)

or this one:


----------



## scubadiver59

Quote:


> Originally Posted by *DEW21689*
> 
> Figured I'd get back to you on this... I tried using this and holy mother of dysfunctional drivers! I was almost convinced this was a virus or designed to kill my system. I had like 50 errors during installation, and when I restarted my system it flat out broke my crossfire, had like another 300 errors saying blah blah blah can't start, system hard locked and I had to do a forced shut down. After turning my system back on I again had a crap ton of errors on my screen but I couldn't do anything, mouse wouldn't work, no sound, keyboard wouldn't respond nothing. I legit had to restart my system with the windows disc in my DVD drive to perform a system restore before logging into Windows.
> 
> If this works for others fantastic, but use at your own risk!


Umm...F8 and safemode? Much easier!


----------



## neurotix

Quote:


> Originally Posted by *gboeds*
> 
> um, which is the real 8901? The one I finished earlier with 4:50 TPF for 35k points (104k ppd)
> 
> or this one:


I have the same problem...



What is up with this?

Also, earlier, I had an 8901 that kept saying "bad state detected, resuming from last checkpoint" or something after it would finish 5%. I ended up dumping it, and now this.


----------



## martinhal

Quote:


> Originally Posted by *neurotix*
> 
> I have the same problem...
> 
> 
> 
> What is up with this?
> 
> Also, earlier, I had an 8901 that kept saying "bad state detected, resuming from last checkpoint" or something after it would finish 5%. I ended up dumping it, and now this.


Check your GPU OC. I personally found that "bad state detected, resuming from last checkpoint" meant my OC was not stable.


----------



## AndyE

Quote:


> Originally Posted by *neurotix*
> 
> I have the same problem...
> 
> What is up with this?


Check your log file. There seem to be two sub-versions of P8901 units around. One has 2,500,000 timesteps, the other 1,000,000 timesteps - giving a much faster TPF.
I am not sure if the actual points awarded by the server reflect this, but the forecasts of client-side utilities seem to be "confused" in their projected PPD.

.... It is beta and beta is there for testing out stuff ....

Andy


----------



## arvidab

Finally started core_17 in Linux on my 560Ti; it added about 110W on top of just letting it sit idle while folding on my CPU. Let's see if I get any PPD worth it.


----------



## WiSK

From foldingforum.org, p8901 has some issue:
Quote:


> Originally Posted by *diwakar*
> Thanks a lot for pointing out the mismatch in the number of steps. I am suspending this project from assignment as of now. I will release an updated project on Monday. The work server would still accept all the already assigned WUs.


----------



## $ilent

Quote:


> Originally Posted by *arvidab*
> 
> Finally started core_17 in Linux on my 560Ti; it added about 110W on top of just letting it sit idle while folding on my CPU. Let's see if I get any PPD worth it.


How did you do it? I want to fold my gtx 570 in linux.


----------



## DEW21689

Quote:


> Originally Posted by *scubadiver59*
> 
> Umm...F8 and safemode? Much easier!


Tried that; couldn't do it either. The system would boot into safe mode and I would still be unable to do anything.


----------



## arvidab

Quote:


> Originally Posted by *$ilent*
> 
> Quote:
> 
> 
> 
> Originally Posted by *arvidab*
> 
> Finally started core_17 in Linux on my 560Ti; it added about 110W on top of just letting it sit idle while folding on my CPU. Let's see if I get any PPD worth it.
> 
> 
> 
> How did you do it? I want to fold my gtx 570 in linux.
Click to expand...

I just installed through the _Driver Manager_ (Linux Mint, might be called something else in Ubuntu), choose the newest available in the list, _nvidia-313-updates_.

I also generated a _xorg.conf_ (Ubuntu users should have one; it deals with the display).

In a terminal:

Code:

sudo nvidia-xconfig

I then rebooted and added a GPU slot in FAHControl and away it went.

That said, it adds about 110-120W, and the 8900 I got is 22k PPD / 11:15 TPF at stock clocks, but I have to reduce my SMP folding count by one to just three cores. Core _17_ takes a core for itself, and _Xorg_ now uses up to 22% (of a core), so my SMP PPD is down from 28k to 12k on this P6354.

The CPU also runs in the 80s (the GPU is not an exhaust) and the unit has restarted two times now (a sign of instability). I just reduced my SMP to only two cores and took off my side panel. Let's see how it goes...


----------



## bfromcolo

Quote:


> Originally Posted by *WiSK*
> 
> From foldingforum.org, p8901 has some issue:


Quote:


> Originally Posted by diwakar
> Thanks a lot for pointing out the mismatch in the number of steps. I am suspending this project from assignment as of now. I will release an updated project on Monday. The work server would still accept all the already assigned WUs.


So does that mean I would need to go back to "advanced" to pick up a 8900 when the 8901 I am working completes? Or do I just leave it on "beta"?


----------



## WiSK

Might as well leave it on beta, you'll pick up p8900 in the absence of other beta projects. Hopefully on Monday they will improve the stability and the ppd of p8901.


----------



## snipekill2445

I just got two 8900's on beta. Back to 100K PPD, up from the 90K PPD of the 8901.


----------



## BWG

Can you OC the GPU in Linux, or do you have to flash the bios on the card to hard code a default clock/voltage in?


----------



## anubis1127

Quote:


> Originally Posted by *BWG*
> 
> Can you OC the GPU in Linux, or do you have to flash the bios on the card to hard code a default clock/voltage in?


Easiest way is probably to flash your stable OC to the card. I believe there are methods to OC in Linux; I haven't explored them yet though.


----------



## BWG

I've never used Mint before, but it looks interesting. It has a Windows feel to it sort of like suSE used to back in the XP days.


----------



## anubis1127

Quote:


> Originally Posted by *BWG*
> 
> I've never used Mint before, but it looks interesting. It has a Windows feel to it sort of like suSE used to back in the XP days.


Mint is essentially Ubuntu with a different Desktop Environment, there are some subtle differences, but not major (at least not that I can tell). I always just install XFCE anyway, so I can be using Mint, or Ubuntu, and have pretty much the same experience.


----------



## DizZz

Quote:


> Originally Posted by *anubis1127*
> 
> Mint is essentially Ubuntu with a different Desktop Environment, there are some subtle differences, but not major (at least not that I can tell). I always just install XFCE anyway, so I can be using Mint, or Ubuntu, and have pretty much the same experience.


Ubuntu Server is by far my favorite. It's incredibly easy to update the kernel, it's a very stripped-down OS so there are no wasted resources, and the command line is all you need.


----------



## anubis1127

Quote:


> Originally Posted by *DizZz*
> 
> Ubuntu Server is by far my favorite. It's incredibly easy to update the kernel, it's a very stripped os so no wasted resources, and the command line is all you need


Well right, that is what I run on my dedicated folder, I was speaking for times when you'd want a GUI.


----------



## DizZz

Quote:


> Originally Posted by *anubis1127*
> 
> Well right, that is what I run on my dedicated folder, I was speaking for times when you'd want a GUI.


Ah ok my mistake. I prefer openbox + a panel like tint but I'm a minimalist freak so that might not be suitable for everyone. XFCE is my second favorite gui though and it's a lot more user friendly.


----------



## $ilent

What is a good tpf for a 7870 running a 8900?


----------



## $ilent

anyone?


----------



## El-Fuego

Quote:


> Originally Posted by *$ilent*
> 
> What is a good tpf for a 7870 running a 8900?


Mine is about 8:25, but I'm having some problems now and my 8900 has been stuck at 99.99% for a few hours; not sure if that number is accurate or not.


----------



## WiSK

Quote:


> Originally Posted by *El-Fuego*
> 
> mine is about 8:25 but I'm having some problems now and my 8900 been stuck on 99.99% for few hours now, not sure if that number is accurate or not


Did you try to pause the unit and restart?


----------



## STW1911

Quote:


> Originally Posted by *$ilent*
> 
> What is a good tpf for a 7870 running a 8900?


I have an MSI 7870 HAWK running 1250/1375 and I'm getting about 5:27 TPF on the 8900's, if that helps you.


----------



## $ilent

Thanks, my TPF is the same at those clocks.


----------



## arvidab

Quote:


> Originally Posted by *STW1911*
> 
> Quote:
> 
> 
> 
> Originally Posted by *$ilent*
> 
> What is a good tpf for a 7870 running a 8900?
> 
> 
> 
> I have an MSI 7870 HAWK running 1250/1375 and I'm getting about 5:27 TPF on the 8900's, if that helps you.
Click to expand...

Is yours Pitcairn or Tahiti LE though?


----------



## $ilent

Mine is tahiti le


----------



## arvidab

Yep, I know. Was thinking about STW's Hawk. In theory yours should do better, silent. But I don't know if that is true.


----------



## $ilent

Exactly, it doesn't make sense; mine should be faster considering it's a cut-down 7950.


----------



## STW1911

Quote:


> Originally Posted by *arvidab*
> 
> Is yours Pitcairn or Tahiti LE though?


It's a Pitcairn. I'm trying to figure out if there is any way to get a little more out of it though. July Foldathon coming up, and I want more if I can get it. Any help would be appreciated.


----------



## $ilent

Eh, how come your Pitcairn gets the same TPF as my Tahiti at the same clocks?


----------



## STW1911

Quote:


> Originally Posted by *$ilent*
> 
> Eh, how come your Pitcairn gets the same TPF as my Tahiti at the same clocks?


Not sure, mine is clocked at 1250 core, from 1100 stock, and 1375 mem, from 1200 stock. It's been running 98% GPU usage at about 60c. Maybe I just got a good card, or you just got a not so good card? What 7870 do you have?


----------



## Escatore

So just so that I'm current, what all do we need to do to enable the new p8900 WUs? I set my client-type to 'beta', but do we still need to do the

Code:

<extra-core-args>-gpu-vendor=ati</extra-core-args>

if we have 7.3.6 or is that old?


----------



## tictoc

The 8900 WU has been moved to 'advanced'. To get 8900 WU's use client-type 'advanced'. The extra-core-args flag is no longer necessary.

FYI there are now two new core_17 beta WU's: Projects 7810 and 7811. Here is Vijay's blog post about core_17 benchmarking.

Has anyone picked up any of these units, and if so how is the performance?
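For anyone setting the flag by hand rather than through FAHControl: the client-type option lives in FAHClient's config.xml, the same file the old extra-core-args line went in. A minimal sketch, assuming v7's usual tag names and that the GPU slot is slot 0 (adjust the id to match your own setup):

```xml
<config>
  <!-- per-slot option: ask the assignment server for 'advanced' WUs -->
  <slot id='0' type='GPU'>
    <client-type v='advanced'/>
    <!-- the old extra-core-args entry for -gpu-vendor can be deleted -->
  </slot>
</config>
```

Restart FAHClient after editing so the new option is picked up; the same setting is reachable in FAHControl under the slot's Extra slot options.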


----------



## gboeds

Quote:


> Originally Posted by *tictoc*
> 
> The 8900 WU has been moved to 'advanced'. To get 8900 WU's use client-type 'advanced'. The extra-core-args flag is no longer necessary.
> 
> FYI there are now two new core_17 beta WU's: Projects 7810 and 7811. Here is Vijay's blog post about core_17 benchmarking.
> 
> Has anyone picked up any of these units, and if so how is the performance?


7811 on a 7970 @ 1200:

Avg. Time / Frame : 00:01:25 - 111,359.4 PPD (same card 8900: Avg. Time / Frame : 00:03:37 - 122,462.8 PPD)


----------



## error-id10t

Doing a 7811 at the moment.

TPF: 1mins 22secs .. PPD: 103416


----------



## ghostrider85

Quote:


> Originally Posted by *error-id10t*
> 
> Doing a 7811 at the moment.
> 
> TPF: 1mins 22secs .. PPD: 103416


What's your clockspeed on the 670?


----------



## Escatore

So if we have the client-type set to advanced on the GPU slot, will it necessarily give us only the new WUs? Or will it intermix them with old ones?

I set my GPU slot client-type to "advanced" and it gave me an 8074


----------



## error-id10t

Quote:


> Originally Posted by *ghostrider85*
> 
> What's your clockspeed on the 670?


I run mine at 1202MHz, but they never get nice, even utilisation... always jumping around, even going down to 90% at times. SMP is set to 6 so they both have a thread.

Also, this 7811 seems to "save" more often than the 8900... seems like it's ~3 times more often (utilisation drops to 0).

I run mine on the beta flag; don't want non-17 units lol. The non-17 units do have better utilisation though.


----------



## scubadiver59

WTH???

Core-15 P8054 for 33.8k PPD???


----------



## tictoc

Quote:


> Originally Posted by *Escatore*
> 
> So if we have the client-type set to advanced on the GPU slot, will it necessarily give us only the new WUs? Or will it intermix them with old ones?
> 
> I set my GPU slot client-type to "advanced" and it gave me an 8074


Looking at the server status logs, it looks like the server ran out of WU's this morning, but according to the most recent log entry the server should be giving out WU's now. You can see the status of all the Folding@home servers here: Folding@home Server Status

With the advanced flag you will get a mix of core_15 and core_17 WU's on your 670. The only time you will receive the old core_15 WU's is if they run out of core_17 or there is a problem with the core_17 server.


----------



## cam51037

Quote:


> Originally Posted by *scubadiver59*
> 
> WTH???
> 
> Core-15 P8054 for 33.8k PPD???


Yeah I'm getting core 15 units as well. I could instantly tell because my card started whining. :/

EDIT: F&*#! Core 15 keeps getting Unstable Machine errors on a once stable OC for Core 17 units. I just reset the card to stock, but if it keeps failing I'll be deleting this unit.


----------



## Avonosac

I had my Titan at 1202/1650 and was getting a TPF of 1:11 on 7810. This WU seems to not like my previous OC on the Titan; I'm dialing it back 26MHz to see if that helps keep the unit stable.

EDIT: This unit is weird, I'm getting wild fluctuations in TPF, everywhere from about 55 seconds to 3 minutes..

This is what I'm seeing now:


----------



## AndyE

Quote:


> Originally Posted by *Avonosac*
> 
> I had my Titan at 1202/1650 and was getting a TPF of 1:11 on 7810. This WU seems to not like my previous OC on the Titan; I'm dialing it back 26MHz to see if that helps keep the unit stable.
> 
> EDIT: This unit is weird, I'm getting wild fluctuations in TPF, everywhere from about 55 seconds to 3 minutes..


Uneven TPFs are part of the core 17 design.
Use the time to completion as the basis for TPF and PPD calculations.

Here are some of my numbers. About 40 WUs with 7810/7811 were processed yesterday.

Based on time completed, with PPD calculated by the Bonus Calculator.
All cards at stock frequencies.

GTX Titan, 7810: TPF between 1:25 and 1:28, ppd = between 170k and 178k
GTX Titan, 7811: TPF between 1:05 and 1:08, ppd = between 155k and 166k

GTX 780, 7810: TPF = 1:42, ppd = 135k
GTX 780, 7811: TPF = 1:15, ppd = 134k

AMD 7970 GE, 7810: TPF = 2:06, ppd = 99k
AMD 7970 GE, 7811: TPF = 1:36, ppd = 93k


----------



## Avonosac

What were the clocks on your Titans for those WUs?


----------



## AndyE

Quote:


> Originally Posted by *Avonosac*
> 
> What were the clocks on your Titans for those WUs?


There is some slight variation with clocks on Titans.

All cards are set to an offset of 0 MHz (basically 836 MHz), but with the load of the core 17 application and the heat generated/cooled, my cards are running between 992 and 1005 MHz actual frequency. The memory speed offset is also set to 0.
Power consumption with these units is between 78 and 82% of TDP. A dual-Titan system uses about 430 watts at the wall (dual GTX 780 = 420 watts, dual 7970 = 380 watts).
With my cases, temperatures for all cards are between 55 and 65C. Fan speed is set between 70 and 80%.

Andy


----------



## Avonosac

That would explain why my numbers are higher than yours. Boost 2 sucks, but there is nothing we can do about that on stock bios. I have mine clocked at 1150mhz for the moment to make sure the OC is stable. You seem to have a lot of experience with gpu based folding, does increasing memory speed help with ppd?


----------



## AndyE

Quote:


> Originally Posted by *Avonosac*
> 
> You seem to have a lot of experience with gpu based folding, does increasing memory speed help with ppd?


Thanks for the credit, but no, I am only 2 months into folding. I do have 11 GPUs folding, though.

And no, memory speed has no impact.


----------



## cam51037

How are you guys getting core 17 units? With the advanced flag my system is picking up core 15 units.


----------



## anubis1127

Quote:


> Originally Posted by *cam51037*
> 
> How are you guys getting core 17 units? With the advanced flag my system is picking up core 15 units.


Switch back to beta for now.


----------



## ChaosAD

I use the beta flag and I get the 7810 WU, at least for the last three I checked. TPF ~2m 50s and 110k+ PPD when I open FAHControl; then if I leave it open it drops to 60-70k with the same TPF.


----------



## Escatore

Quote:


> Originally Posted by *cam51037*
> 
> How are you guys getting core 17 units? With the advanced flag my system is picking up core 15 units.


Having this same problem.

I'm going to reset my client-type to beta and see if that helps.


----------



## martinhal

Quote:


> Originally Posted by *Escatore*
> 
> Having this same problem.
> 
> I'm going to reset my client-type to beta and see if that helps.


I read around here that the advanced server is having issues. I set mine back to beta and have been picking up 7811 and 7810 core 17 WU's.


----------



## snipekill2445

I just got an 8902 WU; there is no way this is right. We'll see what happens soon.


----------



## anubis1127

Quote:


> Originally Posted by *snipekill2445*
> 
> I just got an 8902 WU; there is no way this is right. We'll see what happens soon.


That looks about like what I got earlier, and it finished and uploaded: 15:01:03:WU01:FS00:Final credit estimate, 52250.00 points

I think they will likely adjust the WU, but for now, enjoy.


----------



## snipekill2445

I hope my team's 780 folder gets one of these 8902 units.


----------



## martinhal

I have thrown everything I own (hardware-wise) at them.

My TC rig has had two so far.


----------



## Matt*S.

So....I have this WU right now, showing me over 400k PPD....anyone offer any advice? I'm used to seeing a 0x17 core at 100k+ PPD.


----------



## cam51037

Quote:


> Originally Posted by *Matt*S.*
> 
> So....I have this WU right now, showing me over 400k PPD....anyone offer any advice? I'm used to seeing a 0x17 core at 100k+ PPD.


Those units give you crazy PPD, it's not the FAHControl glitching out.


----------



## anubis1127

Quote:


> Originally Posted by *Matt*S.*
> 
> So....I have this WU right now, showing me over 400k PPD....anyone offer any advice? I'm used to seeing a 0x17 core at 100k+ PPD.


Just enjoy them while they last, haha.

Woah, Saginaw, MI. I grew up around there, and my family still lives in the area.


----------



## WLL77

I too can attest to crazy PPD on the 8902 WU.
I'm getting 200k on a 7870 with a TPF of 3:12. Pic below...


----------



## cam51037

Quote:


> Originally Posted by *WLL77*
> 
> I too can attest to crazy ppd on 8902 wu.
> Am getting 200k on a 7870 with a tpf of 3:12. pic below....


Just imagine if I fired up my 7850, 7950 and GTX 670 all to fold...

You'd be getting kind of close to 1M PPD with all those running I think.


----------



## valkeriefire

I just got one also. I had some network issues this morning and I was down for about 2 hours, but I've uploaded my last 8900 and gotten a 8902. 402k PPD. I wonder how long these will last?


----------



## cam51037

Quote:


> Originally Posted by *valkeriefire*
> 
> I just got one also. I had some network issues this morning and I was down for about 2 hours, but I've uploaded my last 8900 and gotten a 8902. 402k PPD. I wonder how long these will last?


Hopefully forever....


----------



## jmrios82

I just finished an 8902 and picked up an 8900; the massive PPD is over for me. It was good while it lasted.


----------



## tictoc

Quote:


> Originally Posted by *cam51037*
> 
> Hopefully forever....


Folding Forum: Project 8902?
Quote:


> Re: Project 8902?
> Postby diwakar » Sun Jul 14, 2013 12:19 pm
> 
> This project was not supposed to be released, but due to an unexpected server restart some of these WUs were created and released. It is not being assigned now. I am still looking at the reason for the wrong number of steps in 8901.


----------



## El-Fuego

Are we using advanced or beta now?
I used to have beta, then I saw here it changed to advanced, and now people are telling me it's beta again?
lol, someone care to clarify?


----------



## anubis1127

Either one should work. Yesterday there was an issue with the server that was affecting the p8900s on advanced so a lot of people got a few core 15s.

Advanced would still be recommended over beta in general by Stanford, as beta are just that, beta. Once a WU moves from beta it goes to advanced and then to mainline.


----------



## jmrios82

Quote:


> Originally Posted by *El-Fuego*
> 
> Are we using advanced or beta now?
> I used to have beta, then I saw here it changed to advanced, and now people are telling me it's beta again?
> lol, someone care to clarify?


I was also using the advanced flag and getting 8900's with no issues, but yesterday all the WU's were core 15. I switched back to beta because of this, and today I got those lovely 8902s, but the 8902s are gone. Now with the beta flag I got an 8900; you can check my earlier post. So I think I'll stick with beta until the issues are solved.


----------



## mrwesth

Is it just me, or are these fahcore17 units a little bit quirky when you shutdown/restart your PC?

I get ~70k ppd on 8900's on my 660ti
and ~30k ppd on 8900's on my 560ti 448

UNLESS
I restart the PC before completing a WU, then PPD will fall all the way to ~10k-20k on each card. I'm restarting for maybe 1-2 minutes and then the client is back up. Even after running 20-30 minutes, PPD/TPF are both terrible. Any ideas?

I never saw this happen with fahcore15...


----------



## anubis1127

Quote:


> Originally Posted by *mrwesth*
> 
> Is it just me or are these fahcore17 units a little bit quirky when you shutdown/restart your pc.
> 
> I get ~70k ppd on 8900's on my 660ti
> and ~30k ppd on 8900's on my 560ti 448
> 
> UNLESS
> I restart the PC before completing a WU, then PPD will fall all the way to ~10k-20k on each card. I'm restarting for maybe 1-2 minutes and then the client is back up. Even after running 20-30 minutes, PPD/TPF are both terrible. Any ideas?
> 
> I never saw this happen with fahcore15...


It is due to them having bonus points: the Quick Return Bonus. They have a base point value of 6k, and then your credit scales up based upon how quickly the WU is completed after it was downloaded from the assignment server.

The core 15 units did not have a bonus, so you could pause them, shut down, resume them later, and it wouldn't affect your credit or PPD estimate.
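To make that scaling concrete, here is a rough sketch of the commonly cited FAH bonus formula, credit = base × √(k × deadline / elapsed), floored at the base value. The 6k base comes from the post above; the k-factor and deadline below are hypothetical illustrative values, since they vary per project:

```python
import math

def qrb_credit(base_points: float, k_factor: float,
               deadline_days: float, elapsed_days: float) -> float:
    """Quick Return Bonus: credit scales with the square root of how
    quickly the WU is returned, and never drops below base_points."""
    multiplier = math.sqrt(k_factor * deadline_days / elapsed_days)
    return base_points * max(1.0, multiplier)

# Illustrative only: 6k base, hypothetical k=26, 3-day deadline.
fast = qrb_credit(6000, 26, 3.0, 0.25)  # returned in 6 hours
slow = qrb_credit(6000, 26, 3.0, 2.0)   # returned in 2 days
```

This is also why a pause mid-WU hurts the estimate: elapsed time keeps growing while progress stands still, so the projected multiplier (and therefore PPD) drops.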


----------



## mrwesth

Quote:


> Originally Posted by *anubis1127*
> 
> It is due to them having bonus points: the Quick Return Bonus. They have a base point value of 6k, and then your credit scales up based upon how quickly the WU is completed after it was downloaded from the assignment server.
> 
> The core 15 units did not have a bonus, so you could pause them, shut down, resume them later, and it wouldn't affect your credit or PPD estimate.


I understand the QRB system, or at least I think I do. But a 1-2 minute shutdown should not cause a 50-60k PPD hit.

I'm seeing an 80% decrease in production. The TPF is going from 2 minutes to 17. It just doesn't make sense.


----------



## anubis1127

Quote:


> Originally Posted by *mrwesth*
> 
> I understand the QRB system, or at least I think I do. But a 1-2 minute shutdown should not cause a 50-60k PPD hit.
> 
> I'm seeing an 80% decrease in production. The TPF is going from 2 minutes to 17. It just doesn't make sense.


Ah, ok. What are you using for monitoring? HFM does a pretty decent job even after I pause to play a game.


----------



## mrwesth

Quote:


> Originally Posted by *anubis1127*
> 
> Ah, ok. What are you using for monitoring? HFM does a pretty decent job even after I pause to play a game.


Just the v7 client. Never seen the need for anything else.

Things are back to normal now but it had been folding all day at 30k ppd for 3930k+660ti+560ti after a restart and its the second time it has done that. Not quite sure what to think.


----------



## anubis1127

Quote:


> Originally Posted by *mrwesth*
> 
> Just the v7 client. Never seen the need for anything else.
> 
> Things are back to normal now but it had been folding all day at 30k ppd for 3930k+660ti+560ti after a restart and its the second time it has done that. Not quite sure what to think.


v7 FAHControl has never been terribly accurate for me. On my 2P it estimates my PPD off by up to ~100k ppd at times.


----------



## Avonosac

I had my Titan at 1202 for a day; looks like it got me 207k and some change. But I came home Monday morning to see my folding status bar in bright red reading FAILED. So I figure at some point the beta 7810 or 7811 didn't like my OC. Going to try 1163 and see if the stress drop helps with stability. It would be great if my PPD stays over 200k.


----------



## mrwesth

Weird, it's always been pretty dead on for me.

What gets me is TPF because I can look at the log and verify it is accurate--and it just doesn't make sense that a restart would cause the client to fold slower. Guess I'll just finish WU's before restarts even though that can be a pain.


----------



## Asiqduah

Quote:


> Originally Posted by *Avonosac*
> 
> I had my Titan at 1202 for a day; looks like it got me 207k and some change. But I came home Monday morning to see my folding status bar in bright red reading FAILED. So I figure at some point the beta 7810 or 7811 didn't like my OC. Going to try 1163 and see if the stress drop helps with stability. It would be great if my PPD stays over 200k.


Bleh I've got my 4GB Asus GTX680 clocked at 1228Mhz, and It barely pushes 97k. That's nice man!


----------



## anubis1127

Quote:


> Originally Posted by *Asiqduah*
> 
> Bleh I've got my 4GB Asus GTX680 clocked at 1228Mhz, and It barely pushes 97k. That's nice man!


That sounds about right, at 1202Mhz, I get ~ 92k on the p8900 work units.


----------



## Avonosac

Anyone else getting only 8900s even with -beta? These take a WHILE to get done and my ppd dropped a bit with em to about 198k


----------



## snipekill2445

The 89xx's are the good work units...


----------



## Avonosac

I was getting better PPD on the 7810s and 11s


----------



## anubis1127

Quote:


> Originally Posted by *Avonosac*
> 
> I was getting better PPD on the 7810s and 11s


I get better PPD on the 8900s vs the 7810s, odd. I only have a little baby gk104 though, must be different for the big boy Kepler.


----------



## ChaosAD

I was running BOINC/WCG with all 4c/8t at 100%, and with my 670 at 1215MHz I was getting ~75k PPD in FAH. Now I've set WCG to 90% to see if it'll make any difference in PPD. Btw, I got this error in the log; is it anything important, or just some random server communication issue?


----------



## anubis1127

Quote:


> Originally Posted by *ChaosAD*
> 
> I was running BOINC/WCG with all 4c/8t at 100%, and with my 670 at 1215MHz I was getting ~75k PPD in FAH. Now I've set WCG to 90% to see if it'll make any difference in PPD. Btw, I got this error in the log; is it anything important, or just some random server communication issue?


That message in the log is perfectly normal. You should see it whenever a new Work Unit is downloaded.


----------



## kpforce1

This looks to be an appropriate place to ask







Out of curiosity, I'm running a GTX 480 @ 745/1894 in my 24/7 folding rig. Is anyone else running a GTX 480 who can share what kind of PPD they are getting with the core 17 WU's? On the 89xx WU's I appear to get around 32-34k PPD. I was debating removing the beta flag and picking up some core 15 WU's to compare. Should I remove the beta flag and try picking up some core 15 WU's, or just stick with the 17 WU's?


----------



## anubis1127

Quote:


> Originally Posted by *kpforce1*
> 
> This looks to be an appropriate place to ask
> 
> 
> 
> 
> 
> 
> 
> Out of curiosity, I'm running a GTX 480 @ 745/1894 in my 24/7 folding rig. Is anyone else running a GTX 480 who can share what kind of PPD they are getting with the core 17 WU's? On the 89xx WU's I appear to get around 32-34k PPD. I was debating removing the beta flag and picking up some core 15 WU's to compare. Should I remove the beta flag and try picking up some core 15 WU's, or just stick with the 17 WU's?


That sounds about right for core 17. I was getting ~32k PPD on my 560ti 448 core @ 810mhz (that was its factory OC), and that is pretty close to a 480 in terms of performance.


----------



## gboeds

I have 2 GTX480s running with the beta flag, both at 800

Project 8900:

Avg. Time / Frame : 00:08:13 - 35,762.1 PPD

7810:

Avg. Time / Frame : 00:04:09 - 35,533.4 PPD

7811:

Avg. Time / Frame : 00:03:08 - 33,854.6 PPD

Core 15 WUs were not as good, best ones were the 7625 and 7626 which got from 31-34k ppd, most of the other core 15 WUs are around 24k ppd.


----------



## kpforce1

Quote:


> Originally Posted by *anubis1127*
> 
> That sounds about right for core 17. I was getting ~32k PPD on my 560ti 448 core @ 810mhz (that was its factory OC), and that is pretty close to a 480 in terms of performance.


Quote:


> Originally Posted by *gboeds*
> 
> I have 2 GTX480s running with the beta flag, both at 800
> 
> Project 8900:
> 
> Avg. Time / Frame : 00:08:13 - 35,762.1 PPD
> 
> 7810:
> 
> Avg. Time / Frame : 00:04:09 - 35,533.4 PPD
> 
> 7811:
> 
> Avg. Time / Frame : 00:03:08 - 33,854.6 PPD
> 
> Core 15 WUs were not as good, best ones were the 7625 and 7626 which got from 31-34k ppd, most of the other core 15 WUs are around 24k ppd.


Thanks for the input guys!







Just wanted to make sure I was using the right WU's for the 480, since it's a 24/7 rig at work.


----------



## bfromcolo

Has anyone tried a 6750 with these work units? I have one sitting around, but I would need to buy a bigger power supply to add it to my Windows system, and if it's going to get the same 5k it got on the other units there is no point. I can just continue to wait, not so patiently, for AMD folding in Linux.


----------



## cam51037

Quote:


> Originally Posted by *bfromcolo*
> 
> Has anyone tried a 6750 with these work units? I have one sitting around, but I would need to buy a bigger power supply to add it to my Windows system, and if it's going to get the same 5k it got on the other units there is no point. I can just continue to wait, not so patiently, for AMD folding in Linux.


I think it might get less than 5k.


----------



## bfromcolo

Quote:


> Originally Posted by *cam51037*
> 
> I think it might get less than 5k.


That's what I am afraid of, I guess I can stop being lazy and actually plug it in there and find out.

Not that running to Microcenter and buying a power supply wouldn't be fun and all, and who knows what else I might find in there.


----------



## bfromcolo

6750 folding results on 8900 work unit.

ETA - 2.2 days
TPF - 32:14
Est credit - 10299
Est PPD - 4601

So >4x as long as my 7850 to run for 11% of the points.

Glad I decided to try it out before I bought a new power supply; investing to get this GPU folding doesn't make sense.


----------



## scubadiver59

Quote:


> Originally Posted by *bfromcolo*
> 
> 6750 folding results on 8900 work unit.
> 
> ETA - 2.2 days
> TPF - 32:14
> Est credit - 10299
> Est PPD - 4601
> 
> So >4x as long as my 7850 to run for 11% of the points.
> 
> Glad I decided to try it out before I bought a new power supply, *investing to get this GPU folding doesn't make sense*


Correct! Step up to a better GPU!!


----------



## scubadiver59

Quote:


> Originally Posted by *bfromcolo*
> 
> 6750 folding results on 8900 work unit.
> 
> ETA - 2.2 days
> TPF - 32:14
> Est credit - 10299
> Est PPD - 4601
> 
> So >4x as long as my 7850 to run for 11% of the points.
> 
> Glad I decided to try it out before I bought a new power supply, *investing to get this GPU folding doesn't make sense*


Correct! Step up to a better GPU!!

Edit: Hmm...another triple "auto" post!!!


----------



## bfromcolo

Quote:


> Originally Posted by *scubadiver59*
> 
> Correct! Step up to a better GPU!!


Yes, I'll keep my eye on the marketplace for a cheap used NVIDIA card I can stick in my Linux box, then I don't have to buy another power supply.


----------



## Escatore

These 8900s don't mess around



excuse the flatline - my 670 was temporarily borked.


----------



## valkeriefire

I am rarely getting 8900 units. I almost always get 7810s. Is this normal right now? Every once in a while I get an 8900, but I've also gotten a Core 16 or two.


----------



## Avonosac

I do better on the 7810/11s than 8900 .. plus I seem to like units which normally finish faster... feels like more is going on xD


----------



## error-id10t

Yeah, for me the 7810 are slightly better than 7811 and both are better than 8900s.


----------



## Asiqduah

Yea it seems yesterday my Asus GTX 680 4GB 1306Mhz started getting nothing but 7810s back to back... idk if this is good or bad... My PPD is staying the same at like 110k~, so I guess I shouldn't worry too much...


----------



## anubis1127

I would say that's a good thing


----------



## Avonosac

I would have to agree, any changes that have no negative impacts are OK in my book









Side note, does anyone with a 580 know if these core 17s or the 15s are better?


----------



## Shogon

Quote:


> Originally Posted by *Avonosac*
> 
> I would have to agree, any changes that have no negative impacts are OK in my book
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Side note, does anyone with a 580 know if these core 17s or the 15s are better?


I want to say 17s are better. I get around 45k PPD at 800 core on my 580. I'm thinking my core 15 numbers were in the 30ks.


----------



## Avonosac

Eggsellent.

If I have time to get the system set up tonight, I'll be running my HydroGen 580 on Ubuntu for some more points


----------



## Shogon

I'm on the v7 windows client so you might net some more PPD then me







Months ago I tried running Ubuntu for SMP folding; let's just say I never could work the darn thing, haha.


----------



## cam51037

Quote:


> Originally Posted by *Asiqduah*
> 
> Yea it seems yesterday my Asus GTX 680 4GB 1306Mhz started getting nothing by 7810 back to back.... idk if this is good or bad... My PPD is staying the same at like 110k~, so I guess I shouldn't worry too much...


You get 110k at those clocks? My 670 at 1280MHz gets around 100k on average I believe, might be off a bit but I think 110k for those clocks is a bit low. Have you oced your memory at all?


----------



## Avonosac

Quote:


> Originally Posted by *cam51037*
> 
> You get 110k at those clocks? My 670 at 1280MHz gets around 100k on average I believe, might be off a bit but I think 110k for those clocks is a bit low. Have you oced your memory at all?


As far as I know, memory OC doesn't help F@H at all..


----------



## DizZz

Quote:


> Originally Posted by *Avonosac*
> 
> As far as I know, memory OC doesn't help F@H at all..


Correct


----------



## cam51037

Oh oops, not sure why I have my memory oced then. Might as well put it to stock or even lower and take the core higher.


----------



## kpforce1

Quote:


> Originally Posted by *Avonosac*
> 
> I would have to agree, any changes that have no negative impacts are OK in my book
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Side note, does anyone with a 580 know if these core 17s or the 15s are better?


My 480 @ 750Mhz is pulling 43k+ on the 7810's at the moment


----------



## DizZz

Quote:


> Originally Posted by *cam51037*
> 
> Oh oops, not sure why I have my memory oced then. Might as well put it to stock or even lower and take the core higher.


Yeah good idea and downclocking the memory lowers gpu temps as well.


----------



## cam51037

Regarding the 7810 units, are they giving you guys lots of points as well?

My GTX 670 is estimated to get around 130k-140k PPD on these units, and it's been saying that for about half an hour now, so I don't think it's a glitch.


----------



## anubis1127

Quote:


> Originally Posted by *cam51037*
> 
> Regarding the 7810 units, are they giving you guys lots of points as well?
> 
> My GTX 670 is estimated to get around 130k 140k PPD on these units, and it's said that for about half an hour now, so I don't think it's a glitch.


FAHControl is terrible at estimating. Use HFM for more accurate estimates.


----------



## cam51037

Quote:


> Originally Posted by *anubis1127*
> 
> FAHControl is terrible at estimating. Use HFM for more accurate estimates.


Frock, my PPD just went down to 75k as I posted that.


----------



## Avonosac

Quote:


> Originally Posted by *cam51037*
> 
> Frock my PPD just went down to 75k PPD as I posted that. >


Core 17 units are not stable in the FAHControl estimates... there are like 3 general types of frames you can get: short, average, and long. On my Titan I see ~44 seconds, ~58 seconds, and ~1:14. FAHControl takes those numbers and just instantly multiplies them out over a 24-hour period with a bit of weighting, but not much. My Titan gets ~200k PPD +/- 10k, yet I see estimates of 110k-400k depending on how the last few frames have been (short-short-short for higher, long-long-long for lower).
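The swing Avonosac describes is easy to sketch numerically. The frame times below are his Titan examples; the credit value is a made-up placeholder for illustration:

```python
def est_ppd(tpf_seconds, credit, frames=100):
    # Naive extrapolation: assume every remaining frame takes this long
    return credit * 86400 / (tpf_seconds * frames)

frame_times = [44, 58, 74, 58, 44, 58]   # short / average / long frames

# FAHControl-style instantaneous estimate swings with the last frame:
swing = [est_ppd(t, 13000) for t in frame_times]

# Averaging over the whole run is far more stable:
avg_tpf = sum(frame_times) / len(frame_times)
stable = est_ppd(avg_tpf, 13000)

print(round(max(swing) - min(swing)), round(stable))
```

With these placeholder numbers the instantaneous estimate spans over 100k PPD peak-to-trough while the averaged figure sits near 200k, which is why HFM's averaged TPF is the better number to trust.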


----------



## Asiqduah

Does anyone know what happened with the 8900 WU's? Are they still around?


----------



## DizZz

Quote:


> Originally Posted by *Asiqduah*
> 
> Does anyone know what happened with the 8900 WU's? Are they still around?


I've been getting them non stop on both my 680 and 660ti for the last 4 days (literally nothing else) and I'm using the Beta flag.


----------



## Wheezo

I'm using the advanced flag and get them a lot, just grabbed one ten minutes ago.


----------



## gboeds

Quote:


> Originally Posted by *Asiqduah*
> 
> Does anyone know what happened with the 8900 WU's? Are they still around?


folding 3 of them right now


----------



## Avonosac

Been getting mostly 7810/11s and I'm loving it.


----------



## Asiqduah

Hrm I've gotten nothing but 7810/11s for the past 47 WUs, after having nothing but 8900s for like 30+ WUs before that. Weird stuff lol.

Edit: I'm using the beta flag, but again I'm not too worried, I'm getting about the same PPD as I was with the 8900s.


----------



## Avonosac

I just hate the 8900s because they take my computer for a 4 hour ride. When I get home from work, if a WU just started.. I have 0 time to game that night before I pass out. With the 7810/11s I can at least finish a WU, game for an hour then sleep.


----------



## cam51037

Quote:


> Originally Posted by *Avonosac*
> 
> I just hate the 8900s because they take my computer for a 4 hour ride. When I get home from work, if a WU just started.. I have 0 time to game that night before I pass out. With the 7810/11s I can at least finish a WU, game for an hour then sleep.


lol 4 hours, that's all? My 670 takes 7+ hours on them.


----------



## valvehead

Quote:


> Originally Posted by *Avonosac*
> 
> I just hate the 8900s because they take my computer for a 4 hour ride.


4? Try 12!









In spite of the fact that it takes my 580 12 hours on a 8900, I like them better than other Core 17 units. They give me the best PPD, and I can set it to fold one unit overnight. With the shorter units it feels like a waste of time to fold a single unit, and I don't want to let it run lest I get stuck with an 8900 afterwards.

I'd love to get a 780 + a waterblock, but that's a bit out of the budget right now.


----------



## Avonosac

:| It's more like 3 hours 45-50 minutes, but still... I hate being stuck off the system when I get home. With the 7810/11s I have the option to turn F@H off at a convenient time without losing bonus points.

Your 670 also cost a lot less than my titan did


----------



## snipekill2445

Gaming after work?

Ain't nobody got time fo dat!


----------



## snipekill2445

Double Post -_-


----------



## G3RG

What kind of ppd/tpf are you guys seeing on your 670/680s with the 8900 wu?

I'm getting 4:10-4:12, which is 97-99k ppd.


----------



## Asiqduah

Quote:


> Originally Posted by *G3RG*
> 
> What kind of ppd/tpf are you guys seeing on your 670/680s with the 8900 wu?
> 
> I'm getting 4:10-4:12, which is 97-99k ppd.


When I was getting 8900s I was getting like 110-115k PPD.

Edit: On my Asus GTX680 4GB clocked at 1306Mhz that is.


----------



## TheBadBull

"boo hoo 12 hours"

they are like this on my 5770










that's 7% in


----------



## anubis1127

Quote:


> Originally Posted by *TheBadBull*
> 
> "boo hoo 12 hours"
> 
> they are like this on my 5770
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> that's 7% in


Ouch. At that point you may be better off just folding core 16.


----------



## G3RG

Quote:


> Originally Posted by *Asiqduah*
> 
> When I was getting 8900s I was getting like 110-115k PPD.
> 
> Edit: On my Asus GTX680 4GB clocked at 1306Mhz that is.


I'm currently at 1359mhz.

My 1372mhz oc isn't folding stable :[


----------



## Avonosac

Folding on that old of an AMD GPU... I have heard there wasn't really a point... like it wasn't worth the energy you spend on it.


----------



## Asiqduah

Quote:


> Originally Posted by *Avonosac*
> 
> Folding on that old of an AMD gpu.. I have heard there wasn't really a point... like.. it wasn't worth the energy you spend on it..


Yea that's why I stopped folding on my CPU, I was getting like 10k PPD from it running at 100% all day.


----------



## IvantheDugtrio

Quote:


> Originally Posted by *Avonosac*
> 
> I just hate the 8900s because they take my computer for a 4 hour ride. When I get home from work, if a WU just started.. I have 0 time to game that night before I pass out. With the 7810/11s I can at least finish a WU, game for an hour then sleep.


You know you can pause the work units or let them stop on their own when you want to game right?


----------



## gboeds

Quote:


> Originally Posted by *IvantheDugtrio*
> 
> You know you can pause the work units or let them stop on their own when you want to game right?


lol. of course they know....but pausing WUs that have QRB costs points....


----------



## Avonosac

Thanks gboeds


----------



## snipekill2445

I pause them. If I pause for an hour, it doesn't lose enough points for me to care.


----------



## Avonosac

Quote:


> Originally Posted by *snipekill2445*
> 
> I pause them, If I pause for an hour it doesn't lose enough points to a point I'd care.


I usually have enough other things to do that I can blow an hour and a half and let the WU finish for max points... it's annoying when it's an 8900 and I have to pause it.


----------



## anubis1127

Quote:


> Originally Posted by *Avonosac*
> 
> I have enough things to usually do to blow an hour and a half and let the WU finish for max points... annoying when its an 8900 and I have to pause it.


It can be, I paused a p8900 earlier to game for maybe an hour or so, and lost ~3k from the estimated credit.
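The credit lost to pausing comes from the Quick Return Bonus: credit scales with the square root of how quickly the unit is returned, so idle time eats straight into the bonus. A sketch of the published bonus formula (the base credit, k factor, and deadline below are placeholder values for illustration, not real project constants):

```python
import math

def qrb_credit(base, k, deadline_days, elapsed_days):
    """Quick Return Bonus: credit grows with the square root of how
    fast the WU comes back (the published F@H bonus formula)."""
    return base * max(1.0, math.sqrt(k * deadline_days / elapsed_days))

# All constants below are placeholders for illustration:
base, k, deadline = 2500, 0.75, 6.0

on_time = qrb_credit(base, k, deadline, 4 / 24)   # returned in 4 hours
paused = qrb_credit(base, k, deadline, 5 / 24)    # one extra hour paused
print(round(on_time - paused))  # credit lost to the one-hour pause
```

Note the bonus never drops below 1x the base credit, so a paused WU still pays something; it just pays noticeably less the longer it sits.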


----------



## Avonosac

Meh, I don't game for an hour lol. When I set aside time to game, I'm usually going to be there for 3-4 hours, so it is an annoyance, but admittedly a small one.


----------



## STW1911

I thought it was about the cause, not the points. If I have something to do that requires a lot of use out of my GPU, I just pause it and I'm good to go. As long as the WUs are turned in before the deadline and aren't bad WUs, it's all good. Who cares about the points? It's about the cause, not the points. Would you still fold if there were no points? Why did you start to fold? If people only started folding for the points and the bragging rights of "mine gets more than yours" or "I have more points than you", then I hope they have nothing but problems while trying to fold. Just my 2 cents on the situation, and maybe something to think about.


----------



## Avonosac

Enjoying the competitive nature of the science is somehow a bad thing now? Do you get upset with everyone in the TC as well?


----------



## gboeds

and the whole reason behind the Quick Return Bonus is to encourage faster completion of the work....which IS good for the science....or did you think Stanford made QRBs just to make more points?

Not saying there is anything wrong with using your PC for what you want to when you want to, but others who decide to wait till a WU is finished are not necessarily folding for the wrong reasons.


----------



## snipekill2445

Quote:


> Originally Posted by *STW1911*
> 
> I thought it was about the cause, not the points.


To be honest with you, I couldn't care less about the 'cause', I only fold for the competitiveness.


----------



## mrwesth

Quote:


> Originally Posted by *snipekill2445*
> 
> To be honest with you, I couldn't care less about the 'cause', I only fold for the competitiveness.


...fully aware that I'm treading a delicate line/issue here, but I'll ask:
If competition is your only motivation, then surely there are better options that channel the competitive spirit? I mean, F@H production on a very basic level comes down to the question "how much do you want to spend?"

With that said, I hope the "cause" means something to you in some part. Whether that cause is protein folding research, distributed computing in general, support of family/friends, or the goal of progress in general.

The only reason I bring all that up is that my main motivation for folding is probably my own interest in new technology, and F@H is a good excuse to upgrade and try out new hardware. While not the most noble of reasons, it is indirectly related to the cause... and does directly contribute to it. Additionally, all the feel-good reasons and competitiveness do factor in on some level.

So I guess to complete my sort-of-thought-vomit I commend you for being honest with your motivation.


----------



## snipekill2445

I haven't spent a cent on folding. I just happen to have a PC that can earn a lot of points.

As for the cause, I really only fold for points. If the points went away, you can bet I would too. (I know how much of a muppet that makes me sound.)


----------



## scubadiver59

Quote:


> Originally Posted by *snipekill2445*
> 
> I haven't spent a cent on folding. I just happen to have a PC that can earn alot of points.
> 
> As for the cause, I really only fold for points. If the points went away, you can bet I would too. (I know how much a muppet that makes me sound)


To each his own: if the volunteerism for research is a byproduct of your need for competitiveness, then who are we to complain!!


----------



## Donkey1514

Quote:


> Originally Posted by *scubadiver59*
> 
> To each his own: if the *volunteerism for research is a byproduct of your need for competitiveness*, then who are we to complain!!


^^^THIS


----------



## scubadiver59

Quote:


> Originally Posted by *Donkey1514*
> 
> ^^^THIS


Should I challenge him to a race to 500 million?


----------



## anubis1127

I like rice.


----------



## Donkey1514

I like your mommmmmmmmmmmmmmmmmmmmmmm..........................


----------



## BWG




----------



## kpforce1

Quote:


> Originally Posted by *scubadiver59*
> 
> Should I challenge him to a race to 500 million?


lol that would be like a stock 1989 Ford Escort vs the 'LaFerrari' supercar







.... but hey, LOOK


----------



## snipekill2445

Quote:


> Originally Posted by *scubadiver59*
> 
> Should I challenge him to a race to 500 million?


It's on!

*unleashes beastly 5450 to fold alll the points!*


----------



## BWG

500 million not 500 points pony









LOL


----------



## snipekill2445

You're just jelly of the mighty 5450!


----------



## d3cryptncompute

Quote:


> Originally Posted by *bfromcolo*
> 
> 6750 folding results on 8900 work unit.
> 
> ETA - 2.2 days
> TPF - 32:14
> Est credit - 10299
> Est PPD - 4601
> 
> So >4x as long as my 7850 to run for 11% of the points.
> 
> Glad I decided to try it out before I bought a new power supply, investing to get this GPU folding doesn't make sense,


Which AMD driver are you using? I've been folding with the 13.1 AMD drivers at 30-35k PPD (it varies) per 6990.


----------



## bfromcolo

Quote:


> Originally Posted by *d3cryptncompute*
> 
> Which AMD driver are you using? I've been folding with 13.1 AMD drivers at 30-35 PPD (variably per 6990).


That's with 13.4. I get 42K PPD with my 7850 on the 8900 work units. I might be able to improve the 6750 performance with an older driver or SDK, but I doubt it would be much and it would affect the 7850.


----------



## anubis1127

Quote:


> Originally Posted by *bfromcolo*
> 
> That's with 13.4. I get 42K PPD with my 7850 on the 8900 work units. I might be able to improve the 6750 performance with an older driver or SDK, but I doubt it would be much and it would affect the 7850.


You are correct, with a modded driver you could slightly improve the performance on the 6750, but IMO it's probably not even worth messing with. For core 17 WUs the only AMD cards that seem to be worth it are the 78xx series and 79xx series.


----------



## Hemi177

What did you guys find to be best for folding on 7900 series? I am on 13.8 running my 7950 at 1025/1275 and getting 91K PPD on an 8900 unit. Wondering if I can get a bit more out of this by changing drivers.


----------



## DullBoi

I am truly impressed with these cores









2 years of no folding, only to return and do almost 3.5x my daily avg (55k) from 2011









It is rather great that more than a year's worth of folding points (from 2010-2011) can now be achieved in about 40 days


----------



## hazara

Yeah man, I understand what you mean. I have been folding for years - used to be in the top 200, but then the gfx folding came out... I folded for a while on my 1950, but now my 7770 is churning out WUs like mad


----------



## bfromcolo

I picked up a used 460 to mess with in my Linux rig. What drivers are people using under Ubuntu for these?

Thanks


----------



## arvidab

I'm running 313.30 on my Mint 15 with my 560Ti; you have to use the proprietary Nvidia drivers to make it work though.


----------



## bfromcolo

Quote:


> Originally Posted by *arvidab*
> 
> I'm running 313.30 on my Mint 15 with my 560Ti, you have to use the proprietary Nvidia drives to make it work though.


Thanks, I see NVIDIA is up to 319.49 with certified drivers for Linux x64. I guess what I was wondering is if there is any need to run specific drivers for proper performance with these WUs?


----------



## arvidab

I'd go with the newest certified driver. I'm not aware of any problems with them.


----------



## RushiMP

Are Core 17 work units heavily dependent on the CPU speed? They always seem to peg one core at 100% for each GPU that is folding.

I am trying to decide if I need to overclock an X5650 to adequately support 4 Fermi cards in a new GPU Folder I am assembling.


----------



## Zagen30

Not that I'm aware of. I think a lot of the "100% of one core" contains a lot of empty cycles; I know people have said they can feed 2 GPUs off of one core without any drop in production.
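That "busy but mostly empty cycles" behaviour is easy to reproduce: a thread that polls a flag in a tight loop shows ~100% core usage while doing nothing useful, whereas a thread that blocks on the same condition uses almost none. This is purely illustrative of polling in general, not NVIDIA's actual driver code:

```python
import threading
import time

done = threading.Event()

def busy_poll():
    # Spins flat-out, pegging one core at ~100% even though it does no
    # useful work -- roughly how a polling runtime looks in Task Manager.
    while not done.is_set():
        pass

def blocking_wait():
    # Sleeps inside the OS until signalled; near-zero CPU use.
    done.wait()

spinner = threading.Thread(target=busy_poll)
waiter = threading.Thread(target=blocking_wait)
spinner.start()
waiter.start()
time.sleep(0.2)   # both threads "wait" for the same 0.2 s
done.set()
spinner.join()
waiter.join()
```

Both threads finish at the same moment, but only the spinner burned a core the whole time, which is consistent with one polling core being able to feed two GPUs without hurting production.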


----------



## RushiMP

Quote:


> Originally Posted by *Zagen30*
> 
> Not that I'm aware of. I think a lot of the "100% of one core" contains a lot of empty cycles; I know people have said they can feed 2 GPUs off of one core without any drop in production.


Interesting. I was always worried about that. I even swapped out an X5698 for an overclocked L5520 in one of my GPU folders. I had it running at 3.6 with 1.25V, but the power supply is not happy about the 3 GTX 480s in there, so I just lowered it to 3.2 and 1.1V. Let's see how it goes.


----------



## arvidab

Quote:


> Originally Posted by *RushiMP*
> 
> Are Core 17 work units heavily dependent on the CPU speed? They always seem to peg one core at 100% for each GPU that is folding.
> 
> I am trying to decide if I need to overclock an X5650 to adequately support 4 Fermi cards in a new GPU Folder I am assembling.


Nvidia takes a core on core_17 while AMD takes next to zilch. The tables have turned from the days of old.
It has to do with the implementation of Nvidia's OpenCL drivers, iirc.


----------



## RushiMP

Interesting. Cost- and electricity-wise it still seems hard to beat Nvidia, at least for folding; I have heard it is a very different story for mining.


----------



## bfromcolo

I am getting "failed to access core package" errors in Linux. I tried setting the flag to beta and advanced, and removing and re-adding the slot. It downloads a different work unit but gives the same error.

It downloads the work unit and then gives me this:

18:36:42:WU00:FS00:Download complete
18:36:42:WU00:FS00:Received Unit: id:00 state:DOWNLOAD error:NO_ERROR project:8900 run:438 clone:2 gen:9 core:0x17 unit:0x0000000f028c126651a68818e6ce6ea7
18:36:43:WU00:FS00:Downloading core from http://www.stanford.edu/~pande/Linux/x86/NVIDIA/Fermi/Core_17.fah
18:36:43:WU00:FS00:Connecting to www.stanford.edu:80
18:36:48:ERROR:WU00:FS00:Exception: Failed to access core package.

Edit - requires 64 bit and I mistakenly installed 32...
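For anyone hitting the same wall: the failure was a 64-bit-only core package on a 32-bit install. A quick check of both the kernel and the runtime bitness (the pointer-size trick is generic Python, nothing F@H-specific; treat it as a hypothetical diagnostic):

```python
import platform
import struct

# The kernel may be 64-bit while the installed userland/client is 32-bit,
# which is exactly the mismatch that breaks the Core_17 download.
kernel_64bit = platform.machine() in ("x86_64", "AMD64")
userland_bits = struct.calcsize("P") * 8   # pointer size of this runtime

print(kernel_64bit, userland_bits)
```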


----------



## PimpSkyline

For the past week the Stanford servers have been going Offline for me... what's going on? This is not how i wanted to start the new month with No WU's due to crap.


----------



## bfromcolo

Quote:


> Originally Posted by *PimpSkyline*
> 
> For the past week the Stanford servers have been going Offline for me... what's going on? This is not how i wanted to start the new month with No WU's due to crap.


They were moving some servers around, should be done now I think. Check the folding forum for details.


----------



## WiSK

Anyone noticed ppd loss with NVidia beta drivers 331.40?


----------



## [CyGnus]

Quote:


> Originally Posted by *WiSK*
> 
> Anyone noticed ppd loss with NVidia beta drivers 331.40?


With 327.23 I get 81k PPD, and with 331.40 I dropped to 34k, so 327.23 it is


----------



## anubis1127

Woah, is that on the same WU?


----------



## [CyGnus]

yes P7811


----------



## anubis1127

Ouch! That is harsh.


----------



## WiSK

Quote:


> Originally Posted by *[CyGnus]*
> 
> with 327.23 i have 81K PPD and with 331.40 i dropped to 34K so 327.23 it is


Thanks for confirming. I did a whole bunch of updating and changes to BIOS and stuff yesterday. Then noticed the ppd loss, so when I got home from work I started undoing all those changes one by one. Of course, the NVidia beta driver was the last thing I rolled back. How naïve...


----------



## Zagen30

These beta drivers are weird. They benefited my 780, as they were faster than 327.23 and almost as fast as 320.49. Over at EVGA there's another report of a 780 doing rather well with them, as well as a few non-780 Keplers that are seeing tanking similar to what's reported here.


----------



## anubis1127

Hmm, interesting. I'll give them a go on my 780 after the BGB finishes, fold a couple WUs, and report back my findings.


----------



## error-id10t

So the beta drivers are pretty bad.. do we know why yet? I see it's showing CUDA driver 6000, I think it was 50xx something previously, could this be causing the very low points on the beta?


----------



## arvidab

Idk about that; core_17 uses OpenCL. Maybe it's included in the CUDA driver version.


----------



## martinhal

What is the best AMD driver for this wu ?


----------



## h46it

I thought I would contribute on the AMD folding points. I just picked up a Sapphire R9 280x Vapor-X this is my first WU with the card. PPD is fluctuating a bit, but this seems to be where it sits mostly:

http://imgbox.com/adsFXgho


----------



## anubis1127

Nice. Yeah, F@H is pretty terrible at estimating PPD on the p78xx WUs. Right now mine is at 149k even though I'm really getting around 112k on my 680.


----------



## h46it

Yeah it sucks... goes all over the place for 78xx WU's. Hopefully whatever is next for WU's will be able to predict better than these. As we speak I'm up to 140k PPD!


----------



## anubis1127

If you want a more accurate estimate, HFM.net does a good job. You can check it out here:

https://code.google.com/p/hfm-net/


----------



## h46it

Cool thanks!


----------



## RushiMP

Where have all the 8900 WU gone? Any more Zeta core units at this time?


----------



## anubis1127

Quote:


> Originally Posted by *RushiMP*
> 
> Where have all the 8900 WU gone? Any more Zeta core units at this time?


Right here homey:


----------



## RushiMP

It is strange. My FERMI's are all getting 8900, my Titans are chewing on some suboptimal Core 15.


----------



## anubis1127

Quote:


> Originally Posted by *RushiMP*
> 
> It is strange. My FERMI's are all getting 8900, my Titans are chewing on some suboptimal Core 15.


You are not alone, my 680 was getting "suboptimal" core 15 WUs in Windows as well. A lot of people are on gk104, gk106, and gk110 variants.


----------



## PimpSkyline

My Fermi is bouncing between the 15s and 17s; luckily my card is holding strong


----------



## BWG

Does Ubuntu play NBA 2K13?


----------



## anubis1127

Quote:


> Originally Posted by *BWG*
> 
> Does Ubuntu play NBA 2K13?


It can.


----------



## BWG

I'm in then.


----------



## RushiMP

I was knocking on the 1 million ppd club for a few days there, but now with my Titans getting punked I have shut them down and am just playing with the overclock on my Haswell.


----------



## valvehead

Grrrrr.


Code:


******************************* Date: 2013-12-20 *******************************
01:40:09:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
01:40:09:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
01:40:09:ERROR:WU00:FS01:Exception: Could not get an assignment
01:40:10:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
01:40:10:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
01:40:10:ERROR:WU00:FS01:Exception: Could not get an assignment
01:41:58:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': 10002: Received short response, expected 272 bytes, got 23
01:42:54:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': 10002: Received short response, expected 272 bytes, got 23
01:42:54:ERROR:WU00:FS01:Exception: Could not get an assignment
01:43:48:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': 10002: Received short response, expected 272 bytes, got 23
01:44:29:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
01:44:29:ERROR:WU00:FS01:Exception: Could not get an assignment
01:45:31:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
01:45:32:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
01:45:32:ERROR:WU00:FS01:Exception: Could not get an assignment
01:49:46:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
01:49:46:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
01:49:46:ERROR:WU00:FS01:Exception: Could not get an assignment
******************************* Date: 2013-12-21 *******************************
01:56:37:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
01:56:38:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
01:56:38:ERROR:WU00:FS01:Exception: Could not get an assignment
02:07:43:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
02:07:43:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': Empty work server assignment
02:07:43:ERROR:WU00:FS01:Exception: Could not get an assignment





Both of my 670's were getting 8900's just fine until this happened. I would gladly take Core 15 units over *NO* units.









EDIT: Stanford's assignment server is back up. I've got 8900's on all three of my GPU's.
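When the log fills with assignment failures like these, it helps to tally the distinct failure reasons to see whether it's one flaky response or a general outage. A small sketch over an excerpt of the log above:

```python
import re
from collections import Counter

log_excerpt = """\
01:40:09:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': Empty work server assignment
01:40:09:ERROR:WU00:FS01:Exception: Could not get an assignment
01:41:58:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:80': 10002: Received short response, expected 272 bytes, got 23
01:42:54:WARNING:WU00:FS01:Failed to get assignment from 'assign-GPU.stanford.edu:8080': 10002: Received short response, expected 272 bytes, got 23
"""

# Count each distinct failure reason reported by the assignment server
reasons = Counter()
for line in log_excerpt.splitlines():
    m = re.search(r"Failed to get assignment from '[^']+': (.+)", line)
    if m:
        reasons[m.group(1)] += 1

for reason, n in reasons.most_common():
    print(n, reason)
```

"Empty work server assignment" means the server answered but had no WUs to hand out, while the short-response errors point at the server-move flakiness mentioned earlier in the thread.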


----------



## Shogon

Past 3 days nothing but core 15s. I've tried -advanced, -beta, even no GPU flags; same old 15s lol.


----------



## dman811

Quote:


> Originally Posted by *Shogon*
> 
> Past 3 days nothing but core 15s, I've done -advanced, -beta, even no gpu flaga, same old 15s lol.


Same here. I have 5 hours 39 minutes left of a 7624 and then as valvehead said, I hope I get an 8900.


----------



## sayaman22

Quote:


> Originally Posted by *dman811*
> 
> Same here. I have 5 hours 39 minutes left of a 7624 and then as valvehead said, I hope I get an 8900.


I'm in the same boat. Been happening since the 19th.


----------



## dman811

Still dealing with 762x units. At least they are better than the 8018 units my GTS 450 gets all the time anyways.


----------



## Shogon

All I want for Christmas is my core 17s


----------



## anubis1127

Lol.


----------



## msgclb

Quoting VP:

Quote:


> We have several new core17 projects on the way as well as more core17 WUs for existing projects. I'll get an update from team members to see where that stands.


----------



## dman811

Quote:


> Originally Posted by *Shogon*
> 
> All I want for Christmas is my core 17s


Same.









Quote:


> Originally Posted by *msgclb*
> 
> Quoting VP:
> Quote:
> 
> 
> 
> We have several new core17 projects on the way as well as more core17 WUs for existing projects. I'll get an update from team members to see where that stands.

Sounds good to me.


----------



## Shogon

Hope to get them soon! My 690 is working on a 8018 and a 7627 right now.


----------



## lanofsong

Finally, one of my 660ti's just picked up a 8900 WU after a long, long line of 76xx's.


----------



## Shogon

Quote:


> Originally Posted by *lanofsong*
> 
> Finally, one of my 660ti's just picked up a 8900 WU after a long. long line of 76xx's.


Wewt!

Do you have any GPU flags activated? 1 of my 690 GPUs just got one! Hazaah!


----------



## lanofsong

Using beta flag.


----------



## Shogon

Same here and so far 8900s


----------



## WiSK

Getting p8900s now again too


----------



## RushiMP

Aw, poopsicle. I got some 17s, now it's back to 15s for my Titans.


----------



## BWG

RushiMP, do you want to fold one of those titans for my TC Team?


----------



## RushiMP

PM Sent


----------



## Nnimrod

Hello, I have a few questions









GTX 580 @ 900Mhz, 1055T @3.8Ghz

My 580 almost always folds 8018s and gets right at 20k PPD, although once I got an 8900 and saw a peak PPD of 41k but an average PPD of only 14k. That's a Core 17 WU, right? Is pausing it what causes the crappy performance?

my 1055T folds several different units, I'll list them

6098 - *12.4k* by far the most common
7647 - *9.3k* what is this crap :/
7808 - 12.1k
8573 - 12.4k
9000 - *14.3k* - Is there a way to get more of these?
9002 - 12.9k
9005 - 12.9k
9006 - 13.7k

My main question is about Core 17 WU's for the 580, as I understand that I should be able to get 40-45k PPD with them. Do they not work correctly if paused? I pause F@H whenever I'm using my computer, as I game quite a bit. I fold when at work, when sleeping, etc. Also, what drivers should I be using? I tried the first beta driver, but it broke Afterburner, and therefore my OC.

Also, any help is welcome on the 1055T, although its PPD does seem pretty insignificant compared to a 580.


----------



## tictoc

core_17 WU's are like SMP WU's and earn base points + QRB (Quick Return Bonus). Pausing core_17 WU's will significantly lower the points, since the points given are based on how fast you can complete and return the units. See the Folding@home FAQ: Points.
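For anyone curious what that bonus looks like numerically, here is a rough sketch of the QRB math. It follows the published bonus formula (credit = base × max(1, √(k · deadline / elapsed))), but the `k` constant and point/deadline values below are illustrative numbers, not real project settings:

```python
import math

def qrb_points(base_points, k, deadline_days, elapsed_days):
    """Approximate Folding@home credit under the Quick Return Bonus.

    Credit = base * max(1, sqrt(k * deadline / elapsed)), so the faster
    a WU is returned relative to its deadline, the bigger the multiplier.
    The multiplier never drops below 1, so a slow return still earns base.
    """
    multiplier = max(1.0, math.sqrt(k * deadline_days / elapsed_days))
    return base_points * multiplier

# Pausing doubles the wall-clock time, and the PPD hit is worse than 2x:
# points per unit drop by sqrt(2), AND you finish half as many units/day.
fast = qrb_points(3663, 26.4, 12.7, 1.0)   # returned in 1 day
slow = qrb_points(3663, 26.4, 12.7, 2.0)   # paused a lot, took 2 days
print(fast / 1.0 > 2 * (slow / 2.0))       # prints: True
```

This is why the PPD impact is non-linear: halving your completion time more than doubles PPD, and vice versa.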

The numbers on your 1055t are about what you should be getting at that CPU speed.

I don't have any NVIDIA cards to fold on, so I am not sure which driver is the best for a 580.


----------



## valvehead

Quote:


> Originally Posted by *Nnimrod*
> 
> Hello, I have a few questions
> 
> 
> 
> 
> 
> 
> 
> 
> 
> GTX 580 @ 900Mhz, 1055T @3.8Ghz
> 
> My 580 almost always folds 8018s, and gets right at 20k PPD, although once I got an 8900, and got peak PPD 41k, avg. PPD 14k. This is a core 17 WU, right? is pausing this what causes it to get crappy performance?
> 
> my 1055T folds several different units, I'll list them
> 
> 6098 - *12.4k* by far the most common
> 7647 - *9.3k* what is this crap :/
> 7808 - 12.1k
> 8573 - 12.4k
> 9000 - *14.3k* - Is there a way to get more of these?
> 9002 - 12.9k
> 9005 - 12.9k
> 9006 - 13.7k
> 
> My main question is about Core 17 WU's for the 580, as I understand that I should be able to get 40-45k PPD with them. Do they not work correctly if paused? I pause F@H whenever I'm using my computer, as I game quite a bit. I fold when at work, when sleeping, etc. Also, what drivers should I be using? I tried the first beta driver, but it broke Afterburner, and therefore my OC.
> 
> Also, any help is welcome on the 1055T, although its PPD does seem pretty insignificant compared to a 580.


Pausing any unit that has QRB (Quick Return Bonus) will seriously impact PPD. This includes GPU Core 17 units (8900, 7810, & 7811) and all CPU SMP units. The faster you complete them and return the results to Stanford's servers, the more points you get per unit. This means that the impact on PPD is non-linear; i.e. if you complete units twice as fast, you will get more than twice the PPD. The same goes for pausing units. If you take twice as long to complete units, your PPD will drop by more than half.

You should be using 327.23 or earlier for your 580. You should stay away from 331.xx drivers for now since they are bad for folding on any card that is not a 780, 780ti or Titan.

There's not much you can do about your 1055t. Folding on mainstream CPU's is barely worth it now that GPU Core 17 units are here.

On top of that, if you are folding Core 17 units on an nVidia GPU, you have to dedicate a CPU thread to that task for every GPU that is folding Core 17 units. For example, I have two GTX670's in my home server, and I had to reduce the CPU folding threads from 8 to 6. This caused a significant decrease in CPU PPD.


----------



## Nnimrod

Quote:


> Originally Posted by *valvehead*
> 
> Pausing any unit that has QRB (Quick Return Bonus) will seriously impact PPD. This includes GPU Core 17 units (8900, 7810, & 7811) and all CPU SMP units. The faster you complete them and return the results to Stanford's servers, the more points you get per unit. This means that the impact on PPD is non-linear; i.e. if you complete units twice as fast, you will get more than twice the PPD. The same goes for pausing units. If you take twice as long to complete units, your PPD will drop by more than half.
> 
> You should be using 327.23 or earlier for your 580. You should stay away from 331.xx drivers for now since they are bad for folding on any card that is not a 780, 780ti or Titan.
> 
> There's not much you can do about your 1055t. Folding on mainstream CPU's is barely worth it now that GPU Core 17 units are here.
> 
> On top of that, if you are folding Core 17 units on an nVidia GPU, you have to dedicate a CPU thread to that task for every GPU that is folding Core 17 units. For example, I have two GTX670's in my home server, and I had to reduce the CPU folding threads from 8 to 6. This caused a significant decrease in CPU PPD.


So it looks like all I can do for now is roll back to the 327.23 drivers and keep folding those 8018s.

Building a dedicated folding rig would be cool, but there are just so many things higher priority right now :s

Thanks for the help.


----------



## dman811

Back to the 8018s for me!


----------



## Mitche01

Using the Beta flag at the moment, but one of my GPUs is just about to pull an 8018!
Don't know what's going on there!


----------



## FlyingNugget

Quote:


> Originally Posted by *Mitche01*
> 
> Using the Beta flag at the moment, but one of my GPUs is just about to pull an 8018!
> Don't know what's going on there!


Same man.


----------



## Mitche01

Quote:


> Originally Posted by *FlyingNugget*
> 
> Same man.


Yep, all 4 of my 600-series cards have now pulled 8018 WUs, but at least we are still folding!


----------



## BWG

Well, at least I'm still pulling 8900's.


----------



## dman811

You and your 570 are stealing our units BWG


----------



## BWG

You could just install Ubuntu you know.


----------



## dman811

I also just realized that my HTPC stole an 8900 as well (GTS 450, no beta or advanced tags). Maybe they are making them standard units now and trying to really finish off the 8018s?


----------



## Mitche01

Quote:


> Originally Posted by *BWG*
> 
> Well, at least I'm still pulling 8900's.


You can really go off people!
But luckily we are back on 8900s again!


----------



## dman811

Ya, true. The thing I am happiest about with Core 17 on my HTPC is the temps on my GPU: on Core 15 they were in the high 60s, on Core 17 they are in the high 40s.


----------



## Nnimrod

So I switched over to 327.23, and happened to get an 8900. I was getting 36k PPD







I was also folding on the CPU at the same time, and only getting 9.8k PPD. Unfortunately, this was on a WU I have not had before (8569), so I don't know whether the lower-than-average CPU PPD is because the 8900 is more CPU-intensive than 8018s or because 8569 is just a low-PPD WU. Either way, I'll try pausing the CPU folding and just working on that 8900 when I go to work tonight.

Also, reverting to the 327.23 drivers broke Afterburner again >.> Now the OC I was using is gone, and I don't seem to be able to apply a new one. Are there older versions of Afterburner available for use with older Nvidia drivers? When I say Afterburner is broken, I mean it displays values for the temps and the memory usage, but everything else just reads "0", except GPU usage, which has a line at zero but is labeled 4294967296.


----------



## dman811

Back to the 8018s again...


----------



## BWG

8900


----------



## dman811




----------



## BWG

8900?









7660...


----------



## Widde

What's up with my R9 290? It's stuck at 0% GPU usage even though I've added client-type beta, and I'm only getting around 4k PPD from the GPU.


----------



## dman811

I have no clue why my school computer with a GT430 can pick up an 8900 but my 660 Ti can't find poop. 52 minutes and 34 seconds TPF, and I thought when my GTS 450 got one it was a long time.


----------



## RushiMP

I walked into my office this morning and was uneasy at how quiet it was. Dangit, it's going to be 30°F here tonight; give me an 8900 and take my electricity!


----------



## ZDngrfld

Quote:


> Originally Posted by *RushiMP*
> 
> it's going to be 30°F here tonight; give me an 8900 and take my electricity!


Sounds like a heatwave!


----------



## RushiMP

Quote:


> Originally Posted by *ZDngrfld*
> 
> Sounds like a heatwave!


LOL, coldest night of the year, most likely. Cold enough that I will have to shut down my office AC unit to prevent damage.


----------



## dman811

It's supposed to be -20°F tomorrow morning. No matter how much I hate them, it might be a good time to get some 8018s out of the way for me and keep my computer colder.


----------



## RushiMP

Quote:


> Originally Posted by *dman811*
> 
> It's supposed to be -20°F tomorrow morning. No matter how much I hate them, it might be a good time to get some 8018s out of the way for me and keep my computer colder.


My GPUs seem to run much warmer with anything other than 8900s.


----------



## dman811

Oh, mine too; that's my point: 8018s run hotter on my computer, so why not have my room warm for me when I get home?


----------



## RushiMP

Gotcha gotcha, it seems my rigs have spooled back up with 8900s


----------



## dman811

Mind throwing me one?


----------



## Captain_cannonfodder

8018's here.


----------



## Mitche01

Hmm... back on the core15s again!


----------



## BWG

867's and 5309's here


----------



## Mitche01

Two core15s down and back on the core17s! The supply must be getting up to speed.


----------



## Hemi177

Unable to pick up any units on my gpus


----------



## dman811

Same


----------



## Captain_cannonfodder

Might think about getting a faster 775 CPU.


----------



## RushiMP

Looks like we're back in business.


----------



## Captain_cannonfodder

Stats server is still on the blink.


----------



## Mitche01

Anyone know what the Core 17 10641 WUs are like? All my HFM shows is unknown, so I guess they are new!


----------



## msgclb

Read about it here...



Spoiler: Core 17 - 10461 WU



*New GPU Project 10461*

by *kyleb* » Wed Jan 08, 2014 5:31 pm

This is a Core17 project. In this project we are running simulations of the important cancer-related protein, EGFR, which will help to provide insight into how we might eventually develop more effective therapies for the various types of cancer associated with EGFR mutations.

3663 points
12.7 deadline
9.86 timeout

FYI this is the first project running on our workservers at Memorial Sloan Kettering Cancer Center, so people will probably notice the non-Stanford IP addresses. If people notice any strange server issues, it may be due to the new server location.



*Link*

I haven't gotten one of these WUs but I'll be keeping a close watch for one.


----------



## WiSK

Quote:


> Originally Posted by *Mitche01*
> 
> Anyone know what the Core 17 10641 WUs are like? All my HFM shows is unknown, so I guess they are new!


Edit | Preferences | Web Settings | Project Download URL = http://fah-web.stanford.edu/psummaryC.html
Tools | Download Projects from Stanford

Two new ones: 10640 and 10641


----------



## arvidab

Quote:


> Originally Posted by *Mitche01*
> 
> Anyone know what the Core 17 10641 WUs are like? All my HFM shows is unknown, so I guess they are new!


Quote:


> Originally Posted by *msgclb*
> 
> Read about it here...
> 
> 
> Spoiler: Core 17 - 10461 WU
> 
> 
> 
> *New GPU Project 10461*
> 
> 
> by *kyleb* » Wed Jan 08, 2014 5:31 pm
> This is a Core17 project. In this project we are running simulations of the important cancer-related protein, EGFR, which will help to provide insight into how we might eventually develop more effective therapies for the various types of cancer associated with EGFR mutations.
> 
> 3663 points
> 
> 12.7 deadline
> 
> 9.86 timeout
> 
> FYI this is the first project running on our workservers at Memorial Sloan Kettering Cancer Center, so people will probably notice the non-Stanford IP addresses. If people notice any strange server issues, it may be due to the new server location.
> 
> 
> *Link*
> 
> I haven't gotten one of these WUs but I'll be keeping a close watch for one.


Question is, how will the points look? Initial reports in that thread show very low PPD. I would expect that to change though, hopefully for the better.
I wish they would come out with some GPU-BA...

Quote:


> Originally Posted by *WiSK*
> 
> Quote:
> 
> 
> 
> Originally Posted by *Mitche01*
> 
> Anyone know what the Core 17 10641 WUs are like? All my HFM shows is unknown, so I guess they are new!
> 
> 
> 
> Edit | Preferences | Web Settings | Project Download URL = http://fah-web.stanford.edu/psummaryC.html
> Tools | Download Projects from Stanford
> 
> Two new ones: 10640 and 10641
Click to expand...

Was just about to post that. Only 10641 is active, though; 10640 is set to accept-only, meaning it will not be issued.


----------



## Mitche01

Well, so far no PPD estimates, but my TPF for these new WUs is about 30 secs longer than on 8900 WUs, and the score per WU is about the same.

3 x p8900 = 55k for 9:30 TPF
3 x p10641 = 55k for 10:00 TPF

So not much in it really.


----------



## dman811

That's better than these 8018s, so I am more than willing to take one if Stanford wants to assign me one in an hour and 15 minutes.


----------



## Captain_cannonfodder

Still munching through 8900's here.


----------



## Nnimrod

So I just removed my 580 and installed a 770. Configure > Slots > Add > select the GPU category, and under extra slot options I add client-type, advanced. I press Save, and get nothing. Nothing changes; no new GPU client running.

What am I doing wrong?


Spoiler: Warning: Spoiler!



02:47:44:ERROR:Exception: Option 'gpu-index' has no default and is not set.
02:47:45:ERROR:Exception: Option 'gpu-index' has no default and is not set.
02:47:45:Saving configuration to config.xml
02:47:45:ERROR:Exception: Option 'gpu-index' has no default and is not set.
[...]
02:48:05:ERROR:Exception: Option 'gpu-index' has no default and is not set.


Log ^ It says gpu-index is not set, but I tried the default (-1), 0, and 1. None of them worked.


----------



## BWG

I've never seen slot # -1


----------



## msgclb

Quote:


> Originally Posted by *BWG*
> 
> I've never seen slot # -1




Note that it's best to leave the slot as -1 to allow the client to choose the supported GPU, unless you're an *expert*.

When the client starts you'll never see -1.
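For anyone following along, gpu-index is a per-slot option in the v7 client's config.xml. A sketch of what a GPU slot pinned to a specific card might look like (the slot id and index value here are only examples; -1, the default, lets the client pick):

```xml
<config>
  <!-- v7 client: one <slot> element per folding slot -->
  <slot id="1" type="GPU">
    <!-- -1 (default) = let the client choose the supported GPU;
         0, 1, ... = pin this slot to a specific device -->
    <gpu-index v="0"/>
  </slot>
</config>
```

Stop FAHClient before editing config.xml by hand, or the running client may overwrite your changes when it saves its configuration.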


----------



## msgclb

I've searched for someone who had this problem and solved it, but I failed.

First, if you're going to set the slot to 0, 1, etc., you need to know which slot your card is in.

Your GPU is probably in slot 0, but since you tried most if not all of the slots, maybe the config file has been corrupted.

If I ran into this problem, I'd uninstall and do a clean install, with the hope that the install will correctly identify your hardware.

With that said, I'm going to bed!


----------



## Mitche01

Quote:


> Originally Posted by *Mitche01*
> 
> Well, so far no PPD estimates, but my TPF for these new WUs is about 30 secs longer than on 8900 WUs, and the score per WU is about the same.
> 
> 3 x p8900 = 55k for 9:30 TPF
> 3 x p10641 = 55k for 10:00 TPF
> 
> So not much in it really.


Damn it, spoke too soon... p10461 are bringing me 13k points per work unit... still better than 8018 but not as good as 8900s.

PPD dropped from 82k to 54k for the three GTX 650 Tis running in my folder.


----------



## Nnimrod

Reinstalling now...

After reinstalling, everything is going as it should: 99k PPD @ 1137MHz (stock) on an 8900. The GTX 770 knows how to fold.


----------



## Mitche01

Quote:


> Originally Posted by *Nnimrod*
> 
> reinstalling now...
> 
> after reinstalling, everything going as it should. 99k PPD @ 1137Mhz (stock) on an 8900. GTX 770 knows how to fold


Good news Nnimrod!


----------



## dman811

Should I try removing the client-type advanced and beta tags that I have been trying? My GTS 450 gets Core 17 units just fine without them, but my 660 Ti can't get them for cow poop.


----------



## Mitche01

I would drop the advanced but keep the beta flag


----------



## dman811

Quote:


> Originally Posted by *Mitche01*
> 
> I would drop the advanced but keep the beta flag


I only use one at a time, although neither allows me to grab a Core 17 of any sort.

Edit: Spoke too soon, just submitted an 8018 and picked up an 8900.


----------



## msgclb

I've got 3 NV cards that are running xubuntu and were running the core 17 8900 WU when available.

I just noticed that all three are now running the core 17 10461 WU.

TC GTX 780 @ 1110 MHz Boost

TPF: 0:03:14 105,073 PPD

GTX 660 Ti @ 1124 MHz Boost

TPF: 0:05:49 43,854 PPD

GTX 660 @ 1176 MHz Boost

TPF: 0:07:13 31,794 PPD

The GTX 780 running the 8900 WU was around 160k PPD.

Now I'm wondering if I should switch these cards back to Windows.


----------



## Widde

I smacked a 2nd R9 290 in my rig and holy ****, there seems to be quite a difference between Core 16 and Core 17 WUs: I have a Core 16 8583 that is reporting 0% GPU load and around 3-4k PPD, while the Core 17 8900 is at 95k PPD and pegged at 100% load.









Been watching AGDQ and folding almost nonstop.


----------



## Captain_cannonfodder

Nothing but 8018's


----------



## aas88keyz

I don't know what the difference is, but I haven't received anything but Core 17s on all three of my GPUs since Core 17 came out. Is it a Linux thing? Maybe a GPU generation or driver thing? Whatever it is, I hope I don't make a change to my system that breaks my pattern.


----------



## dman811

Do you fold under Linux? If so, that is the reason: on Linux, only Core 17 units come through.


----------



## aas88keyz

Quote:


> Originally Posted by *dman811*
> 
> Do you fold under Linux? If so, that is the reason why, Linux only allows Core 17 units through.


Ahh... thank you Linux. Thanks for the info. +Rep to you.


----------



## dman811

No problem and thank you.


----------



## shlunky

Quote:


> Originally Posted by *Captain_cannonfodder*
> 
> Nothing but 8018's


All I have been able to get for the last few days as well...


----------



## dman811

Beta tag or Advanced tag?


----------



## shlunky

I am using the advanced tag. Still 8018's all day.


----------



## dman811

Swap to the Beta tag, you will more than likely get an 8900 right away.


----------



## amang

Just tried the 'beta' tag today on my GTX cards, and I've got two 8900s, one on each of my Nvidia cards.

What's the difference between 8018 and 8900?


----------



## dman811

P8900 vs. P8018: beyond Stanford's explanation, P8900 is a Core 17 unit, which means it uses QRB (Quick Return Bonus): the faster the unit completes, the more points you get. P8018 is a Core 15 unit, which is older and doesn't have QRB. From a competition standpoint, a Core 17 is much better than a Core 15 (unless you have a card that is more capable on Core 15s). Also, Core 15 is NVIDIA-only, Core 16 is AMD-only, and Core 17 runs on both AMD and NVIDIA.


----------



## shlunky

Quote:


> Originally Posted by *dman811*
> 
> Swap to the Beta tag, you will more than likely get an 8900 right away.


Swapped it back. Had it there for a while, but read somewhere that we should be using advanced instead, which will give the same WUs but is more "PC", if you will.
I had been on advanced for over 2 weeks, so I didn't think this should change anything, but I hope it does, lol.

Thanks!


----------



## dman811

I had been on advanced for most of the Foldathon last month and swapped to beta near the end and got a P8900 instantly.


----------



## hazara

I still seem to be pulling them; in fact, I think the Haswells I tested last fortnight were pulling them with no flags.
Quote:


> *********************** Log Started 2014-01-20T23:54:39Z ***********************
> 23:54:44:FS00:Set client configured
> 23:54:44:WU00:FS00:Connecting to assign-GPU.stanford.edu:80
> 23:54:45:WU00:FS00:Connecting to assign-GPU.stanford.edu:80
> 23:54:46:WU00:FS00:News: Welcome to Folding@home
> 23:54:46:WU00:FS00:Assigned to work server 171.64.65.69
> 23:54:46:WU00:FS00:Requesting new work unit for slot 00: READY gpu:0:R575A [AMD Radeon HD7700 Series] from 171.64.65.69
> 23:54:46:WU00:FS00:Connecting to 171.64.65.69:8080
> 23:54:47:WU00:FS00:Downloading 4.17MiB
> 23:54:53:WU00:FS00:Download 47.92%
> 23:54:58:WU00:FS00:Download complete
> 23:54:58:WU00:FS00:Received Unit: id:00 state:DOWNLOAD error:NO_ERROR project:8900 run:127 clone:7 gen:30 core:0x17 unit:0x00000032028c126651a642d9fc4e6a29
> 23:54:58:WU00:FS00:Starting
> 23:54:58:WU00:FS00:Running FahCore: "C:\Program Files (x86)\FAHClient/FAHCoreWrapper.exe" C:/ProgramData/FAHClient/cores/www.stanford.edu/~pande/Win32/AMD64/ATI/R600/beta/Core_17.fah/FahCore_17.exe -dir 00 -suffix 01 -version 703 -lifeline 4508 -checkpoint 15 -gpu 0 -gpu-vendor ati
> 23:54:58:WU00:FS00:Started FahCore on PID 4920
> 23:54:58:WU00:FS00:Core PID:4852
> 23:54:58:WU00:FS00:FahCore 0x17 started
> 23:54:58:WU00:FS00:0x17:*********************** Log Started 2014-01-20T23:54:58Z ***********************
> 23:54:58:WU00:FS00:0x17:Project: 8900 (Run 127, Clone 7, Gen 30)
> 23:54:58:WU00:FS00:0x17:Unit: 0x00000032028c126651a642d9fc4e6a29
> 23:54:58:WU00:FS00:0x17:CPU: 0x00000000000000000000000000000000
> 23:54:58:WU00:FS00:0x17:Machine: 0
> 23:54:58:WU00:FS00:0x17:Reading tar file state.xml
> 23:54:58:WU00:FS00:0x17:Reading tar file system.xml
> 23:54:59:WU00:FS00:0x17:Reading tar file integrator.xml
> 23:54:59:WU00:FS00:0x17:Reading tar file core.xml
> 23:54:59:WU00:FS00:0x17:Digital signatures verified
> 23:57:49:WU00:FS00:0x17:Completed 0 out of 2500000 steps (0%)
> 00:08:25:WU00:FS00:0x17:Completed 25000 out of 2500000 steps (1%)
> 00:18:39:WU00:FS00:0x17:Completed 50000 out of 2500000 steps (2%)


----------



## dman811

Ya, flags aren't completely needed; my GTS 450 and a few of the GT 430s I run at school can pull them without flags, but it isn't a guarantee. Anyway, they do better with 76xx units.


----------



## derickwm

Any way to get better GPU usage? Just got my 780 Lightnings up and running and the GPU usage % is bouncing between 90-100 consistently. I have folding power set to "full".


----------



## Wheezo

Drop a CPU core or use a program like Process Tamer to set priority of the services.
Process Tamer: http://www.donationcoder.com/Software/Mouser/proctamer/


----------



## derickwm

Not folding on the CPU.


----------



## arvidab

Get faster CPU's. I'm getting 96-99% usage with my 780Ti on a 3770K, and got something like 86-92% on my old Athlon II rig. Your X5680 should be fast enough, though. You won't get a constant 100% usage; that's just how it works.

Disable SLI? I've heard people claiming SLI can be enabled now, but I don't know; worth a shot.


----------



## derickwm

Oh lul this is just on my temp 3930k/RIVE setup.










It's at stock clocks, maybe I'll bump that up. /lazy

I'll disable SLI and see if anything improves.

Thanks.


----------



## dman811

Are you folding with the beta or advanced tags, derick? Advanced has been giving people a lot of Core 15 units lately, so if that's the case, use the beta tag instead. If you aren't using either, open "Configure", go to the "Slots" tab, select "gpu", click "Add", type "client-type" in the "Name" field and "beta" in the "Value" field, then click "OK", "OK" again, and "Save". If you aren't seeing as many points as you would like (and you are in fact getting Core 17 units), then you either need to set up a passkey, or, if you have set one up, you haven't folded 10 WUs with it yet.
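Those GUI steps just write an extra option into the client's config.xml. For anyone who prefers editing the file directly, the slot ends up looking something like this (a sketch of the v7 config format; the slot id and the rest of your config will differ):

```xml
<config>
  <!-- GPU folding slot; id 0 is just an example -->
  <slot id="0" type="GPU">
    <!-- same Name/Value pair as entered in the GUI -->
    <client-type v="beta"/>
  </slot>
</config>
```

As with any hand edit, stop FAHClient first so the running client doesn't save its old configuration over yours.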


----------



## derickwm

Nah, I have beta on and my passkey is more than ready. Once this unit finishes up I'll see what the next one does. Thanks guys, it's been a long time since I've done much folding, especially GPU folding.


----------

