# 11th BOINC Pentathlon | May 5th-19th 2020



## tictoc

*Pentathlon Homepage* || *Pentathlon Stats* || *Pentathlon Rules* || *Download BOINC Client*
*What is this event all about?*

The BOINC Pentathlon is a two-week-long BOINC team competition, with five different projects crunched over the two weeks.

The BOINC Pentathlon consists of 5 disciplines:


Marathon (14 days)
Sprint (3 days)
City Run (5 days)
Cross Country (5 days)
Javelin (5x1 day, only each team's third best daily score counts)

The fun and challenging aspect of the Pentathlon is resource management. The 5 "disciplines" are run over a 14-day period, so each discipline overlaps another. Figuring out what to run, and when to run it, is what makes the Pentathlon a unique and challenging BOINC points race.


Run-times and projects for each discipline are announced 5 days (Marathon and one other project) or 3 days (Sprint and one other project) before their respective starts. Each day of the Javelin Throw will be announced three days in advance. Announcements can be found on the main page, via the Blog, or the Feed.
This allows some time to stock up on completed WUs, but it can also be dangerous if you forget to turn in the completed tasks before the deadline. It also adds another twist to resource management if you are trying to stockpile WUs while simultaneously running the currently active disciplines.
 

*New to BOINC?*

Check out the BOINC Essentials Thread for information about BOINC, how to install and use the BOINC client, and what projects are available on the BOINC platform.

*Promote the BOINC Pentathlon in your forum signature!*
:boxing3: *11th BOINC Pentathlon - May 5th-19th, 2020* :boxing3:

BBCode:

[CENTER]:boxing3:[URL=https://www.overclock.net/forum/18057-boinc-staff/1746216-11th-boinc-pentathlon.html#post28418584][B]11th BOINC Pentathlon - May 5th-19th, 2020[/B][/URL]:boxing3:[/CENTER]

 

*Disciplines to crunch:*

*OCN Final Overall Ranking: TBD*

*Marathon: Rosetta@home*
*OCN Final Ranking - TBD*
Starts: *5/5*
Ends: *5/19*
*Project Support Thread*

*Sprint: Ibercivis*
*OCN Final Ranking - TBD*
Starts: *5/16*
Ends: *5/19*
*Project Support Thread*

*City Run: Universe@Home*
*OCN Final Ranking - 10*
Starts: *5/7*
Ends: *5/12*
*Project Support Thread*

*Cross Country: Amicable Numbers*
*OCN Final Ranking - TBD*
Starts: *5/12*
Ends: *5/17*
*Project Support Thread*

*Javelin Throw: NumberFields@home*
*OCN Final Ranking - TBD*
Starts: *Various Start Times*
Ends: *Various End Times*
*Project Support Thread*

*Project Choosing Rules*

*The biggest change to the Pentathlon this year is how the projects will be chosen.*

The projects for this year's Pentathlon will be chosen by the event organizers
Projects from last year's Pentathlon are eligible for this year's Pentathlon
There may be more than one GPU project

*To be eligible for the prize drawing and to have your individual stats tracked, sign up and fill out the form at the following link:*
https://docs.google.com/forms/d/e/1...xHbFX9Aj59Dt9B8AQO0GFnJw/viewform?usp=sf_link

*Prizes Being Donated for the Pentathlon:*
*1x $75 Paypal - Donated by Overclock.net*
*3x $50 Paypal - Donated by Overclock.net*
*6x $25 Paypal - Donated by Overclock.net*


----------



## tictoc

*Link to individual OCN team member stats:*

https://docs.google.com/spreadsheet...3-HMM6obnLDurI3x4PvjQSU0I/edit#gid=1645089971


*Edit* Changed link to new sheet


----------



## tictoc

Less than two weeks until the start of the 2020 Pentathlon. :wheee:


----------



## franz

Signed up :wheee:


----------



## McPaste

I'm in again this year.


----------



## Deedaz

Yay pentathlon time! I've been running rosetta for the covid projects. Hopefully there's a big push for rosetta.


----------



## Jpmboy

I'll try to help where I can... I never did get that bunkering thing tho.


----------



## SuperSluether

Wow, I haven't been here in so long. I've been on-and-off crunching, but I think I'm back to full-time status for a while. I just got a gaming laptop from a friend, and it almost out-specs my desktop!

If it's not too off-topic, how's everyone doing? Seems like it's been really quiet, with a lot of inactive/lurking users.


----------



## DarthBaggins

I'm in - let's put my 6900k, 3770, 4770, 5930k's to work


----------



## fragamemnon

Sub!


----------



## tictoc

Jpmboy said:


> I'll try to help where I can... I never did get that bunkering thing tho.


Not a big deal, any work done goes to the total. :thumb:



Deedaz said:


> Yay pentathlon time! I've been running rosetta for the covid projects. Hopefully there's a big push for rosetta.


With the projects being picked by the organizers, it will be interesting to see what we are running.



SuperSluether said:


> Wow, I haven't been here in so long. I've been on-and-off crunching, but I think I'm back to full-time status for a while. I just got a gaming laptop from a friend, and it almost out-specs my desktop!
> 
> If it's not too off-topic, how's everyone doing? Seems like it's been really quiet, with a lot of inactive/lurking users.


Traffic on OCN has gone way down in the last two years. Glad to see you back for the Pent. 



DarthBaggins said:


> I'm in - lets put my 6900k, 3770, 4770, 5930k's to work


 :cheers:


----------



## bfromcolo

woo hoo!


----------



## k4m1k4z3

Sure! I'll throw a couple CPUs and GPUs at this


----------



## DarthBaggins

Now I need to find or just make mounting hardware to slap a 120mm AiO on the 3770 since the little cooler on it will not hold up (even for a locked cpu, that sucker gets warm). Also I guess I might try a delid on it and the 4770 - might as well practice on those at least.


----------



## Diablosbud

So the marathon project is Rosetta. Be careful to monitor your RAM use everyone, some Rosetta tasks can take up to 2 GB lately. It seems to be the COVID tasks in particular.

With many cores it easily adds up to a system's worth of RAM and can cause your tasks to error from "out of memory". But this has only happened to me running many COVID tasks at once.

An easy way to deal with this is to limit RAM use in BOINC, I'm limiting it to 80% on my gaming computer and 90% on my dedicated BOINC computer.
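If you'd rather set that limit outside the Manager, the same percentages can go in a `global_prefs_override.xml` in the BOINC data directory; a minimal sketch mirroring the 80%/90% figures above (the values are just examples, tune them to your own RAM):

```xml
<!-- global_prefs_override.xml in the BOINC data directory (sketch).
     Caps how much system RAM BOINC will use; the client picks it up
     via Options -> Read local prefs file, or on restart. -->
<global_preferences>
    <ram_max_used_busy_pct>80</ram_max_used_busy_pct>
    <ram_max_used_idle_pct>90</ram_max_used_idle_pct>
</global_preferences>
```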


----------



## franz

Diablosbud said:


> So the marathon project is Rosetta. Be careful to monitor your RAM use everyone, some Rosetta tasks can take up to 2 GB lately. It seems to be the COVID tasks in particular.
> 
> With many cores it easily adds up to a system's worth of RAM and can cause your tasks to error from "out of memory". But this has only happened to me running many COVID tasks at once.
> 
> An easy way to deal with this is to limit RAM use in BOINC, I'm limiting it to 80% on my gaming computer and 90% on my dedicated BOINC computer.


I had this issue a while back running 6 cores on my 4790K with 4GB; came home one day and my HDD was running endlessly. Looked into the process manager and it was writing to the page file like crazy. So I upgraded the RAM! Using 8GB and haven't had an issue yet.


----------



## Diablosbud

franz said:


> I had this issue a while back running 6 cores on my 4790K with 4GB; came home one day and my HDD was running endlessly. Looked into the process manager and it was writing to the page file like crazy. So I upgraded the RAM! Using 8GB and haven't had an issue yet.


I should definitely upgrade the RAM on my BOINC computer. It's an R7 1700 with 8 GB of RAM. I built it that way because outside of the Pentathlon I only run WCG Mapping Cancer Markers on it, which takes very little RAM.

As for my gaming PC, it's an R5 3600 with 16 GB of RAM. That's pretty well rounded, but occasionally it was getting too many COVID units to handle. It seems okay at the moment for using all 12 threads, though.


----------



## tictoc

Making the Marathon thread now, but a heads up to everyone that if you start piling up tasks now, they won't count for the Pent. 

*Current deadline for all of the tasks that I already had and the ones I just grabbed is 5/2.*


----------



## franz

Diablosbud said:


> I should definitely upgrade the RAM on my BOINC computer. It's an R7 1700 with 8 GB of RAM. I built it that way because outside of the Pentathlon I only run WCG Mapping Cancer Markers on it, which takes very little RAM.
> 
> As for my gaming PC, it's an R5 3600 with 16 GB of RAM. That's pretty well rounded, but occasionally it was getting too many COVID units to handle. It seems okay at the moment for using all 12 threads, though.


Thanks for the heads up. I just added an R7 1700 rig to my arsenal. I will have to keep an eye on that after it's all up and running.


----------



## Jpmboy

@tictoc

Dude - I'm gonna need a tutorial on how to "productively" contribute to this (again).


----------



## tictoc

Jpmboy said:


> @*tictoc*
> 
> Dude - I'm gonna need a tutorial on how to "productively" contribute to this (again).



I'll put something together later today, with the different ways to stockpile tasks after the project is announced. There are a few ways to do it, ranging from dead simple to a bit more complicated.


----------



## tictoc

First post in the daily summary is up: https://www.seti-germany.de/forum/content/1211-BOINC-Pentathlon-2020-Crunch-time-again!


----------



## mmonnin

I wonder how Rosetta is going to handle the Pent. The project output is already several times higher in April than in February with the COVID work. Adding in a competition is going to make those servers scream for mercy.


----------



## tictoc

I had been running it for a few weeks, and didn't notice any issues. 

They do have at least one decent file server:


Rack mounted 4U SuperMicro server
Specs: Dual Intel Xeon E5-2640 v4 @ 2.40GHz, 256 GB RAM, X10DRD-IT, 2 x 10 GbE
Storage: 72 x 1TB SSD via LSI SAS 9207-8i
File system: ZFS on Linux v0.7 (raidz2, 9 vdevs with 8 disks) served via NFSv4
OS: Ubuntu Server 16.04
I couldn't find any other info about additional back-end hardware, but hopefully ProtonMail, Rackspace, and other large new donors are helping on the back-end in addition to just running BOINC in their datacenters.


----------



## mmonnin

Nice. A lot better than some projects that seem to run from someone's closet. I wish their applications got that kind of attention. It's been mentioned before that the main app just keeps getting built upon and added to over and over again. I've always had more luck with the mini app vs the main Rosetta app. I have it set for 1 hour, but quite often tasks run for 5-6 hours and then fail. When I do run the project, I end up aborting all Rosetta tasks and running Rosetta Mini, since it can't be selected in the user preferences.


----------



## tictoc

The first leg of the Javelin Throw will start at the same time as the Marathon, and the project is NumberFields@home: https://numberfields.asu.edu/NumberFields/


----------



## mmonnin

I guess NF is technically a CPU/GPU project. When they released their GPU app, the credit earned was a function of run time, so running 2x tasks earned twice the credit and so forth. Then, instead of a fixed credit or some calculation of flops performed, the admin went to CreditNew and the project deflated.


----------



## tictoc

Credits look pretty low for the GPU. Not sure which is better to run, GPU or CPU. Is this project like Asteroids, where a good CPU can outproduce a good GPU?


*Edit* Just posted the thread for the Javelin Throw: https://www.overclock.net/forum/365...numberfields-home-project-support-thread.html


----------



## tictoc

Maybe I can answer my own questions. 




> "I would say you're not at a disadvantage if you run CPU only. My 32 core Threadripper is wracking up more credit than my 3 GPUs combined, but of course it all depends on your particular cpu/gpu combination."


https://numberfields.asu.edu/NumberFields/forum_thread.php?id=416&postid=2651#2651


----------



## bfromcolo

Are you not planning to run NF on your GPUs? It's not a great GPU project for points, but looking at stats on the web site, a GPU outproduces an individual CPU thread.


----------



## mmonnin

It's CreditScrew, so credit is low and isn't exactly determined by run time. Per watt, a CPU is probably better, but if this is the only GPU project running, it might as well be run.


----------



## tictoc

I agree with @mmonnin; my CPU can outproduce my GPUs, but the GPUs will just add to the total.


----------



## franz

Can someone post a guide or a link on how you all are stockpiling projects and such? In the past I had just one rig, so I didn't really look into it that much.


----------



## mmonnin

Set up multiple clients on one PC. Fill the queue. When one queue is about to complete, fire up another client. Repeat.
https://www.overclock.net/forum/180...uide-setting-up-multiple-boinc-instances.html
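On Linux, firing up an extra instance looks roughly like this; the data directory and RPC port below are just examples, so pick your own (each instance needs its own directory and a unique port):

```shell
#!/bin/sh
# Sketch: run a second BOINC client instance alongside the default one.

BASE="$HOME/boinc2"   # separate data directory for the second instance (example path)
PORT=31418            # default GUI RPC port is 31416; each instance needs a unique one

mkdir -p "$BASE"

if command -v boinc >/dev/null 2>&1; then
    # --allow_multiple_clients lets more than one client run on the same host;
    # --dir keeps this instance's state out of the default data directory.
    boinc --dir "$BASE" --allow_multiple_clients --gui_rpc_port "$PORT" &
else
    echo "boinc binary not found; install the BOINC client first"
fi
```

Attach projects to the second instance with boinccmd or the Manager pointed at the alternate port, and it fills its own queue independently.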


----------



## tictoc

^^ That is the most fool-proof way.


----------



## spdaimon

I'm in.


----------



## tictoc

Universe@Home is the City Run project. It runs May 7th-12th.


----------



## Diffident

I run Universe on all rpi's. I wonder how much bunkering I can do on them.


----------



## tictoc

Diffident said:


> I run Universe on all rpi's. I wonder how much bunkering I can do on them.



Universe WUs are small, so you shouldn't run into any storage issues. Here's a quick example from one of my machines. This is x86_64 but I imagine if anything the ARM tasks would be smaller.



450 Completed Tasks
390 Queued tasks
Total space used (including Universe binaries) = 665 MB


----------



## bfromcolo

Now that I have 3 projects running, if the powers that be decide to release another project (like a GPU one), I would need to block uploads or add additional clients. Does anyone have the URLs to add to the hosts file to block uploads, just in case?


----------



## tictoc

bfromcolo said:


> Now that I have 3 projects running, if the powers that be decided to release another project (like a GPU one) I would need to block uploads or add additional clients. Does anyone have the URLs to add to the host file to block uploads just in case?



Check the OPs of the project support threads. :thumb:
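For the curious, the blocking itself is just a hosts-file entry that points the project's upload server at an unroutable address, so finished tasks sit in the queue until you remove the line. The hostname below is a placeholder, not a real project server; grab the actual hostnames/IPs from the project support thread OPs:

```
# /etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows.
# Placeholder hostname -- substitute the real upload server(s) from the OPs.
0.0.0.0    upload.example-project.org
```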


----------



## Diffident

I'm bunkering on all 8 rpi's. They all have 32GB SD cards, so they all have plenty of storage. 

Considering it's only a Pi, it's not a big bunker. My 8 rpi's are good for a total of around 110k ppd.


----------



## franz

I ended up nuking my Ubuntu 20.04 install (18.04 FTW) on my new crunching rig, so I didn't have a chance to try any bunkering on any of my rigs. I will be up and running 100% by the 5th, I hope. I will have the 1700X, 4790K, and 2500K committed to Rosetta and crunch NumberFields on the GPUs. I may split the 1700X between the 2 projects if I run into RAM issues.


----------



## valvehead

I'm in, though I may not be able to participate during the entire event.


----------



## tictoc

valvehead said:


> I'm in, though I may not be able to participate during the entire event.


:cheers:
It all goes to the total, so thanks for any crunching you can do.


----------



## bfromcolo

From what I am reading, it should be good to unleash those Rosetta and NumberFields bunkers.


----------



## tictoc

I am always nervous about letting my tasks go, even after we get the all clear from SETI.Germany.


----------



## tictoc

Prizes added to the OP. :applaud:



Per-user stats will be posted sometime after the next hourly update. My stats might be a little out of sync with the official stats. If the official stats server is running on the same clock as the countdown on the Pentathlon homepage, then it is running about 3-4 seconds slow. I adjusted my time for pulling the baseline, so we'll see if I guessed right in about 10 minutes.


----------



## Jpmboy

signed up.


----------



## Jpmboy

hey guys, would the app_config file to limit rosetta to 18 threads be:

```
<app_config>
    <project_max_concurrent>18</project_max_concurrent>
</app_config>
```


----------



## tictoc

Jpmboy said:


> hey guys, would the app config file to limit rosetta to 18 threads be:
> 
> <app_config>
> <project_max_concurrent>18</project_max_concurrent>
> </app_config>


That's it. :thumb:


----------



## Jpmboy

tictoc said:


> That's it. :thumb:


okay. What I'm trying with this mod client is to have MW run on the 3 TVs and Rosetta on the CPU. The Rosetta CPU tasks do not seem to interfere with the MW download stream (as MW CPU does), so I'm seeing if I can run both projects at the same time. Left to its own, MW can only run 2 tasks with Rosetta running at the same time... let's see if this will work (and if the PSU can keep up!)


----------



## Genesis1984

Signed up last night. Been a while since I've heard much boinc group activity on OCN. It's good to be here.

Just in time, too. Swapped my 8320e for a 3900x and my 7950 for a 5700 this last weekend.


----------



## tictoc

Genesis1984 said:


> Signed up last night. Been a while since I've heard much boinc group activity on OCN. It's good to be here.
> 
> Just in time, too. Swapped my 8320e for a 3900x and my 7950 for a 5700 this last weekend.



Welcome back stranger. :cheers:
The team is a bit smaller now, but we are still here crunching away.


----------



## tictoc

I hacked together some individual user stats. There is just a link to the Google sheet, since we can't embed GDocs anymore.
https://docs.google.com/spreadsheet...j4forv1JX_4QJFQyYHZe2BR4i0/edit#gid=845616243


**Edit** I'll have to tweak this a little, because Google has once again broken things that used to work. The sheet doesn't want to update to the new .csv data that gets updated every hour, unless I manually edit and then fix the link to the .csv file.


----------



## Jpmboy

Hey guys - a little AMD help here: this 2700X defaults to 8 threads for Rosetta (and NF for that matter), which results in 50% avg CPU usage. This is pretty hot on the CPU (like 68C Tctl). Should I increase the thread count for Rosetta via the app_config file, or is this an AMD limit thing?
The rig has 32GB RAM, so plenty available for Rosetta.


----------



## tictoc

You should be able to run on all threads. Check Options->Computing Preferences->Computing and make sure that it's not set to 50%. I'm currently running 12 tasks on my 1700 with SMT on, and 23 tasks on my 2700WX with SMT off. I use an app_config to adjust the number of tasks to run at one time, so this should work on your machine, unless there are other things changed behind the scenes on that modified client.


----------



## bfromcolo

Demonstrate my ignorance I guess. I thought Tdie was the actual temperature and Tctl had an additional value (18) added to it? So a Tctl of 68C is really 50C Tdie and your temps are fine. It was not until some recent kernel that Tctl started showing up in psensor; I have been managing my processors using Tdie.
Or maybe I am just cooking my processor...
Why does the 2700X default to 8 threads instead of 16? I have 2 1700Xs and they default to 16. Given what tictoc was saying about PPD being similar with SMT disabled, I'm not sure I would expect a big difference in temps, but I have not done any testing.


----------



## tictoc

bfromcolo said:


> Demonstrate my ignorance I guess. I thought Tdie was the actual temperature and Tctl had an additional value (18) added to it? So a Tctl of 68C is really 50C Tdie and your temps are fine. It was not until some recent kernel that Tctl started showing up in psensor, I have been managing my processors using Tdie.
> Or maybe I am just cooking my processor...
> Why does the 2700x default to 8 threads instead of 16? I have 2 1700x and they default to 16. If what TicToc was saying about PPD being similar with smt disabled, not sure I would expect a big difference in temps, but I have not done any testing.


Tdie is the actual temp on Ryzen CPUs. The offset with Zen and Zen+ CPUs varies per CPU model, so that consistent fan profiles can be set. I think Tctl is +10C for the 2700x, and I know it is +27C for my 2700WX.


----------



## Jpmboy

tictoc said:


> You should be able to run on all threads. Check Options->Computing Preferences->Computing and make sure that it's not set to 50%. I'm currently running 12 tasks on my 1700 with SMT on, and 23 tasks on my 2700WX with SMT off. I use an app_config to adjust the number of tasks to run at one time, so this should work on your machine, unless there are other things changed behind the scenes on that modified client.





bfromcolo said:


> Demonstrate my ignorance I guess. I thought Tdie was the actual temperature and Tctl had an additional value (18) added to it? So a Tctl of 68C is really 50C Tdie and your temps are fine. It was not until some recent kernel that Tctl started showing up in psensor, I have been managing my processors using Tdie.
> Or maybe I am just cooking my processor...
> Why does the 2700x default to 8 threads instead of 16? I have 2 1700x and they default to 16. If what TicToc was saying about PPD being similar with smt disabled, not sure I would expect a big difference in temps, but I have not done any testing.





tictoc said:


> Tdie is the actual temp on Ryzen CPUs. The offset with Zen and Zen+ CPUs varies per CPU model, so that consistent fan profiles can be set. I think Tctl is +10C for the 2700x, and I know it is +27C for my 2700WX.


Thanks, guys. I have not used an app_config on that X470 rig for Rosetta yet, but I will now. Rosetta config on the web is set to 90% of CPUs - and that is not the mod client on that machine, so I have no idea why it chose 8 threads. I'll bump it to 12 and see what happens to the temps. I've moved off NF and am still dialing in the Rosetta 7980XE threads on the MW machine (without cutting into MW hopefully, which uses 11 threads for 21 MW tasks) - want to run MW and Rosetta simultaneously. The 10980XE is running Rosetta on 32 threads - gotta watch the VRM temps on that as it's pulling over 300W just on the CPU 12V lines. :worriedsm


----------



## tictoc

Jpmboy said:


> Thanks Guys. I have not used an app_config on that x470 rig for rosetta yet, but I will now. Rosetta config on the web is set to 90% of CPUs - and that is not the mod client on that machine, so I have no idea why it chose 8 threads. I'll bump in to 12 and see what happens to the temps. I've moved off NF and am still dialing in the rosetta 7980XE threads on the MW machine (without cutting into MW hopefully, which uses 11 threads for 21 MW tasks) - wantr to run MW and Rosetta simultaneously. 10890XE is running Rosetta on 32 threads - gotta watch the VRM temps on that as it's pulling over 300W just on the CPU 12V lines. :worriedsm



You might also need to double check the amount of memory allowed. I think I have all mine set to 100%, just to make sure that the client behaves.


----------



## mmonnin

The NF GPU tasks by default are set to require 0.957 CPU threads. When running more than one, those fractions add up past a full core and reduce the number of CPU tasks allowed to run. By default, for every N NF tasks there are N-1 CPU tasks allowed to run.
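One possible workaround, sketched with assumed names (check client_state.xml on your host for the real app_name and plan_class), is an app_config.xml in the NumberFields project directory that overrides avg_ncpus for the GPU app, so each GPU task reserves only a sliver of a CPU instead of 0.957 of one:

```xml
<!-- app_config.xml in projects/numberfields.asu.edu_NumberFields/ (sketch;
     folder name, app_name, and plan_class below are assumptions --
     verify them against client_state.xml before using). -->
<app_config>
    <app_version>
        <app_name>GetDecics</app_name>
        <plan_class>openclGpu</plan_class>
        <avg_ncpus>0.05</avg_ncpus>
        <ngpus>1</ngpus>
    </app_version>
</app_config>
```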


----------



## Jpmboy

okay - I think I got this rolling along without any "events" thus far. Finally got the 2700X to run more than 8 threads... tho it chose to finish up the NF tasks at this point. I better not let my wife see this work table...


----------



## Jpmboy

hey guys - is it better to run rosetta with HT or SMT off?


----------



## tictoc

I haven't run Rosetta enough to know which is better for max production. HT/SMT off will free up some memory, but I don't know for sure if it performs better. I think when I looked at this a few years ago, there were only a few projects that benefited from HT, and I don't remember Rosetta being one of them. 

Although, that would have been on a different application version, so really, I guess I don't have any good info to give.


----------



## bfromcolo

So I can unload all these Universe now? And no new project yet?


Or Number Fields is once again the Javelin project...


----------



## tictoc

Yes on the Universe tasks, and the next day of Javelin (NumberFields) starts in three days.


----------



## Diffident

I'm not liking the Javelin. I still have unfinished tasks of Numberfields left that I was just going to bunker for the second day, but they all expire the day before.


----------



## tictoc

Diffident said:


> I'm not liking the Javelin. I still have unfinished tasks of Numberfields left that I was just going to bunker for the second day, but they all expire the day before.



I ended up having to abort about half of the tasks I had because of the deadline. Building back up the queue now.


----------



## Jpmboy

tictoc said:


> *Link to individual OCN team member stats:*
> https://docs.google.com/spreadsheet...j4forv1JX_4QJFQyYHZe2BR4i0/edit#gid=845616243


Hey tictoc, what data is the link in the post above pointing to? I'm burning down the house and not getting anywhere. I think I signed up correctly :worriedsm


----------



## franz

Jpmboy said:


> okay - I think I got this rolling along without any "events" thus far. Finally got the 2700X to run more than 8 threads... tho it chose to finish up the NF tasks at this point. I better not let my wife see this work table...


What did you do to run more than 8? Running Universe@Home on my 1700X and she won't go over 12 threads no matter what CPU% I change it to. My other rigs I see the extra cores being used, but the 1700X ain't happy for some reason. No other projects being run on that machine at the moment.


----------



## Jpmboy

franz said:


> What did you do to run more than 8? Running Universe@Home on my 1700X and she won't go over 12 threads no matter what CPU% I change it to. My other rigs I see the extra cores being used, but the 1700X ain't happy for some reason. No other projects being run on that machine at the moment.


I had to use an app_config.xml file in the Rosetta project folder; you'd put it in the similar Universe folder.

The app_config file is in *this post*; just change 18 to the number of threads you want Universe to use.
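For reference, a minimal sketch of that file for the Universe case (the project folder name and the task count of 16 are assumptions; match them to your own setup):

```xml
<!-- app_config.xml, dropped into the Universe@Home project folder, e.g.
     projects/universeathome.pl_universe/ under the BOINC data directory
     (folder name is an assumption -- use whatever your client created).
     Apply with Options -> Read config files, or restart the client. -->
<app_config>
    <project_max_concurrent>16</project_max_concurrent>
</app_config>
```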


----------



## mmonnin

franz said:


> What did you do to run more than 8? Running Universe@Home on my 1700X and she won't go over 12 threads no matter what CPU% I change it to. My other rigs I see the extra cores being used, but the 1700X ain't happy for some reason. No other projects being run on that machine at the moment.


What's the memory usage situation on that PC and how much memory is BOINC set to use. There is a % number when idle and another when in use.


----------



## franz

Jpmboy said:


> I had to use an app_config.xml file in the folder in the pic below for rosetta, you'd put it in the similar Universe folder.
> 
> The app_config file in *this post*, just change 18 to the number of threads you want Rosetta to use.


Okay I will try that. I thought that was more to set a limit but hopefully it does something.



mmonnin said:


> What's the memory usage situation on that PC and how much memory is BOINC set to use. There is a % number when idle and another when in use.


16GB, which is why I put it on the Universe project; Rosetta was kicking its ass. It's using around 7-8GB with 12 Universe tasks running. I already increased mem usage to 90% in use / 95% idle, but that didn't help.


----------



## tictoc

Jpmboy said:


> Hey tictoc, what data is the link in the post above pointing to? I'm burning down the house and not getting anywhere. I think I signed up correctly :worriedsm



You are there, and cranking out a bunch of Rosetta work. That GDoc (when it updates) is only tracking stats for Rosetta and NumberFields. Unfortunately Universe has the whole GDPR user consent thing, which makes it impossible for me to grab stats in any kind of an automated and sane way.  Google has also made things difficult. I have to manually edit the source sheet to get it to update, since it is doing some sort of caching behind the scenes, rather than updating the sheet with the new source data every hour.


Here's a link to the source file for those stats. https://drive.google.com/open?id=1MGXXscjuzo4tPXN5VIUMWLXy0hwmKt4s It is a csv file that updates automatically every hour.


----------



## Jpmboy

tictoc said:


> You are there, and cranking out a bunch of Rosetta work. That GDoc (when it updates) is only tracking stats for Rosetta and NumberFields. Unfortunately Universe has the whole GDPR user consent thing, which makes it impossible for me to grab stats in any kind of an automated and sane way.  Google has also made things difficult. I have to manually edit the source sheet to get it to update, since it is doing some sort of caching behind the scenes, rather than updating the sheet with the new source data every hour.
> 
> 
> Here's a link to the source file for those stats. https://drive.google.com/open?id=1MGXXscjuzo4tPXN5VIUMWLXy0hwmKt4s It is a csv file that updates automatically every hour.


Ah, okay. I'll click on over to Free-DC to check if this gear is actually doing anything beyond heating my office. Luckily, it's been unseasonably cool here in PA the past few days.


----------



## tictoc

Jpmboy said:


> Ah, okay. I'll click on over to Free-DC to check if this gear is actually doing anything beyond heating my office. Luckily, it's been unseasonably cool here in PA the past few days.



You are definitely doing more than just heating up the office. Your contribution on Rosetta is worth three places in the marathon standings. :cheers:


----------



## Jpmboy

So I'll just backstop Rosetta... easy.


----------



## bfromcolo

Amicable Numbers for Cross Country. GPU and CPU project, but it's not giving enough GPU work to build a decent bunker so far.


----------



## tictoc

bfromcolo said:


> Yes, a memory error sounds right; I have 2 VMs and some Universe also running. Been trying to adjust the kernel size, which is supposed to reduce UI latency and memory usage, but it isn't having much impact on either that I can tell.
> 
> 
> Do you have a URL to block for uploads, even if it turns out to only be 20 per GPU?



I just posted the Cross Country thread. IPs are in the OP. https://www.overclock.net/forum/365...country-amicable-numbers-project-support.html 

@bfromcolo @Diffident Any objections to me moving your Amicable posts to that thread? I have a feeling that there will be some teething issues getting this project going for everyone.


----------



## Diffident

move them


----------



## Starbomba

Well, well, well, at long last I'm here. Been crunching Rosetta on my X79 since the beginning, but now that work has let me go for a couple days, I'll set up my GPUs for Amicable. Let's see what we get.


----------



## Genesis1984

My points on the spreadsheet seem stuck at a value of 7 across all the updates, but I've been crunching away at rosetta (and universe) since the beginning of the pent. Did I do something wrong?


----------



## tictoc

Genesis1984 said:


> My points on the spreadsheet seem stuck at a value of 7 across all the updates, but I've been crunching away at rosetta (and universe) since the beginning of the pent. Did I do something wrong?


That 7 points is for NumberFields; there is another tab on the sheet that shows Rosetta. You are currently at 76,519 for Rosetta. 

I can't track the individual user stats for Universe, because of the GDPR rules that Universe has for stats.


----------



## Genesis1984

tictoc said:


> That 7 points is for NumberFields; there is another tab on the sheet that shows Rosetta. You are currently at 76,519 for Rosetta.
> 
> I can't track the individual user stats for Universe, because of the GDPR rules that Universe has for stats.


Doh! I completely missed there were 2 sheets.

Thank you!


----------



## tictoc

Genesis1984 said:


> Doh! I completely missed there were 2 sheets.
> 
> Thank you!



Soon to be three, since I can grab the Amicable stats.


Time to get those GPUs cooking. :devil:


----------



## franz

I splurged on more RAM for 2 of my rigs over the weekend and it should be here tomorrow. Should be able to run all cores with Rosetta and Amicable(gpu only) soon.

So much for my budget boinc builds....:wheee:


----------



## tictoc

franz said:


> I splurged on more RAM for 2 of my rigs over the weekend and it should be here tomorrow. Should be able to run all cores with Rosetta and Amicable(gpu only) soon.
> 
> So much for my budget boinc builds....:wheee:



:cheers:


----------



## Jpmboy

So I've been running Rosetta (and MW as usual on DP-capable GPUs)... is it helpful to keep on Rosetta, or better to move CPU cores to NF during the upcoming Javelin segments? How about Amicable? Anyone run Rosetta and Amicable on a rig at the same time?

Pushing this one moderately at about 290W on the CPU. To run Amicable on the 2080 Tis I think I'd have to cut back on the Rosetta threads (currently 32 of 36)?


----------



## tictoc

Jpmboy said:


> So I've been running Rosetta (and MW as usual on DP-capable GPUs)... is it helpful to keep on Rosetta, or better to move CPU cores to NF during the upcoming Javelin segments? How about Amicable? Anyone run Rosetta and Amicable on a rig at the same time?
> 
> Pushing this one moderately at about 290W on the CPU. To run Amicable on the 2080 Tis I think I'd have to cut back on the Rosetta threads (currently 32 of 36)?


I would say keep pushing on Rosetta if you're cool with that. I just moved a few things over to Rosetta and NumberFields. I am running Rosetta and 4 GPUs on the same machine, but it has 128GB of memory; currently using 68GB.
I am also probably going to dial that back from 40 Rosetta tasks, because even with 128GB of memory, if I pick up a majority of the bigger tasks I won't have enough free memory.


----------



## bfromcolo

Amicable Numbers is running now. Warning: it requires 8 GB of system memory per task, so I had to limit all but one of my systems to one task each. But with all the GPU power on OCN, I would think this would be one of our better events.
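For anyone else bumping into the memory limit: one way to cap concurrent tasks is an app_config.xml in the project's data directory. This is a sketch; the app name below is an assumption, so check client_state.xml (or the project's apps page) for the real one, and the project folder name will vary.

```xml
<!-- app_config.xml in the Amicable Numbers project directory
     (under projects/ in your BOINC data dir; exact folder name varies).
     Caps the client to one Amicable task at a time so the ~8 GB of
     system memory per task doesn't stack up.
     NOTE: the <name> value here is an assumption. -->
<app_config>
    <app>
        <name>amicable_10_21</name>
        <max_concurrent>1</max_concurrent>
    </app>
</app_config>
```

Read it in with Options->Read config files in the Manager, or restart the client.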


----------



## Jpmboy

Hey guys - anyone ever see work units like the 2 highlighted in yellow in the pic below? The 2 have been at 10 min left to complete for the past 24 hours. Any ideas? Should I abort these? Both are running on 64GB systems with 30GB or less of RAM in use, so it's not like there is a RAM issue. Both are the same WU name.
Vexing.


----------



## tictoc

Jpmboy said:


> Hey guys - anyone ever see work units like the 2 highlighted in yellow in the pic below? The 2 have been at 10 min left to complete for the past 24 hours. Any ideas? Should I abort these? Both are running on 64GB systems with 30GB or less of RAM in use, so it's not like there is a RAM issue. Both are the same WU name.
> Vexing.



There's a post on the Rosetta forum about the same type of task. No input from the devs. https://boinc.bakerlab.org/rosetta/forum_thread.php?id=13939


I haven't run into any of those. I did have a few really long runners that used 4 GB of memory, but they completed in about 1 day.


----------



## Jpmboy

Seems they simply timed out as past their due date. Hopefully I never see any of those WUs again. :aaskull:


----------



## Jpmboy

I squeezed 2 Amicable Numbers WUs onto this rig... runs about 60% package load on each 2080 Ti. Pulling some serious power.


----------



## mmonnin

That's Rosetta for you. I always used to have them time out after 6 hours (1-hour tasks). I had one this Pent that ran for over a day before I aborted it.


----------



## tictoc

The other stats sheet is now completely broken and won't pull down any data.
Here's a link to a new sheet that hopefully doesn't break.

OCN Individual User Stats - https://docs.google.com/spreadsheet...3-HMM6obnLDurI3x4PvjQSU0I/edit#gid=1645089971


----------



## Jpmboy

Ah sheet. When I added Amicable I forgot to set OCN as my team. 1.8M points lost yesterday. I'll check that I show up on the team later today.


----------



## tictoc

It's all good now. You'll be on the next update. :thumb:


----------



## tictoc

Sprint project is Ibercivis https://boinc.ibercivis.es/ibercivis/

It looks like I ran a few tasks back in 2013.


----------



## tictoc

Actually it looks like the old Ibercivis is gone. Everyone will have to join the team. https://boinc.ibercivis.es/ibercivis/team_display.php?teamid=27



Thanks @mmonnin for creating the team last month.


----------



## bfromcolo

No tasks available...


----------



## Jpmboy

Does this look like something I did wrong? I mean the GPUs have 12GB of VRAM each, but three Amicable WUs across 3 cards are using hardly any, while consuming 31GB of PC RAM?


----------



## Diffident

I've been able to add the project on 2 machines; trying to add it to my 2P crashes BOINC Manager for some reason.


----------



## tictoc

Jpmboy said:


> Does this look like something I did wrong? I mean the GPUs have 12GB of VRAM each, but three WUs across 3 cards are using hardly any, while consuming 31GB of PC RAM?



Nothing you did wrong. I'm guessing you have (4) 8GB sticks in that rig? The memory usage is system memory (8GB) rather than VRAM. It is a somewhat common practice in HPC, for tasks that need to do some calculations on the CPU, to load everything into system memory to feed the GPU as fast as possible.


I'm not sure if they have really optimized Amicable to that extent, but it seems possible. The required system memory usage makes it a bear to run on most "regular" multi-GPU systems.


----------



## Jpmboy

tictoc said:


> Nothing you did wrong. I'm guessing you have (4) 8GB sticks in that rig? The memory usage is system memory (8GB) rather than VRAM. It is a somewhat common practice in HPC, for tasks that need to do some calculations on the CPU, to load everything into system memory to feed the GPU as fast as possible.
> 
> 
> I'm not sure if they have really optimized Amicable to that extent, but it seems possible. The required system memory usage makes it a bear to run on most "regular" multi-GPU systems.


Yeah, "only" 32GB of RAM on that rig. The page file is getting hammered (Intel 900P). 
The other HCC rig has 64GB of RAM and 2 GPUs; it can keep 32 threads of Rosetta running + 2 Amicable. Oh well, Rosetta will take a hit for Amicable. Then the TVs go back to MW.


----------



## tictoc

At least you have a fast page file. lol


I actually have to make a big change on my machines, because right now I had to stop a few GPUs since I was maxing out my UPSes. Going to have to take my old 2970WX rig and just plug it into the non-battery-backed side until Cross Country is over. Wish I still had a few more NVIDIA GPUs to throw at Amicable, but since the PCIe slots are open, in go some AMD GPUs.


----------



## Jpmboy

tictoc said:


> At least you have a fast page file. lol
> 
> 
> I actually have to make a big change on my machines, because right now I had to stop a few GPUs since I was maxing out my UPSes. Going to have to take my old 2970WX rig and just plug it into the non-battery-backed side until Cross Country is over. Wish I still had a few more NVIDIA GPUs to throw at Amicable, but since the PCIe slots are open, in go some AMD GPUs.


let's keep an eye on Rosetta. If we start dropping places quick vs AN, we'll make the switch back.


----------



## tictoc

Sounds like a plan. I have some cores that I'm going to switch over to Rosetta once they are done with their current queue of NumberFields tasks, which should be in about 2 hours.


----------



## tictoc

Jpmboy said:


> Yeah, "only" 32GB of RAM on that rig. The page file is getting hammered (Intel 900P).
> The other HCC rig has 64GB of RAM and 2 GPUs; it can keep 32 threads of Rosetta running + 2 Amicable. Oh well, Rosetta will take a hit for Amicable. Then the TVs go back to MW.



You can actually display memory usage per task with BoincTasks: Extra->BoincTasks settings->Tasks tab, and there is a checkbox to display memory.


----------



## mmonnin

Ibercivis needs a minimum of BOINC client v7.9 to connect; 7.8 will not work.


----------



## Starbomba

Jpmboy said:


> Yeah, "only" 32GB of RAM on that rig. The page file is getting hammered (Intel 900P).


Heh, you should see my poor, poor SSD. It's an el cheapo, DRAM-less 120 GB POS I found for $15. It was brand new 3 weeks ago; now it has 1 TB of writes.

I knew I should've rebuilt my server before. Running only 8 GB of RAM is not optimal at all. Now I have two NVMe SSDs and 96 GB of RAM sitting in customs because the country is on lockdown until the 25th.


----------



## neyel8r

oops i'm super late to the party but maybe i can still contribute a bit toward the scores


----------



## spdaimon

FYI I currently have 30 cores working on Rosetta. Sorry that I've been a bit quiet.


----------



## Jpmboy

spdaimon said:


> FYI I currently have 30 cores working on Rosetta. Sorry that I've been a bit quiet.


Nice!


----------



## franz

Finally got my new RAM in; UPS decided to hold on to it for 2 extra days... and I fixed my 1700X being stuck on 12 cores. Apparently it was just ignoring whatever preferences I put in on the project websites, but when I went into BOINC Manager and set the preferences there, it decided to work. So the 1700X rig can now handle 14 cores of Rosetta and Amicable on its single GPU.


----------



## Diffident

I hope my desktop can make it till the end. I have a clog somewhere in my water loop; I barely have any flow. I put a couple drops of dye in my res and hardly any of it has made it through the loop.

My CPU is currently sitting at 

Tdie: +59.9°C 
Tctl: +79.9°C 

This morning when I woke up the Tdie was at 39°C like normal, but it's been climbing all day. It's good as long as it doesn't get any worse. I had this problem a couple months ago, the CPU block was totally clogged from using Pastel fluid in the past. I cleaned it as best as I could, but I couldn't get all the crud out of the micro channels....guess I'm going to have to break down and buy a new block.


My VII is in the same loop, surprisingly, the junction temp is only at +65.0°C while steady crunching Amicable.


----------



## Jpmboy

Diffident said:


> I hope my desktop can make it till the end. I have a clog somewhere in my water loop; I barely have any flow. I put a couple drops of dye in my res and hardly any of it has made it through the loop.
> 
> My CPU is currently sitting at
> 
> Tdie: +59.9°C
> Tctl: +79.9°C
> 
> This morning when I woke up the Tdie was at 39°C like normal, but it's been climbing all day. It's good as long as it doesn't get any worse. I had this problem a couple months ago, the CPU block was totally clogged from using Pastel fluid in the past. I cleaned it as best as I could, but I couldn't get all the crud out of the micro channels....guess I'm going to have to break down and buy a new block.
> 
> 
> My VII is in the same loop, surprisingly, the junction temp is only at +65.0°C while steady crunching Amicable.


Damn. I had an EK block do the same some time ago; I was able to clean the "jet" channels with a suede (brass) brush after soaking in white vinegar. Mine clogged with some kind of white muck; never figured out what that was.


----------



## Jpmboy

Just checked this 2080 Ti SLI rig... it's not getting any more Amicable WUs?


----------



## mmonnin

Pent chat mentioned being out of work. I am not getting any either.


----------



## Jpmboy

mmonnin said:


> Pent chat mentioned being out of work. I am not getting any either.


Yeah, I just checked the servers... "0 WUs ready to send"
When the Titan V rig runs out, it's back on MW. Unfortunately they ran out 23h before the CC ends. :blinksmil


----------



## Diffident

Jpmboy said:


> Damn. I had an EK block do the same some time ago; I was able to clean the "jet" channels with a suede (brass) brush after soaking in white vinegar. Mine clogged with some kind of white muck; never figured out what that was.



The white muck would have been plasticizer if you were using flexible tubing.


----------



## tictoc

Flaky internet today, but now it seems to be back up for good. Down to my last 31 Amicable tasks. Hopefully Francophone isn't sitting on a pile of tasks.


I had the white gunk in one of my rigs, and it came from a poor job of flushing the radiators. I always use norprene tubing and just distilled water with Mayhems Biocide and Inhibitor. If you can get the block clean I would definitely try and flush the rads with the Blitz kit, and shake the hell out of them to try and break loose any of the pastel that's hiding in the rads.


----------



## Jpmboy

Titans moved back to MW, and the available threads on that 7980XE rig (22) back on Rosetta for the duration. 80°F here today... things got pretty hot!


----------



## tictoc

Right on. Maybe we can run down the Brits in the Marathon now; they have just been keeping pace one spot ahead of us. Might have to move some stuff over for the Sprint if you are back hammering Rosetta. :thinking:


----------



## Jpmboy

I'm on Rosetta... I couldn't see how to add Ibercivis via BOINC Manager 7.14?


----------



## franz

Click Add Project; you will not see Ibercivis listed, so paste this into the project URL box and click Next: https://boinc.ibercivis.es/ibercivis/
After that it's just like setting up any other project. That being said, their network seems to be overloaded and I haven't been able to pull any work in the last couple of hours.


Since Amicable is down, I am going to try to bunker some Numberfields for the next run, wish me luck
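For headless Linux rigs, the same attach can be sketched with boinccmd; the email, password, and key below are placeholders, and this assumes you already registered on the Ibercivis site.

```shell
# Look up your account key for the project (replace the credentials),
# then attach the running client to it.
boinccmd --lookup_account https://boinc.ibercivis.es/ibercivis/ you@example.com yourpassword
boinccmd --project_attach https://boinc.ibercivis.es/ibercivis/ ACCOUNT_KEY_FROM_ABOVE
```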


----------



## tictoc

neyel8r said:


> oops i'm super late to the party but maybe i can still contribute a bit toward the scores





Jpmboy said:


> I'm on Rosetta... I couldn't see how to add Ibercivis via BOINC Manager 7.14?



I don't think it's in the list, so you just have to enter the master URL into the "Project URL" box.

Ibercivis master URL:


Code:


https://boinc.ibercivis.es/ibercivis/


*Edit* @franz beat me to it.


----------



## franz

tictoc said:


> I don't think it's in the list, so you just have to enter the master URL into the "Project URL" box.
> 
> Ibercivis master URL:
> 
> 
> Code:
> 
> 
> https://boinc.ibercivis.es/ibercivis/
> 
> 
> *Edit* @franz beat me to it.


Yep, and as soon as I said I hadn't pulled any work, my 1700X grabbed 7 tasks.


----------



## SuperSluether

Oof, Amicable ran one of my laptops out of memory, and it suffered the wrath of the OOM killer. Somehow it took the WiFi and GUI with it, and I didn't notice until it ran out of work and the fans spun down. Guess I need to cut back on Rosetta for that one. Amicable didn't show any tasks with errors, so I'm guessing BOINC was smart enough to restart them?
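One way to cut back on Rosetta without babysitting the Manager is an app_config.xml in the Rosetta project directory; project_max_concurrent caps how many tasks from that project run at once. The cap of 4 is just an example - size it to the machine's free RAM.

```xml
<!-- app_config.xml in the Rosetta project directory
     (under projects/boinc.bakerlab.org_rosetta/ in the BOINC data dir).
     Limits Rosetta to 4 running tasks so the box keeps some headroom. -->
<app_config>
    <project_max_concurrent>4</project_max_concurrent>
</app_config>
```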


----------



## franz

mmonnin said:


> Setup multiple clients on one PC. Fill the queue. When one queue is about to complete, fire up another client. Repeat.
> https://www.overclock.net/forum/180...uide-setting-up-multiple-boinc-instances.html


Thanks for this. I finally had the time to set this up. I have it working on my Win10 rig and I will try the Ubuntu rigs next. Bunkering around 600 tasks for NumberFields between the 3 rigs, mostly GPU, and I will fire up the next client when those downloaded tasks are complete. 

So if I understand this correctly, I block the IP on all my clients. When those tasks are complete, I close that instance of BOINC, start "BOINC2", allow the IP to get new tasks, and then block it again?


----------



## tictoc

You don't even have to block the IP. You can just set suspend network activity on the client once you have enough tasks. In the advanced view for BOINC Manager: Activity->Suspend network activity
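The same toggle is scriptable with boinccmd, which is handy on headless bunker rigs; this sketch assumes the default client on localhost.

```shell
# Stop all network transfers (same as Activity -> Suspend network activity
# in the Manager's advanced view).
boinccmd --set_network_mode never

# ...crunch the bunker, then let uploads and reports flow again:
boinccmd --set_network_mode auto
```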


----------



## bfromcolo

franz said:


> Thanks for this. I finally had the time to set this up. I have it working on my Win10 rig and I will try the Ubuntu rigs next. Bunkering around 600 tasks for NumberFields between the 3 rigs, mostly GPU, and I will fire up the next client when those downloaded tasks are complete.
> 
> So if I understand this correctly, I block the IP on all my clients. When those tasks are complete, I close that instance of BOINC, start "BOINC2", allow the IP to get new tasks, and then block it again?



That would work, but it's simpler to stop each client. In BOINC Manager, under Activity, select Suspend Network Activity after downloading your bunker.


----------



## franz

tictoc said:


> You don't even have to block the IP. You can just set suspend network activity on the client once you have enough tasks. In the advanced view for BOINC Manager: Activity->Suspend network activity





bfromcolo said:


> That would work, but it's simpler to stop each client. In BOINC Manager, under Activity, select Suspend Network Activity after downloading your bunker.


Thanks guys. I blocked the IP on those clients because they are still crunching CPU tasks, but I will use that in the future. It would be simpler to set up a client for each project and use the suspend network feature. I will definitely be better prepared for next year!


----------



## mmonnin

franz said:


> Thanks for this. I finally had the time to set this up. I have it working on my Win10 rig and I will try the Ubuntu rigs next. Bunkering around 600 tasks for NumberFields between the 3 rigs, mostly GPU, and I will fire up the next client when those downloaded tasks are complete.
> 
> So if I understand this correctly, I block the IP on all my clients. When those tasks are complete, I close that instance of BOINC, start "BOINC2", allow the IP to get new tasks, and then block it again?


As mentioned, suspending network activity is easier. A program like BoincTasks can manage multiple local/remote clients from one PC, so there is no need to shut down one client to start another.

The Manager and the client are separate. Multiple clients can run at once with separate data directories and gui_rpc_port IDs.
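On Linux, a second client instance can be sketched like this; the directory and port are examples (the default RPC port is 31416), and --allow_multiple_clients is what lets two clients coexist on one box.

```shell
# Start a second client with its own data directory and RPC port.
mkdir -p ~/boinc2
boinc --dir ~/boinc2 --gui_rpc_port 31418 --allow_multiple_clients --daemon

# Point boinccmd (or BoincTasks / the Manager) at the second instance:
boinccmd --host localhost:31418 --get_state
```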


----------



## franz

mmonnin said:


> As mentioned, suspending network activity is easier. A program like BoincTasks can manage multiple local/remote clients from one PC, so there is no need to shut down one client to start another.
> 
> The Manager and the client are separate. Multiple clients can run at once with separate data directories and gui_rpc_port IDs.


Yeah, I'm still playing around with BoincTasks too. I tried it earlier to connect to the Ubuntu rigs, but did something wrong and it froze my clients (probably got the password wrong). I will go back and give it another shot. I'm still working out in the real world and not from home, so I have had little time to get things going. Like I mentioned in my last post, I will have this all set up and working smoothly for next year.


----------



## tictoc

It can be a bit cumbersome to get everything set up and running smoothly. Every year I think I'll be all ready to go for the Pent, but invariably I am scrambling to get things set up the day the Pent begins.


----------



## Jpmboy

We need moar Rosetta!


----------



## neyel8r

yep i've been hammering Ibercivis & got Rosetta going on the phone :thumb:


----------



## Jpmboy

neyel8r said:


> yep i've been hammering Ibercivis & got Rosetta going on the phone :thumb:


final push! :thumb:


----------



## tictoc

I am Sprinting to the finish. :sonic:


----------



## tictoc

tictoc said:


> I am Sprinting to the finish. :sonic:


My Sprint only lasted about 5 hours, and then my internet was down until about 10 minutes ago. 

Thanks to everyone that crunched in this year's Pentathlon. :applaud::cheers::applaud: :drunken: 

I'll be drawing prizes later tonight, so keep an eye on your inboxes for a PM.


----------



## Starbomba

Well, the BOINC gods requested a sacrifice. Thank goodness my UPS offered itself as tribute. But I had to take all my machines down, as the lightning storms continued and I refuse to plug my machines directly into the wall.
Despite everything, this wasn't half bad. Maybe next year we can all have more and better hardware. I'll be sure to prepare during this year, instead of waiting until March.


----------



## Jpmboy

Starbomba said:


> Well, the BOINC gods requested a sacrifice. Thank goodness my UPS offered itself as tribute. But I had to take all my machines down, as the lightning storms continued and I refuse to plug my machines directly into the wall.
> Despite everything, this wasn't half bad. *Maybe next year we can all have more and better hardware.* I'll be sure to prepare during this year, instead of waiting until March.


or just more participants! :thumb:


----------



## Finrond

OH. MY. GOD. I do not even begin to comprehend how I missed the pentathlon this year. Wow. I am so sorry guys :-(

With all the stuff going on I guess I just neglected to check these forums for a while.

Would have been nice to test out my 3700x


----------

