Guide to running multiple GPU work units (2024)

What is this?
ATI/AMD users will notice that by default their GPU isn't loaded to 100%, and NVIDIA users may have GPUs capable of more compute work. This guide lets you manually set how many work units you want to crunch simultaneously on your GPU.

How does it work?
You create a file called app_info.xml inside the project folder of your BOINC data folder (normally: C:\ProgramData\BOINC\Data\projects\www.worldcommunitygrid.org). The contents of this file determine:
- How many GPU WUs to run at a time
- Which WCG projects to run

Interested? Where do I start?
The basic steps to this are:

ThE_MaD_ShOt said:

1: Create a new profile on the WCG site with HCC only and make it the default.
2: Attach the rig you want to crunch the app_info with to that profile.
3: Uninstall BOINC/WCG.
4: Delete the folder under ProgramData.
5: Reboot.
6: Install the WCG client from the WCG site.
7: Reboot.
8: Attach to the project and immediately set it to "no new tasks".
9: Shut down the client.
10: Add your app_info file
(Default Windows 7: C:\ProgramData\BOINC\Data\projects\www.worldcommunitygrid.org)
(Default Windows XP: C:\Documents and Settings\All Users\Application Data\BOINC\Data\projects\www.worldcommunitygrid.org)
11: Restart the client.
12: Allow new tasks.
13: Reboot, just because.


GPU User settings
(Please modify these to suit your system. If in doubt, ask.)

Replace the number in <count>.5</count> according to how many GPU WUs you want to run at the same time:

  • .5 for 2 GPU work units
  • .33 for 3 GPU work units
  • .25 for 4 GPU work units

1 / desired GPU WU total = count
(example: 1 / 4 GPU WUs = 0.25 coprocessor count)

This applies to both single and multiple GPU setups.
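The count arithmetic above can be sketched in a few lines of Python (the function name is just for illustration, not part of BOINC):

```python
# Sketch: <count> = 1 / desired number of simultaneous GPU work units.

def coproc_count(wu_total):
    """Return the <count> value for running wu_total GPU WUs at once."""
    return 1.0 / wu_total

for wu in (2, 3, 4):
    print(f"{wu} GPU WUs -> <count>{coproc_count(wu):.2f}</count>")
```

BOINC treats the value as the fraction of a GPU each task reserves, which is why it is the reciprocal of the WU count.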

CPU User settings
If you want to run more GPU work units than you have CPU cores, change the <avg_ncpus>1.0</avg_ncpus> line to specify how much of a CPU thread each work unit uses on average:

Total CPU threads / total GPU WUs = avg_ncpus
(example: dual-core CPU / 4 GPU WUs = 0.5 avg_ncpus)

This applies to both single and multiple GPU setups.
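The avg_ncpus formula is equally simple; a quick sketch (again, the function name is just illustrative):

```python
# Sketch: <avg_ncpus> = total CPU threads / total GPU work units.

def avg_ncpus(cpu_threads, gpu_wu_total):
    """Fraction of a CPU thread each GPU WU reserves on average."""
    return cpu_threads / gpu_wu_total

# The guide's example: a dual-core CPU running 4 GPU WUs.
print(avg_ncpus(2, 4))  # 0.5
```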

Multiple card setup

If you're using mixed cards, by default BOINC uses only the best one, and in some cases it may not use all your GPUs even if they're identical. To use more than one GPU in the same machine, go to the BOINC data folder (normally: C:\ProgramData\BOINC) and look for the file "cc_config.xml". If it doesn't exist, create it; its contents should include the following:
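A minimal cc_config.xml for this uses the standard BOINC option use_all_gpus (restart the client after saving):

```xml
<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```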


Templates
(Current HCC version: 7.05)
If you want to crunch all projects, you can use this thread for reference information: http://www.xtremesystems.org/forums/showthread.php?283509-Working-app_info-files.
The templates below are examples in use by Norton:

ATI/AMD GPU ONLY (No CPU work)

Code:

<app_info>
  <app>
    <name>hcc1</name>
    <user_friendly_name>Help Conquer Cancer</user_friendly_name>
  </app>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</name>
    <executable/>
  </file_info>
  <file_info>
    <name>hcckernel.cl.7.05</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>ati_hcc1</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
      <type>ATI</type>
      <count>.5</count>
    </coproc>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>hcckernel.cl.7.05</file_name>
      <open_name>hcckernel.cl</open_name>
    </file_ref>
  </app_version>
</app_info>

ATI/AMD GPU and CPU HCC (HCC GPU and HCC CPU only)

Code:

<app_info>
  <app>
    <name>hcc1</name>
    <user_friendly_name>Help Conquer Cancer</user_friendly_name>
  </app>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</name>
    <executable/>
  </file_info>
  <file_info>
    <name>hcckernel.cl.7.05</name>
    <executable/>
  </file_info>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>ati_hcc1</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
      <type>ATI</type>
      <count>.5</count>
    </coproc>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>hcckernel.cl.7.05</file_name>
      <open_name>hcckernel.cl</open_name>
    </file_ref>
  </app_version>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>1.000000</avg_ncpus>
    <max_ncpus>1.000000</max_ncpus>
    <api_version>6.13.0</api_version>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

ATI/AMD GPU and GFAM (HCC GPU and GFAM CPU only)

Code:

<app_info>
  <app>
    <name>hcc1</name>
    <user_friendly_name>Help Conquer Cancer</user_friendly_name>
  </app>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</name>
    <executable/>
  </file_info>
  <file_info>
    <name>hcckernel.cl.7.05</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>ati_hcc1</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
      <type>ATI</type>
      <count>.33</count>
    </coproc>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86__ati_hcc1</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>hcckernel.cl.7.05</file_name>
      <open_name>hcckernel.cl</open_name>
    </file_ref>
  </app_version>
  <app>
    <name>gfam</name>
    <user_friendly_name>GO Fight Against Malaria</user_friendly_name>
  </app>
  <file_info>
    <name>wcgrid_gfam_vina_6.12_windows_x86_64</name>
    <executable/>
  </file_info>
  <file_info>
    <name>wcgrid_gfam_vina_prod_x86_64.exe.6.12</name>
    <executable/>
  </file_info>
  <file_info>
    <name>wcgrid_gfam_gfx_prod_x86_64.exe.6.12</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>gfam</app_name>
    <version_num>612</version_num>
    <platform>windows_x86_64</platform>
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <flops>3347548492.458962</flops>
    <api_version>7.1.0</api_version>
    <file_ref>
      <file_name>wcgrid_gfam_vina_6.12_windows_x86_64</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>wcgrid_gfam_vina_prod_x86_64.exe.6.12</file_name>
      <open_name>AutoDockVina64.exe</open_name>
    </file_ref>
    <file_ref>
      <file_name>wcgrid_gfam_gfx_prod_x86_64.exe.6.12</file_name>
      <open_name>graphics_app</open_name>
    </file_ref>
  </app_version>
  <app_version>
    <app_name>gfam</app_name>
    <version_num>612</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>1.000000</avg_ncpus>
    <max_ncpus>1.000000</max_ncpus>
    <flops>3347548492.458962</flops>
    <api_version>7.1.0</api_version>
    <file_ref>
      <file_name>wcgrid_gfam_vina_6.12_windows_x86_64</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>wcgrid_gfam_vina_prod_x86_64.exe.6.12</file_name>
      <open_name>AutoDockVina64.exe</open_name>
    </file_ref>
    <file_ref>
      <file_name>wcgrid_gfam_gfx_prod_x86_64.exe.6.12</file_name>
      <open_name>graphics_app</open_name>
    </file_ref>
  </app_version>
</app_info>

NVIDIA GPU Only (No CPU work)

Code:

<app_info>
  <app>
    <name>hcc1</name>
    <user_friendly_name>Help Conquer Cancer</user_friendly_name>
  </app>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86__nvidia_hcc1</name>
    <executable/>
  </file_info>
  <file_info>
    <name>hcckernel.cl.7.05</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>nvidia_hcc1</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
      <type>CUDA</type>
      <count>.5</count>
    </coproc>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86__nvidia_hcc1</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>hcckernel.cl.7.05</file_name>
      <open_name>hcckernel.cl</open_name>
    </file_ref>
  </app_version>
</app_info>

NVIDIA GPU and CPU HCC (Both GPU and CPU Work units of HCC only)

Code:

<app_info>
  <app>
    <name>hcc1</name>
    <user_friendly_name>Help Conquer Cancer</user_friendly_name>
  </app>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86__nvidia_hcc1</name>
    <executable/>
  </file_info>
  <file_info>
    <name>hcckernel.cl.7.05</name>
    <executable/>
  </file_info>
  <file_info>
    <name>wcg_hcc1_img_7.05_windows_intelx86</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <plan_class>nvidia_hcc1</plan_class>
    <avg_ncpus>1.0</avg_ncpus>
    <max_ncpus>1.0</max_ncpus>
    <coproc>
      <type>CUDA</type>
      <count>.5</count>
    </coproc>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86__nvidia_hcc1</file_name>
      <main_program/>
    </file_ref>
    <file_ref>
      <file_name>hcckernel.cl.7.05</file_name>
      <open_name>hcckernel.cl</open_name>
    </file_ref>
  </app_version>
  <app_version>
    <app_name>hcc1</app_name>
    <version_num>705</version_num>
    <platform>windows_intelx86</platform>
    <avg_ncpus>1.000000</avg_ncpus>
    <max_ncpus>1.000000</max_ncpus>
    <file_ref>
      <file_name>wcg_hcc1_img_7.05_windows_intelx86</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

Troubleshooting
Alternate method: If this setup doesn't work, another approach (running multiple BOINC clients) is described here: http://www.xtremesystems.org/forums/showthread.php?283512-How-To-run-multiple-BIONC-clients-on-one-machine-not-an-app_info-method

Driver crashes:
If you are having driver crashes, the following registry modification might prevent them:
(Source: Bun-Bun from XS)

Code:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Watchdog]
"DisableBugCheck"="1"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Watchdog\Display]
"EaRecovery"="0"

How many WU can my GPU handle?
This is hard to say. I am currently running 2 on my HD 7770s. People with overclocked 6970s can run up to 6! Again, treat this as an experiment and start low.

Other useful tips:

ThE_MaD_ShOt said:

[Edited] Just be careful loading up the WUs. You only want to load the GPU to around 95%. If you load it too much, you will start erroring out the WUs. Also make sure you have good case airflow, as the card is going to be at a steady 95% or so. And as KieX stated, use at your own risk.


ThE_MaD_ShOt said:

If you receive this error under Messages:

10/21/2012 10:19:43 PM | World Community Grid | [error] App version returned from anonymous platform project; ignoring

Simply ignore it.

If you receive this error:

10/21/2012 10:19:43 PM | World Community Grid | [error] No application found for task: windows_intelx86 640 ; discarding

It indicates that you have misspelled the application name in app_info.xml, or that the application files are missing from the BOINC\Data\projects\www.worldcommunitygrid.org folder. If that's the case, you can hit 'Reset project' under 'Projects' in the BOINC Manager.


