<!--startmeta
custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/go/collectors/go.d.plugin/modules/nvidia_smi/README.md"
meta_yaml: "https://github.com/netdata/netdata/edit/master/src/go/collectors/go.d.plugin/modules/nvidia_smi/metadata.yaml"
sidebar_label: "Nvidia GPU"
learn_status: "Published"
learn_rel_path: "Collecting Metrics/Hardware Devices and Sensors"
most_popular: False
message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
endmeta-->

# Nvidia GPU


<img src="https://netdata.cloud/img/nvidia.svg" width="150"/>


Plugin: go.d.plugin
Module: nvidia_smi

<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />

## Overview

This collector monitors GPU performance metrics using
the [nvidia-smi](https://developer.nvidia.com/nvidia-system-management-interface) CLI tool.

> **Warning**: under development, [loop mode](https://github.com/netdata/netdata/issues/14522) not implemented yet.




This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.


### Default Behavior

#### Auto-Detection

This integration doesn't support auto-detection.

#### Limits

The default configuration for this integration does not impose any limits on data collection.

#### Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.


## Metrics

Metrics grouped by *scope*.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.



### Per gpu

These metrics refer to the GPU.

Labels:

| Label      | Description     |
|:-----------|:----------------|
| uuid | GPU id (e.g. 00000000:00:04.0) |
| product_name | GPU product name (e.g. NVIDIA A100-SXM4-40GB) |

Metrics:

| Metric | Dimensions | Unit | XML | CSV |
|:------|:----------|:----|:---:|:---:|
| nvidia_smi.gpu_pcie_bandwidth_usage | rx, tx | B/s | • |   |
| nvidia_smi.gpu_pcie_bandwidth_utilization | rx, tx | % | • |   |
| nvidia_smi.gpu_fan_speed_perc | fan_speed | % | • | • |
| nvidia_smi.gpu_utilization | gpu | % | • | • |
| nvidia_smi.gpu_memory_utilization | memory | % | • | • |
| nvidia_smi.gpu_decoder_utilization | decoder | % | • |   |
| nvidia_smi.gpu_encoder_utilization | encoder | % | • |   |
| nvidia_smi.gpu_frame_buffer_memory_usage | free, used, reserved | B | • | • |
| nvidia_smi.gpu_bar1_memory_usage | free, used | B | • |   |
| nvidia_smi.gpu_temperature | temperature | Celsius | • | • |
| nvidia_smi.gpu_voltage | voltage | V | • |   |
| nvidia_smi.gpu_clock_freq | graphics, video, sm, mem | MHz | • | • |
| nvidia_smi.gpu_power_draw | power_draw | Watts | • | • |
| nvidia_smi.gpu_performance_state | P0-P15 | state | • | • |
| nvidia_smi.gpu_mig_mode_current_status | enabled, disabled | status | • |   |
| nvidia_smi.gpu_mig_devices_count | mig | devices | • |   |

### Per mig

These metrics refer to the Multi-Instance GPU (MIG).

Labels:

| Label      | Description     |
|:-----------|:----------------|
| uuid | GPU id (e.g. 00000000:00:04.0) |
| product_name | GPU product name (e.g. NVIDIA A100-SXM4-40GB) |
| gpu_instance_id | GPU instance id (e.g. 1) |

Metrics:

| Metric | Dimensions | Unit | XML | CSV |
|:------|:----------|:----|:---:|:---:|
| nvidia_smi.gpu_mig_frame_buffer_memory_usage | free, used, reserved | B | • |   |
| nvidia_smi.gpu_mig_bar1_memory_usage | free, used | B | • |   |



## Alerts

There are no alerts configured by default for this integration.


## Setup

### Prerequisites

#### Enable in go.d.conf

This collector is disabled by default. You need to explicitly enable it in the `go.d.conf` file.
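
A minimal sketch of the relevant `go.d.conf` entry (the file lives in the Netdata config directory and can be opened with the same `edit-config` script described below):

```yaml
# go.d.conf
modules:
  nvidia_smi: yes
```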



### Configuration

#### File

The configuration file name for this integration is `go.d/nvidia_smi.conf`.


You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).

```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config go.d/nvidia_smi.conf
```

#### Options

The following options can be defined globally: update_every, autodetection_retry.


<details><summary>Config options</summary>

| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update_every | Data collection frequency. | 10 | no |
| autodetection_retry | Recheck interval in seconds. Zero means no recheck will be scheduled. | 0 | no |
| binary_path | Path to the nvidia_smi binary. The default is "nvidia_smi", and the executable is searched for in the directories listed in the PATH environment variable. | nvidia_smi | no |
| timeout | nvidia_smi binary execution timeout, in seconds. | 2 | no |
| use_csv_format | Format used when requesting GPU information. XML is used if set to 'no'. | no | no |

</details>
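
A sketch combining the documented options in a single job (all values shown are the defaults from the table above, included only for illustration):

```yaml
jobs:
  - name: nvidia_smi
    update_every: 10          # data collection frequency, in seconds
    timeout: 2                # nvidia_smi execution timeout, in seconds
    binary_path: nvidia_smi   # resolved via the PATH environment variable
    use_csv_format: no        # request XML output (set to yes for CSV)
```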

#### Examples

##### CSV format

Use CSV format when requesting GPU information.

<details><summary>Config</summary>

```yaml
jobs:
  - name: nvidia_smi
    use_csv_format: yes

```
</details>

##### Custom binary path

The executable is not in the directories specified in the PATH environment variable.

<details><summary>Config</summary>

```yaml
jobs:
  - name: nvidia_smi
    binary_path: /usr/local/sbin/nvidia_smi

```
</details>



## Troubleshooting

### Debug Mode

To troubleshoot issues with the `nvidia_smi` collector, run the `go.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.

- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
  your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

  ```bash
  cd /usr/libexec/netdata/plugins.d/
  ```

- Switch to the `netdata` user.

  ```bash
  sudo -u netdata -s
  ```

- Run the `go.d.plugin` to debug the collector:

  ```bash
  ./go.d.plugin -d -m nvidia_smi
  ```
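
- If the debug output shows execution errors, it may also help to confirm that `nvidia-smi` itself responds for the `netdata` user, since the collector only parses its output (XML by default, CSV when `use_csv_format` is enabled):

  ```bash
  # Query full GPU information as XML, the format the collector requests by default.
  nvidia-smi -x -q
  ```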