X Config Options
The following driver options are supported by the NVIDIA X driver. They may be specified either in the Screen or Device sections of the X config file.
Option "NvAGP" "integer"
Configure AGP support. The integer argument can be one of:
|0||disable AGP|
|1||use NVIDIA internal AGP support, if possible|
|2||use AGPGART, if possible|
|3||use any AGP support (try AGPGART, then NVIDIA AGP)|
Note that NVIDIA internal AGP support cannot work if AGPGART is either statically compiled into your kernel or is built as a module and loaded into your kernel. See Chapter 12, Configuring AGP for details. Default: 3.
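For example, to request the NVIDIA internal AGP support only:
Option "NvAGP" "1"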
Option "NoLogo" "boolean"
Disable drawing of the NVIDIA logo splash screen at X startup. Default: the logo is drawn for screens with depth 24.
Option "LogoPath" "string"
Sets the path to the PNG file to be used as the logo splash screen at X startup. If the PNG file specified has a bKGD (background color) chunk, then the screen is cleared to the color it specifies. Otherwise, the screen is cleared to black. The logo file must be owned by root and must not be writable by a non-root group. Note that a logo is only displayed for screens with depth 24. Default: The built-in NVIDIA logo is used.
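For example (the path shown is only illustrative):
Option "LogoPath" "/usr/share/pixmaps/custom-logo.png"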
Option "RenderAccel" "boolean"
Enable or disable hardware acceleration of the RENDER extension. Default: hardware acceleration of the RENDER extension is enabled.
Option "NoRenderExtension" "boolean"
Disable the RENDER extension. The X server offers no way to disable RENDER short of recompiling it, so the driver exports this option instead. This is useful in depth 8, where RENDER would otherwise steal most of the default colormap. Default: RENDER is offered when possible.
Option "UBB" "boolean"
Enable or disable the Unified Back Buffer on Quadro-based GPUs (Quadro4 NVS excluded); see Chapter 20, Configuring Flipping and UBB for a description of UBB. This option has no effect on non-Quadro GPU products. Default: UBB is on for Quadro GPUs.
Option "NoFlip" "boolean"
Disable OpenGL flipping; see Chapter 20, Configuring Flipping and UBB for a description. Default: OpenGL will swap by flipping when possible.
Option "Dac8Bit" "boolean"
Most Quadro products by default use a 10-bit color look-up table (LUT); setting this option to TRUE forces these GPUs to use an 8-bit LUT. Default: a 10-bit LUT is used, when available.
Option "Overlay" "boolean"
Enables RGB workstation overlay visuals. This is only supported on Quadro GPUs (Quadro NVS GPUs excluded) in depth 24. This option causes the server to advertise the SERVER_OVERLAY_VISUALS root window property, and GLX will report single- and double-buffered, Z-buffered 16-bit overlay visuals. The transparency key is pixel 0x0000 (hex). There is no gamma correction support in the overlay plane. This feature requires XFree86 version 4.2.0 or newer, or the X.Org X server. When the X screen is either wider than 2046 pixels or taller than 2047 pixels, the overlay may be emulated with a substantial performance penalty. RGB workstation overlays are not supported when the Composite extension is enabled.
UBB must be enabled when overlays are enabled (this is the default behavior).
Option "CIOverlay" "boolean"
Enables Color Index workstation overlay visuals with identical restrictions to Option "Overlay" above. This option causes the server to advertise the SERVER_OVERLAY_VISUALS root window property. Some of the visuals advertised that way may be listed in the main plane (layer 0) for compatibility purposes. They however belong to the overlay (layer 1). The server will offer visuals both with and without a transparency key. These are depth 8 PseudoColor visuals. Enabling Color Index overlays on X servers older than XFree86 4.3 will force the RENDER extension to be disabled due to bugs in the RENDER extension in older X servers. Color Index workstation overlays are not supported when the Composite extension is enabled. Default: off.
UBB must be enabled when overlays are enabled (this is the default behavior).
Option "TransparentIndex" "integer"
When color index overlays are enabled, use this option to choose which pixel is used for the transparent pixel in visuals featuring transparent pixels. This value is clamped between 0 and 255 (Note: some applications such as Alias's Maya require this to be zero in order to work correctly). Default: 0.
Option "OverlayDefaultVisual" "boolean"
When overlays are used, this option sets the default visual to an overlay visual thereby putting the root window in the overlay. This option is not recommended for RGB overlays. Default: off.
Option "EmulatedOverlaysTimerMs" "integer"
Enables the use of a timer within the X server to perform the updates to the emulated overlay or CI overlay. This option can be used to improve the performance of the emulated or CI overlays by reducing the frequency of the updates. The value specified indicates the desired number of milliseconds between overlay updates. To disable the use of the timer either leave the option unset or set it to 0. Default: off.
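For example, to update the emulated overlay at most once every 20 milliseconds:
Option "EmulatedOverlaysTimerMs" "20"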
Option "EmulatedOverlaysThreshold" "boolean"
Enables the use of a threshold within the X server to perform the updates to the emulated overlay or CI overlay. The emulated or CI overlay updates can be deferred but this threshold will limit the number of deferred OpenGL updates allowed before the overlay is updated. This option can be used to trade off performance and animation quality. Default: on.
Option "EmulatedOverlaysThresholdValue" "integer"
Controls the threshold used in updating the emulated or CI overlays. This is used in conjunction with the EmulatedOverlaysThreshold option to trade off performance and animation quality. Higher values for this option favor performance over quality. Setting low values of this option will not cause the overlay to be updated more often than the frequency specified by the EmulatedOverlaysTimerMs option. Default: 5.
Option "RandRRotation" "boolean"
Enable rotation support for the XRandR extension. This allows use of the XRandR X server extension for configuring the screen orientation through rotation. This feature is supported using depth 24. This requires an X.Org X 6.8.1 or newer X server. This feature does not work with hardware overlays; emulated overlays will be used instead at a substantial performance penalty. See Chapter 17, Using the XRandR Extension for details. Default: off.
Option "Rotate" "string"
Enable static rotation support. Unlike the RandRRotation option above, this option takes effect as soon as the X server is started and will work with older versions of X. This feature is supported using depth 24. This feature does not work with hardware overlays; emulated overlays will be used instead at a substantial performance penalty. This option is not compatible with the RandR extension. Valid rotations are "normal", "left", "inverted", and "right". Default: off.
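For example, to rotate the screen counterclockwise:
Option "Rotate" "left"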
Option "SWCursor" "boolean"
Enable or disable software rendering of the X cursor. Default: off.
Option "HWCursor" "boolean"
Enable or disable hardware rendering of the X cursor. Default: on.
Option "CursorShadow" "boolean"
Enable or disable use of a shadow with the hardware accelerated cursor; this is a black translucent replica of your cursor shape at a given offset from the real cursor. Default: off (no cursor shadow).
Option "CursorShadowAlpha" "integer"
The alpha value to use for the cursor shadow; only applicable if CursorShadow is enabled. This value must be in the range [0, 255] -- 0 is completely transparent; 255 is completely opaque. Default: 64.
Option "CursorShadowXOffset" "integer"
The offset, in pixels, that the shadow image will be shifted to the right from the real cursor image; only applicable if CursorShadow is enabled. This value must be in the range [0, 32]. Default: 4.
Option "CursorShadowYOffset" "integer"
The offset, in pixels, that the shadow image will be shifted down from the real cursor image; only applicable if CursorShadow is enabled. This value must be in the range [0, 32]. Default: 2.
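For example, the following enables a half-opaque shadow offset a few pixels right of and below the cursor (the values are only illustrative, within the documented ranges):
Option "CursorShadow" "on"
Option "CursorShadowAlpha" "128"
Option "CursorShadowXOffset" "6"
Option "CursorShadowYOffset" "4"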
Option "ConnectedMonitor" "string"
Allows you to override what the NVIDIA kernel module detects is connected to your graphics card. This may be useful, for example, if you use a KVM (keyboard, video, mouse) switch and you are switched away when X is started. In such a situation, the NVIDIA kernel module cannot detect which display devices are connected, and the NVIDIA X driver assumes you have a single CRT.
Valid values for this option are "CRT" (cathode ray tube), "DFP" (digital flat panel), or "TV" (television); if using TwinView, this option may be a comma-separated list of display devices; e.g.: "CRT, CRT" or "CRT, DFP".
It is generally recommended to not use this option, but instead use the "UseDisplayDevice" option.
NOTE: anything attached to a 15 pin VGA connector is regarded by the driver as a CRT. "DFP" should only be used to refer to digital flat panels connected via a DVI port.
Default: string is NULL (the NVIDIA driver will detect the connected display devices).
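For example, to tell the driver to treat a digital flat panel as connected, even when it cannot be detected:
Option "ConnectedMonitor" "DFP"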
Option "UseDisplayDevice" "string"
The "UseDisplayDevice" X configuration option is a list of one or more display devices, which limits the display devices the NVIDIA X driver will consider for an X screen. The display device names used in the option may be either specific (with a numeric suffix; e.g., "DFP-1") or general (without a numeric suffix; e.g., "DFP").
When assigning display devices to X screens, the NVIDIA X driver walks through the list of all (not already assigned) display devices detected as connected. When the "UseDisplayDevice" X configuration option is specified, the X driver will only consider connected display devices which are also included in the "UseDisplayDevice" list. This can be thought of as a "mask" against the connected (and not already assigned) display devices.
Note the subtle difference between this option and the "ConnectedMonitor" option: the "ConnectedMonitor" option overrides which display devices are actually detected, while the "UseDisplayDevice" option controls which of the detected display devices will be used on this X screen.
Of the list of display devices considered for this X screen (either all connected display devices, or a subset limited by the "UseDisplayDevice" option), the NVIDIA X driver first looks at CRTs, then at DFPs, and finally at TVs. For example, if both a CRT and a DFP are connected, by default the X driver would assign the CRT to this X screen. However, by specifying:
Option "UseDisplayDevice" "DFP"
the X screen would use the DFP instead. Or, if CRT-0, DFP-0, and DFP-1 are connected and TwinView is enabled, the X driver would assign CRT-0 and DFP-0 to the X screen. However, by specifying:
Option "UseDisplayDevice" "CRT-0, DFP-1"
the X screen would use CRT-0 and DFP-1 instead.
Additionally, the special value "none" can be specified for the "UseDisplayDevice" option. When this value is given, any programming of the display hardware is disabled. The NVIDIA driver will not perform any mode validation or mode setting for this X screen. This is intended for use in conjunction with CUDA or in remote graphics solutions such as VNC or Hewlett Packard's Remote Graphics Software (RGS). This functionality is only available on Quadro and Tesla GPUs.
Note the following restrictions for setting the "UseDisplayDevice" to "none":
OpenGL SyncToVBlank will have no effect.
None of Stereo, Overlay, CIOverlay, or SLI are allowed when "UseDisplayDevice" is set to "none".
Option "UseEdidFreqs" "boolean"
This option controls whether the NVIDIA X driver will use the HorizSync and VertRefresh ranges given in a display device's EDID, if any. When UseEdidFreqs is set to True, EDID-provided range information will override the HorizSync and VertRefresh ranges specified in the Monitor section. If a display device does not provide an EDID, or the EDID does not specify an hsync or vrefresh range, then the X server will default to the HorizSync and VertRefresh ranges specified in the Monitor section of your X config file. These frequency ranges are used when validating modes for your display device.
Default: True (EDID frequencies will be used).
Option "UseEDID" "boolean"
By default, the NVIDIA X driver makes use of a display device's EDID, when available, during construction of its mode pool. The EDID is used as a source for possible modes, for valid frequency ranges, and for collecting data on the physical dimensions of the display device for computing the DPI (see Appendix E, Dots Per Inch). However, if you wish to disable the driver's use of the EDID, you can set this option to False:
Option "UseEDID" "FALSE"
Note that, rather than globally disable all uses of the EDID, you can individually disable each particular use of the EDID; e.g.,
Option "UseEDIDFreqs" "FALSE" Option "UseEDIDDpi" "FALSE" Option "ModeValidation" "NoEdidModes"
Default: True (use EDID).
Option "UseInt10Module" "boolean"
Enable use of the X Int10 module to soft-boot all secondary cards, rather than POSTing the cards through the NVIDIA kernel module. Default: off (POSTing is done through the NVIDIA kernel module).
Option "TwinView" "boolean"
Enable or disable TwinView. See Chapter 13, Configuring TwinView for details. Default: off (TwinView is disabled).
Option "TwinViewOrientation" "string"
Controls the relationship between the two display devices when using TwinView. Takes one of the following values: "RightOf", "LeftOf", "Above", "Below", or "Clone". See Chapter 13, Configuring TwinView for details. Default: string is NULL.
Option "SecondMonitorHorizSync" "range(s)"
This option is like the HorizSync entry in the Monitor section, but is for the second monitor when using TwinView. See Chapter 13, Configuring TwinView for details. Default: none.
Option "SecondMonitorVertRefresh" "range(s)"
This option is like the VertRefresh entry in the Monitor section, but is for the second monitor when using TwinView. See Chapter 13, Configuring TwinView for details. Default: none.
Option "MetaModes" "string"
This option describes the combination of modes to use on each monitor when using TwinView. See Chapter 13, Configuring TwinView for details. Default: string is NULL.
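For example, a minimal TwinView configuration combining the options above (the sync ranges and mode names are only illustrative; substitute values appropriate for your monitors):
Option "TwinView" "on"
Option "TwinViewOrientation" "RightOf"
Option "SecondMonitorHorizSync" "30-50"
Option "SecondMonitorVertRefresh" "60"
Option "MetaModes" "1280x1024,1280x1024; 1024x768,1024x768"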
Option "NoTwinViewXineramaInfo" "boolean"
When in TwinView, the NVIDIA X driver normally provides a Xinerama extension that X clients (such as window managers) can use to discover the current TwinView configuration, such as where each display device is positioned within the X screen. Some window managers get confused by this information, so this option is provided to disable this behavior. Default: false (TwinView Xinerama information is provided).
Due to bugs in some older software, TwinView Xinerama information is not provided by default on X.Org 7.1 and older when the X server is started with only one display device connected.
Option "TwinViewXineramaInfoOrder" "string"
When the NVIDIA X driver provides TwinViewXineramaInfo (see the NoTwinViewXineramaInfo X config option), it by default reports the currently enabled display devices in the order "CRT, DFP, TV". The TwinViewXineramaInfoOrder X config option can be used to override this order.
The option string is a comma-separated list of display device names. The display device names can either be general (e.g., "CRT", which identifies all CRTs), or specific (e.g., "CRT-1", which identifies a particular CRT). Not all display devices need to be identified in the option string; display devices that are not listed will be implicitly appended to the end of the list, in their default order.
Note that TwinViewXineramaInfoOrder tracks all display devices that could possibly be connected to the GPU, not just the ones that are currently enabled. When reporting the Xinerama information, the NVIDIA X driver walks through the display devices in the order specified, only reporting enabled display devices.
"DFP" "TV, DFP" "DFP-1, DFP-0, TV, CRT"
In the first example, any enabled DFPs would be reported first (any enabled CRTs or TVs would be reported afterwards). In the second example, any enabled TVs would be reported first, then any enabled DFPs (any enabled CRTs would be reported last). In the last example, if DFP-1 were enabled, it would be reported first, then DFP-0, then any enabled TVs, and then any enabled CRTs; finally, any other enabled DFPs would be reported.
Default: "CRT, DFP, TV"
Option "TwinViewXineramaInfoOverride" "string"
This option overrides the values reported by NVIDIA's TwinView Xinerama implementation. This disregards the actual display devices used by the X screen and any order specified in TwinViewXineramaInfoOrder.
The option string is interpreted as a comma-separated list of regions, specified as '[width]x[height]+[x-offset]+[y-offset]'. The regions' sizes and offsets are not validated against the X screen size, but are directly reported to any Xinerama client.
"1600x1200+0+0, 1600x1200+1600+0" "1024x768+0+0, 1024x768+1024+0, 1024x768+0+768, 1024x768+1024+768"
Option "TVStandard" "string"
See Chapter 16, Configuring TV-Out for details on configuring TV-out.
Option "TVOutFormat" "string"
See Chapter 16, Configuring TV-Out for details on configuring TV-out.
Option "TVOverScan" "Decimal value in the range 0.0 to 1.0"
Valid values are in the range 0.0 through 1.0; See Chapter 16, Configuring TV-Out for details on configuring TV-out.
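For example, to overscan the TV image by a small amount:
Option "TVOverScan" "0.1"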
Option "Stereo" "integer"
Enable offering of quad-buffered stereo visuals on Quadro. Integer indicates the type of stereo equipment being used:
|1||DDC glasses. The sync signal is sent to the glasses via the DDC signal to the monitor. These usually involve a passthrough cable between the monitor and the graphics card. This mode is not available on G8xGL and higher GPUs.|
|2||"Blueline" glasses. These usually involve a passthrough cable between the monitor and graphics card. The glasses know which eye to display based on the length of a blue line visible at the bottom of the screen. When in this mode, the root window dimensions are one pixel shorter in the Y dimension than requested. This mode does not work with virtual root window sizes larger than the visible root window size (desktop panning). This mode is not available on G8xGL and higher GPUs.|
|3||Onboard stereo support. This is usually only found on professional cards. The glasses connect via a DIN connector on the back of the graphics card.|
|4||TwinView clone mode stereo (also known as "passive" stereo). On graphics cards that support TwinView, the left eye is displayed on the first display, and the right eye is displayed on the second display. This is normally used in conjunction with special projectors to produce 2 polarized images which are then viewed with polarized glasses. To use this stereo mode, you must also configure TwinView in clone mode with the same resolution, panning offset, and panning domains on each display.|
|5||Vertical interlaced stereo mode, for use with SeeReal Stereo Digital Flat Panels.|
|6||Color interleaved stereo mode, for use with Sharp3D Stereo Digital Flat Panels.|
|7||Horizontal interlaced stereo mode, for use with Arisawa, Hyundai, Zalman, Pavione, and Miracube Digital Flat Panels.|
|8||Checkerboard pattern stereo mode, for use with 3D DLP Display Devices.|
|9||Inverse checkerboard pattern stereo mode, for use with 3D DLP Display Devices.|
|10||NVIDIA 3D Vision mode for use with NVIDIA 3D Vision glasses. The NVIDIA 3D Vision infrared emitter must be connected to a USB port of your computer, and to the 3-pin DIN connector of a Quadro graphics board (based on G8xGL or higher GPU) before starting the X server. Hot-plugging the USB infrared stereo emitter is not yet supported. Also, 3D Vision Stereo Linux support requires a Linux kernel built with USB device filesystem (usbfs) and USB 2.0 support. Not presently supported on FreeBSD or Solaris.|
Stereo is only available on Quadro cards. Stereo options 1, 2, 3 and 10 (also known as "active" stereo) may be used with TwinView if all modes within each MetaMode have identical timing values. See Chapter 19, Programming Modes for suggestions on making sure the modes within your MetaModes are identical. The identical ModeLine requirement is not necessary for Stereo options 4 through 9 ("passive" stereo). Default: 0 (Stereo is not enabled).
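For example, to enable onboard (DIN connector) stereo:
Option "Stereo" "3"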
The following table summarizes the available stereo modes, their supported GPUs, and their intended display devices.
|Stereo mode (value)||Graphics card supported*||Display supported|
|DDC glasses (1)||Quadro graphics cards with pre-G8xGL GPUs||CRTs with high refresh rate#|
|Blueline glasses (2)||Quadro graphics cards with pre-G8xGL GPUs||CRTs with high refresh rate#|
|Onboard DIN (3)||Quadro graphics cards||Displays with high refresh rate#|
|TwinView clone (4)||Quadro graphics cards||Projectors with polarization|
|Vertical Interlaced (5)||Quadro graphics cards||SeeReal Stereo DFP|
|Color Interleaved (6)||Quadro graphics cards||Sharp3D stereo DFP|
|Horizontal Interlaced (7)||Quadro graphics cards^||Arisawa, Hyundai, Zalman, Pavione, and Miracube|
|Checkerboard Pattern (8)||Quadro graphics cards^||3D DLP display devices|
|Inverse Checkerboard (9)||Quadro graphics cards^||3D DLP display devices|
|NVIDIA 3D Vision (10)||Quadro graphics cards with G8xGL or higher GPU^||Supported 3D Vision ready displays^^|
|* Quadro graphics cards excluding Quadro NVS cards.|
|# High refresh rate means a refresh rate greater than 80Hz.|
UBB must be enabled when stereo is enabled (this is the default behavior).
Stereo options 1, 2, and 3 ("active" stereo) can be enabled on digital display devices (connected via DVI, HDMI, or DisplayPort). However, some digital display devices might not behave as desired with active stereo:
Some digital display devices may not be able to toggle pixel colors quickly enough when flipping between eyes on every vblank.
Some digital display devices may have an optical polarization that interferes with stereo goggles.
Active stereo requires high refresh rates, because a vertical refresh is needed to display each eye. Some digital display devices have a low refresh rate, which will result in flickering when used for active stereo.
Some digital display devices might internally convert from other refresh rates to their native refresh rate (e.g., 60Hz), resulting in incompatible rates between the stereo glasses and stereo displayed on screen.
Stereo applies to an entire X screen, so it will apply to all display devices on that X screen, whether or not they all support the selected Stereo mode.
Stereo options 7, 8, 9, and 10 are only supported on G8xGL and higher GPUs.
Multi-GPU cards (such as the Quadro FX 4500 X2) provide a single DIN connector for onboard stereo support (option 3) and NVIDIA 3D Vision stereo (option 10), which is tied to the bottommost GPU. In order to synchronize stereo with the other GPU, you must use a G-Sync device (see Chapter 26, Configuring Frame Lock and Genlock for details).
Option "ForceStereoFlipping" "boolean"
Stereo flipping is the process by which left and right eyes are displayed on alternating vertical refreshes. Normally, stereo flipping is only performed when a stereo drawable is visible. This option forces stereo flipping even when no stereo drawables are visible.
This is to be used in conjunction with the "Stereo" option. If "Stereo" is 0, the "ForceStereoFlipping" option has no effect. Otherwise, the "ForceStereoFlipping" option will force the behavior indicated by the "Stereo" option, even if no stereo drawables are visible. This option is useful in a multiple-screen environment in which a stereo application is run on a different screen than the stereo master.
|0||Stereo flipping is not forced. The default behavior as indicated by the "Stereo" option is used.|
|1||Stereo flipping is forced. Stereo is running even if no stereo drawables are visible. The stereo mode depends on the value of the "Stereo" option.|
Default: 0 (Stereo flipping is not forced). Note the caveats regarding active stereo on digital display devices described under the "Stereo" option above.
Option "XineramaStereoFlipping" "boolean"
By default, when using Stereo with Xinerama, all physical X screens having a visible stereo drawable will stereo flip. Use this option to allow only one physical X screen to stereo flip at a time.
This is to be used in conjunction with the "Stereo" and "Xinerama" options. If "Stereo" is 0 or "Xinerama" is 0, the "XineramaStereoFlipping" option has no effect.
If you wish to have all X screens stereo flip all the time, see the "ForceStereoFlipping" option.
|0||Stereo flipping is enabled on one X screen at a time. Stereo is enabled on the first X screen having the stereo drawable.|
|1||Stereo flipping is enabled on all X screens.|
Default: 1 (Stereo flipping is enabled on all X screens).
Option "NoBandWidthTest" "boolean"
As part of mode validation, the X driver tests if a given mode fits within the hardware's memory bandwidth constraints. This option disables this test. Default: false (the memory bandwidth test is performed).
Option "IgnoreDisplayDevices" "string"
This option tells the NVIDIA kernel module to completely ignore the indicated classes of display devices when checking which display devices are connected. You may specify a comma-separated list containing any of "CRT", "DFP", and "TV". For example:
Option "IgnoreDisplayDevices" "DFP, TV"
will cause the NVIDIA driver to not attempt to detect if any digital flat panels or TVs are connected. This option is not normally necessary; however, some video BIOSes contain incorrect information about which display devices may be connected, or which i2c port should be used for detection. These errors can cause long delays in starting X. If you are experiencing such delays, you may be able to avoid this by telling the NVIDIA driver to ignore display devices which you know are not connected. NOTE: anything attached to a 15 pin VGA connector is regarded by the driver as a CRT. "DFP" should only be used to refer to digital flat panels connected via a DVI port.
Option "MultisampleCompatibility" "boolean"
Enable or disable the use of separate front and back multisample buffers. Enabling this will consume more memory but is necessary for correct output when rendering to both the front and back buffers of a multisample or FSAA drawable. This option is necessary for correct operation of SoftImage XSI. Default: false (a single multisample buffer is shared between the front and back buffers).
Option "NoPowerConnectorCheck" "boolean"
The NVIDIA X driver will abort X server initialization if it detects that a GPU that requires an external power connector does not have an external power connector plugged in. This option can be used to bypass this test. Default: false (the power connector test is performed).
Option "XvmcUsesTextures" "boolean"
Forces XvMC to use the 3D engine for XvMCPutSurface requests rather than the video overlay. Default: false (video overlay is used when available).
Option "AllowGLXWithComposite" "boolean"
Enables GLX even when the Composite X extension is loaded. ENABLE AT YOUR OWN RISK. OpenGL applications will not display correctly in many circumstances with this setting enabled.
This option is intended for use on X.Org X servers older than X11R6.9.0. On X11R6.9.0 or newer X servers, the NVIDIA OpenGL implementation interacts properly by default with the Composite X extension and this option should not be needed. However, on X11R6.9.0 or newer X servers, support for GLX with Composite can be disabled by setting this option to False.
Default: false (GLX is disabled when Composite is enabled on X servers older than X11R6.9.0).
Option "UseCompositeWrapper" "boolean"
Enables the X server's "composite wrapper", which performs coordinate translations necessary for the Composite extension.
Default: false (the NVIDIA X driver performs its own coordinate translation).
Option "AddARGBGLXVisuals" "boolean"
Adds a 32-bit ARGB visual for each supported OpenGL configuration. This allows applications to use OpenGL to render with alpha transparency into 32-bit windows and pixmaps. This option requires the Composite extension. Default: ARGB GLX visuals are enabled on X servers new enough to support them when the Composite extension is also enabled and the screen depth is 24 or 30.
Option "DisableGLXRootClipping" "boolean"
If enabled, no clipping will be performed on rendering done by OpenGL in the root window. This option is deprecated. It is needed by older versions of OpenGL-based composite managers that draw the contents of redirected windows directly into the root window using OpenGL. Most OpenGL-based composite managers have been updated to support the Composite Overlay Window, a feature introduced in Xorg release 7.1. Using the Composite Overlay Window is the preferred method for performing OpenGL-based compositing.
Option "DamageEvents" "boolean"
Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited. This will significantly improve performance and interactivity when using GLX applications with a composite manager running. It will also affect applications using GLX when rotation is enabled. This option is currently incompatible with SLI and Multi-GPU modes and will be disabled if either are used. Enabled by default.
Option "ExactModeTimingsDVI" "boolean"
Forces the initialization of the X server with the exact timings specified in the ModeLine. Default: false (for DVI devices, the X server initializes with the closest mode in the EDID list).
Option "Coolbits" "integer"
Enables various unsupported features, such as support for GPU clock manipulation in the NV-CONTROL X extension. This option accepts a bit mask of features to enable.
WARNING: this may cause system damage and void warranties. This utility can run your computer system out of the manufacturer's design specifications, including, but not limited to: higher system voltages, above normal temperatures, excessive frequencies, and changes to BIOS that may corrupt the BIOS. Your computer's operating system may hang and result in data loss or corrupted images. Depending on the manufacturer of your computer system, the computer system, hardware and software warranties may be voided, and you may not receive any further manufacturer support. NVIDIA does not provide customer service support for the Coolbits option. It is for these reasons that absolutely no warranty or guarantee is either express or implied. Before enabling and using, you should determine the suitability of the utility for your intended use, and you shall assume all responsibility in connection therewith.
When "1" (Bit 0) is set in the "Coolbits" option value, the nvidia-settings utility will contain a page labeled "Clock Frequencies" through which clock settings can be manipulated. "Coolbits" is only available on GeForce FX, Quadro FX and newer desktop GPUs. On GeForce FX and newer mobile GPUs, limited clock manipulation support is available when "1" is set in the "Coolbits" option value: clocks can be lowered relative to the default settings; overclocking is not supported due to the thermal constraints of notebook designs.
When "2" (Bit 1) is set in the "Coolbits" option value, the NVIDIA driver will attempt to initialize SLI when using GPUs with different amounts of video memory.
When "4" (Bit 2) is set in the "Coolbits" option value, the nvidia-settings Thermal Monitor page will allow configuration of GPU fan speed, on graphics boards with programmable fan capability.
The default for this option is 0 (unsupported features are disabled).
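For example, to enable both the clock manipulation page (Bit 0) and fan speed configuration (Bit 2), set the option to the sum of the bits, 1 + 4 = 5:
Option "Coolbits" "5"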
Option "MultiGPU" "string"
This option controls the configuration of Multi-GPU rendering in supported configurations.
|0, no, off, false, Single||Use only a single GPU when rendering|
|1, yes, on, true, Auto||Enable Multi-GPU and allow the driver to automatically select the appropriate rendering mode.|
|AFR||Enable Multi-GPU and use the Alternate Frame Rendering mode.|
|SFR||Enable Multi-GPU and use the Split Frame Rendering mode.|
|AA||Enable Multi-GPU and use antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.|
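For example, to enable Multi-GPU rendering with the Alternate Frame Rendering mode:
Option "MultiGPU" "AFR"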
Option "SLI" "string"
This option controls the configuration of SLI rendering in supported configurations.
|0, no, off, false, Single||Use only a single GPU when rendering|
|1, yes, on, true, Auto||Enable SLI and allow the driver to automatically select the appropriate rendering mode.|
|AFR||Enable SLI and use the Alternate Frame Rendering mode.|
|SFR||Enable SLI and use the Split Frame Rendering mode.|
|AA||Enable SLI and use SLI Antialiasing. Use this in conjunction with full scene antialiasing to improve visual quality.|
|AFRofAA||Enable SLI and use SLI Alternate Frame Rendering of Antialiasing mode. Use this in conjunction with full scene antialiasing to improve visual quality. This option is only valid for SLI configurations with 4 GPUs.|
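For example, to enable SLI and let the driver select the appropriate rendering mode:
Option "SLI" "Auto"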
Option "TripleBuffer" "boolean"
Enable or disable the use of triple buffering. If this option is enabled, OpenGL windows that sync to vblank and are double-buffered will be given a third buffer. This decreases the time an application stalls while waiting for vblank events, but increases latency slightly (delay between user input and displayed result).
Option "DPI" "string"
This option specifies the Dots Per Inch for the X screen; for example:
Option "DPI" "75 x 85"
will set the horizontal DPI to 75 and the vertical DPI to 85. By default, the X driver will compute the DPI of the X screen from the EDID of any connected display devices. See Appendix E, Dots Per Inch for details. Default: string is NULL (disabled).
Option "UseEdidDpi" "string"
By default, the NVIDIA X driver computes the DPI of an X screen based on the physical size of the display device, as reported in the EDID, and the size in pixels of the first mode to be used on the display device. If multiple display devices are used by the X screen, then the NVIDIA X driver will choose which display device to use. This option can be used to specify which display device to use. The string argument can be a display device name, such as:
Option "UseEdidDpi" "DFP-0"
or the argument can be "FALSE" to disable use of EDID-based DPI calculations:
Option "UseEdidDpi" "FALSE"
See Appendix E, Dots Per Inch for details. Default: string is NULL (the driver computes the DPI from the EDID of a display device and selects the display device).
Option "ConstantDPI" "boolean"
By default on X.Org 6.9 or newer X servers, the NVIDIA X driver recomputes the size in millimeters of the X screen whenever the size in pixels of the X screen is changed using XRandR, such that the DPI remains constant.
This behavior can be disabled (which means that the size in millimeters will not change when the size in pixels of the X screen changes) by setting the "ConstantDPI" option to "FALSE"; e.g.,
Option "ConstantDPI" "FALSE"
ConstantDPI defaults to True.
On X servers older than X.Org 6.9, the NVIDIA X driver cannot change the size in millimeters of the X screen. Therefore the DPI of the X screen will change when XRandR changes the size in pixels of the X screen. The driver will behave as if ConstantDPI was forced to FALSE.
Option "CustomEDID" "string"
This option forces the X driver to use the EDID specified in a file rather than the display's EDID. You may specify a semicolon separated list of display names and filename pairs. Valid display device names include "CRT-0", "CRT-1", "DFP-0", "DFP-1", "TV-0", "TV-1", or one of the generic names "CRT", "DFP", "TV", which apply the EDID to all devices of the specified type. Additionally, if SLI Mosaic is enabled, this name can be prefixed by a GPU name (e.g., "GPU-0.CRT-0"). The file contains a raw EDID (e.g., a file generated by nvidia-settings). For example:
Option "CustomEDID" "CRT-0:/tmp/edid1.bin; DFP-0:/tmp/edid2.bin"
will assign the EDID from the file /tmp/edid1.bin to the display device CRT-0, and the EDID from the file /tmp/edid2.bin to the display device DFP-0. Note that a display device name must always be specified even if only one EDID is specified.
Caution: Specifying an EDID that doesn't exactly match your display may damage your hardware, as it allows the driver to specify timings beyond the capabilities of your display. Use with care.
Option "IgnoreEDIDChecksum" "string"
This option forces the X driver to accept an EDID even if the checksum is invalid. You may specify a comma separated list of display names. Valid display device names include "CRT-0", "CRT-1", "DFP-0", "DFP-1", "TV-0", "TV-1", or one of the generic names "CRT", "DFP", "TV", which ignore the EDID checksum on all devices of the specified type. Additionally, if SLI Mosaic is enabled, this name can be prefixed by a GPU name (e.g., "GPU-0.CRT-0"). For example:
Option "IgnoreEDIDChecksum" "CRT, DFP-0"
will cause the NVIDIA driver to ignore the EDID checksum for all CRT monitors and for the display DFP-0.
Caution: An invalid EDID checksum may indicate a corrupt EDID. A corrupt EDID may have mode timings beyond the capabilities of your display, and using it could damage your hardware. Use with care.
Option "ModeValidation" "string"
This option provides fine-grained control over each stage of the mode validation pipeline, disabling individual mode validation checks. This option should only very rarely be used.
The option string is a semicolon-separated list of comma-separated lists of mode validation arguments. Each list of mode validation arguments can optionally be prepended with a display device name.
"<dpy-0>: <tok>, <tok>; <dpy-1>: <tok>, <tok>, <tok>; ..."
"AllowNon60HzDFPModes": some lower quality TMDS encoders are only rated to drive DFPs at 60Hz; the driver will determine when only 60Hz DFP modes are allowed. This argument disables this stage of the mode validation pipeline.
"NoMaxPClkCheck": each mode has a pixel clock; this pixel clock is validated against the maximum pixel clock of the hardware (for a DFP, this is the maximum pixel clock of the TMDS encoder, for a CRT, this is the maximum pixel clock of the DAC). This argument disables the maximum pixel clock checking stage of the mode validation pipeline.
"NoEdidMaxPClkCheck": a display device's EDID can specify the maximum pixel clock that the display device supports; a mode's pixel clock is validated against this pixel clock maximum. This argument disables this stage of the mode validation pipeline.
"AllowInterlacedModes": interlaced modes are not supported on all NVIDIA GPUs; the driver will discard interlaced modes on GPUs where interlaced modes are not supported; this argument disables this stage of the mode validation pipeline.
"NoMaxSizeCheck": each NVIDIA GPU has a maximum resolution that it can drive; this argument disables this stage of the mode validation pipeline.
"NoHorizSyncCheck": a mode's horizontal sync is validated against the range of valid horizontal sync values; this argument disables this stage of the mode validation pipeline.
"NoVertRefreshCheck": a mode's vertical refresh rate is validated against the range of valid vertical refresh rate values; this argument disables this stage of the mode validation pipeline.
"NoWidthAlignmentCheck": the alignment of a mode's visible width is validated against the capabilities of the GPU; normally, a mode's visible width must be a multiple of 8. This argument disables this stage of the mode validation pipeline.
"NoDFPNativeResolutionCheck": when validating for a DFP, a mode's size is validated against the native resolution of the DFP; this argument disables this stage of the mode validation pipeline.
"NoVirtualSizeCheck": if the X configuration file requests a specific virtual screen size, a mode cannot be larger than that virtual size; this argument disables this stage of the mode validation pipeline.
"NoVesaModes": when constructing the mode pool for a display device, the X driver uses a built-in list of VESA modes as one of the mode sources; this argument disables use of these built-in VESA modes.
"NoEdidModes": when constructing the mode pool for a display device, the X driver uses any modes listed in the display device's EDID as one of the mode sources; this argument disables use of EDID-specified modes.
"NoXServerModes": when constructing the mode pool for a display device, the X driver uses the built-in modes provided by the core XFree86/Xorg X server as one of the mode sources; this argument disables use of these modes. Note that this argument does not disable custom ModeLines specified in the X config file; see the "NoCustomModes" argument for that.
"NoCustomModes": when constructing the mode pool for a display device, the X driver uses custom ModeLines specified in the X config file (through the "Mode" or "ModeLine" entries in the Monitor Section) as one of the mode sources; this argument disables use of these modes.
"NoPredefinedModes": when constructing the mode pool for a display device, the X driver uses additional modes predefined by the NVIDIA X driver; this argument disables use of these modes.
"NoUserModes": additional modes can be added to the mode pool dynamically, using the NV-CONTROL X extension; this argument prohibits user-specified modes via the NV-CONTROL X extension.
"NoExtendedGpuCapabilitiesCheck": allow mode timings that may exceed the GPU's extended capability checks.
"ObeyEdidContradictions": an EDID may contradict itself by listing a mode as supported, but the mode may exceed an EDID-specified valid frequency range (HorizSync, VertRefresh, or maximum pixel clock). Normally, the NVIDIA X driver prints a warning in this scenario, but does not invalidate an EDID-specified mode just because it exceeds an EDID-specified valid frequency range. However, the "ObeyEdidContradictions" argument instructs the NVIDIA X driver to invalidate these modes.
"NoTotalSizeCheck": allow modes in which the individual visible or sync pulse timings exceed the total raster size.
"DoubleScanPriority": on GPUs older than G80, doublescan modes are sorted before non-doublescan modes of the same resolution for purposes of mode pool sorting; but on G80 and later GPUs, doublescan modes are sorted after non-doublescan modes of the same resolution. This token inverts that priority (i.e., doublescan modes will be sorted after on pre-G80 GPUs, and sorted before on G80 and later GPUs).
"NoDualLinkDVICheck": for mode timings used on dual link DVI DFPs, the driver must perform additional checks to ensure that the correct pixels are sent on the correct link. For some of these checks, the driver will invalidate the mode timings; for other checks, the driver will implicitly modify the mode timings to meet the GPU's dual link DVI requirements. This token disables this dual link DVI checking.
"NoDisplayPortBandwidthCheck": for mode timings used on DisplayPort devices, the driver must verify that the DisplayPort link can be configured to carry enough bandwidth to support a given mode's pixel clock. For example, some DisplayPort-to-VGA adapters only support 2 DisplayPort lanes, limiting the resolutions they can display. This token disables this DisplayPort bandwidth check.
Option "ModeValidation" "NoMaxPClkCheck"
disable the maximum pixel clock check when validating modes on all display devices.
Option "ModeValidation" "CRT-0: NoEdidModes, NoMaxPClkCheck; DFP-0: NoVesaModes"
do not use EDID modes and do not perform the maximum pixel clock check on CRT-0, and do not use VESA modes on DFP-0.
Option "ModeDebug" "boolean"
This option causes the X driver to print verbose details about mode validation to the X log file. Note that this option is applied globally: setting this option to TRUE will enable verbose mode validation logging for all NVIDIA X screens in the X server.
Option "UseEvents" "boolean"
Enables the use of system events in some cases when the X driver is waiting for the hardware. Without this option, the X driver briefly spins in a tight loop while waiting for the hardware; with this option enabled, it instead sets an event handler and waits for the hardware through the poll() system call. Default: off (the use of events is disabled).
Option "FlatPanelProperties" "string"
This option requests particular properties for all or a subset of the connected flat panels.
The option string is a semicolon-separated list of comma-separated property=value pairs. Each list of property=value pairs can optionally be prepended with a flat panel name.
"<DFP-0>: <property=value>, <property=value>; <DFP-1>: <property=value>; ..."
"Scaling": controls the flat panel scaling mode; possible values are: 'Default' (the driver will use whichever scaling state is current), 'Native' (the driver will use the flat panel's scaler, if possible), 'Scaled' (the driver will use the NVIDIA GPU's scaler, if possible), 'Centered' (the driver will center the image, if possible), and 'aspect-scaled' (the X driver will scale with the NVIDIA GPU's scaler, but keep the aspect ratio correct).
"Dithering": controls the flat panel dithering mode; possible values are: 'Default' (the driver will decide when to dither), 'Enabled' (the driver will always dither, if possible), and 'Disabled' (the driver will never dither).
Option "FlatPanelProperties" "Scaling = Centered"
set the flat panel scaling mode to centered on all flat panels.
Option "FlatPanelProperties" "DFP-0: Scaling = Centered; DFP-1: Scaling = Scaled, Dithering = Enabled"
set DFP-0's scaling mode to centered, set DFP-1's scaling mode to scaled and its dithering mode to enabled.
Option "ProbeAllGpus" "boolean"
When the NVIDIA X driver initializes, it probes all GPUs in the system, even if no X screens are configured on them. This is done so that the X driver can report information about all the system's GPUs through the NV-CONTROL X extension. This option can be set to FALSE to disable this behavior, such that only GPUs with X screens configured on them will be probed. Default: all GPUs in the system are probed.
Option "DynamicTwinView" "boolean"
Enable or disable support for dynamically configuring TwinView on this X screen. When DynamicTwinView is enabled (the default), the refresh rate reported for a mode (through XF86VidMode or XRandR) is not the actual refresh rate, but instead a unique number such that each MetaMode has a different value. This guarantees that MetaModes can be uniquely identified by XRandR.
When DynamicTwinView is disabled, the refresh rate reported through XRandR will be accurate, but NV-CONTROL clients such as nvidia-settings will not be able to dynamically manipulate the X screen's MetaModes. TwinView can still be configured from the X config file when DynamicTwinView is disabled.
Default: DynamicTwinView is enabled.
Option "IncludeImplicitMetaModes" "boolean"
When the X server starts, a mode pool is created per display device, containing all the mode timings that the NVIDIA X driver determined to be valid for the display device. However, the only MetaModes that are made available to the X server are the ones explicitly requested in the X configuration file.
It is convenient for fullscreen applications to be able to change between the modes in the mode pool, even if a given target mode was not explicitly requested in the X configuration file.
To facilitate this, the NVIDIA X driver will, if only one display device is in use when the X server starts, implicitly add MetaModes for all modes in the display device's mode pool. This makes all the modes in the mode pool available to full screen applications that use the XF86VidMode or XRandR X extensions.
To prevent this behavior, and only add MetaModes that are explicitly requested in the X configuration file, set this option to FALSE.
Default: IncludeImplicitMetaModes is enabled.
Option "IndirectMemoryAccess" "boolean"
Some graphics cards have more video memory than can be mapped at once by the CPU (generally at most 256 MB of video memory can be CPU-mapped). On graphics cards based on G80 and higher, this option allows the driver to:
place more pixmaps in video memory, which will improve hardware rendering performance but may slow down software rendering;
allocate buffers larger than 256 MB, which is necessary to reach the maximum buffer size on newer GPUs.
On some systems, up to 3 gigabytes of virtual address space may be reserved in the X server for indirect memory access. This virtual memory does not consume any physical resources. Note that the amount of reserved memory may be limited on 32-bit platforms, so some problems with large buffer allocations can be resolved by switching to a 64-bit operating system.
Default: on (indirect memory access will be used, when available).
Option "OnDemandVBlankInterrupts" "boolean"
Normally, VBlank interrupts are generated on every vertical refresh of every display device connected to the GPU(s) installed in a given system. This experimental option enables on-demand VBlank control, allowing the driver to enable VBlank interrupt generation only when it is required. This can help conserve power.
Default: off (on-demand VBlank control is disabled).
Option "PixmapCacheSize" "size"
This option controls how much video memory is reserved for pixmap allocations. When the option is specified, size specifies the number of bytes to use for the pixmap cache. Reserving this memory improves performance when pixmaps are created and destroyed rapidly, but prevents this memory from being used by OpenGL. When this cache is disabled or space in the cache is exhausted, the driver will still allocate pixmaps in video memory, but pixmap creation and deletion performance will not be improved.
NOTE: This option is deprecated in favor of the PixmapCacheRoundSizeKB nvidia-settings attribute and will be removed in a future driver release.
"1048576" will allocate one megabyte for the pixmap
Default: off (no memory is reserved specifically for pixmaps).
Option "AllowSHMPixmaps" "boolean"
This option controls whether applications can use the MIT-SHM X extension to create pixmaps whose contents are shared between the X server and the client. These pixmaps prevent the NVIDIA driver from performing a number of optimizations and degrade performance in many circumstances.
Disabling this option disables only shared memory pixmaps. Applications can still use the MIT-SHM extension to transfer data to the X server through shared memory using XShmPutImage.
Default: off (shared memory pixmaps are not allowed).
Option "InitializeWindowBackingPixmaps" "boolean"
This option controls whether the NVIDIA X Driver initializes newly created redirected windows using the contents of their parent window if the X server doesn't do it. Leaving redirected windows uninitialized may cause new windows to flash with black or random colors when some compositing managers are running.
This option will have no effect on X servers that already initialize redirected window contents. In most distributions, the X server is patched to skip that initialization. In this case, it is recommended to leave this option on for a better user experience.
Default: on (redirected windows are initialized).
Option "AllowUnofficialGLXProtocol" "boolean"
By default, the NVIDIA GLX implementation will not expose GLX protocol for GL commands if the protocol is not considered complete. Protocol could be considered incomplete for a number of reasons. The implementation could still be under development and contain known bugs, or the protocol specification itself could be under development or going through review. If users would like to test the server-side portion of such protocol when using indirect rendering, they can enable this option. If any X screen enables this option, it will enable protocol on all screens in the server.
When an NVIDIA GLX client is used, the related environment variable __GL_ALLOW_UNOFFICIAL_PROTOCOL will need to be set as well to enable support in the client.
Option "PanAllDisplays" "boolean"
When this option is enabled, all displays in the current MetaMode will pan as the pointer is moved. If disabled, only the displays whose panning domain contains the pointer (at its new location) are panned.
Default: enabled (all displays are panned when the pointer is moved).
Option "GvoDataFormat" "string"
This option controls the initial configuration of the SDI (GVO) device's output data format.
Option "GvoSyncMode" "string"
This option controls the initial synchronization mode of the SDI (GVO) device.
|FreeRunning||The SDI output will be synchronized with the timing chosen from the SDI signal format list.|
|GenLock||SDI output will be synchronized with the external sync signal (if present/detected) with pixel accuracy.|
|FrameLock||SDI output will be synchronized with the external sync signal (if present/detected) with frame accuracy.|
Default: FreeRunning (the SDI output will not lock to an input signal).
Option "GvoSyncSource" "string"
This option controls the initial synchronization source (type) of the SDI (GVO) device. Note that the GvoSyncMode should be set to either GenLock or FrameLock for this option to take effect.
|Composite||Interpret sync source as composite.|
|SDI||Interpret sync source as SDI.|
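For example, to genlock the SDI output to an external SDI sync source:
Option "GvoSyncMode" "GenLock"
Option "GvoSyncSource" "SDI"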
Option "ConnectToAcpid" "boolean"
The ACPI daemon (acpid) receives information about ACPI events like AC/Battery power, docking, etc. acpid will deliver these events to the NVIDIA X driver via a UNIX domain socket connection. By default, the NVIDIA X driver will attempt to connect to acpid to receive these events. Set this option to "off" to prevent the NVIDIA X driver from connecting to acpid. Default: on (the NVIDIA X driver will attempt to connect to acpid).
Option "AcpidSocketPath" "string"
The NVIDIA X driver attempts to connect to the ACPI daemon (acpid) via a UNIX domain socket. The default path to this socket is "/var/run/acpid.socket". Set this option to specify an alternate path to acpid's socket. Default: "/var/run/acpid.socket".
Option "EnableACPIHotkeys" "boolean"
The NVIDIA Linux X driver can detect mobile display change hotkey events either through ACPI or by periodically checking the GPU hardware state.
While checking the GPU hardware state is generally sufficient to detect display change hotkey events, ACPI hotkey event delivery is preferable. However, X servers prior to X.Org xserver-1.2.0 have a bug that causes the X server to crash when it receives an ACPI hotkey event (freedesktop.org bug 8776). The NVIDIA Linux X driver keys off the X server ABI version to determine whether the X server in use has this bug (X servers with ABI 1.1 or later do not).
Since some X servers may have an earlier ABI but have a patch to fix the bug, the "EnableACPIHotkeys" option can be specified to override the NVIDIA X driver's default decision to enable or disable ACPI display change hotkey events.
When running on a mobile system, search for "ACPI display change hotkey events" in your X log to see the NVIDIA X driver's decision.
Default: the NVIDIA X driver will decide whether to enable ACPI display change hotkey events based on the X server ABI.
Option "EnableACPIBrightnessHotkeys" "boolean"
Enable or disable handling of ACPI brightness change hotkey events. Default: enabled.