This is the kernel module needed by the proprietary binary nvidia
driver.  You also need the nvidia-driver package from SlackBuilds.org.

To build the package for a kernel other than the running one,
invoke the script with the KERNEL variable set, as in
    KERNEL=4.6.3 ./nvidia-kernel.SlackBuild

This script now includes the option to build the open kernel modules
instead of the default proprietary modules.  However, this version of
the open modules will not build on the 5.15.x kernel, so on 15.0 the
script forces the proprietary build.  If you are building on -current,
the script defaults to the open modules; to build the proprietary
modules instead, pass OPEN=no to the script.
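
For example, to force the proprietary build on -current:
    OPEN=no ./nvidia-kernel.SlackBuild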

NOTE REGARDING PAHOLE: Starting with 595.71.05, building the kernel
modules requires pahole, which is available in -current but not in 15.0.

NOTE REGARDING CURRENT AND DRM: Because of all the changes to the way
DRM is handled in the 6.12 and later kernels, there have been reports
of display issues.  To disable building nvidia-drm.ko, pass DRM=no to
the script.  See README.nvidia-drm for details and workarounds.
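
For example:
    DRM=no ./nvidia-kernel.SlackBuild
Variables can be combined on one command line, e.g. together with the
KERNEL or OPEN settings described above.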

If DRM=yes, a default config file is placed at 
  /usr/share/X11/xorg.conf.d/10-nvidia.conf
to make sure that X loads the nvidia module.  If you need to make
changes, copy that file to /etc/X11/xorg.conf.d/ and edit the copy.
You do not need this file at all if you have a proper and complete
xorg.conf.
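
For example, to copy the shipped default before editing it (as root):
    cp /usr/share/X11/xorg.conf.d/10-nvidia.conf /etc/X11/xorg.conf.d/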

The xf86-video-nouveau-blacklist package from /extra is required.
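One way to install it, assuming slackpkg is configured so that the
extra/ repository is available:
    slackpkg install xf86-video-nouveau-blacklist
Alternatively, download the package from a mirror's extra/ directory
and install it with installpkg.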

After installation, you will need to reboot your computer for the
changes to take effect.

NOTES ON THE OPEN vs. PROPRIETARY KERNEL MODULE
  (taken from the Nvidia driver README.txt, section 45A)

"[Both] flavors support GPU architectures Turing, Ampere, Ada, and
Hopper.  Blackwell and later are only supported by the open kernel
modules.

"The following features will only work with the open kernel modules
flavor of the driver:

   o NVIDIA Confidential Computing

   o Magnum IO GPUDirect Storage (GDS)

   o Heterogeneous Memory Management (HMM)

   o CPU affinity for GPU fault handlers

   o DMABUF support for CUDA allocations

"We recommend the use of open kernel modules on all GPUs that support
it."
