PyTorch

![PyTorch logo](https://github.com/pytorch/pytorch/blob/master/docs/source/_static/img/pytorch-logo-dark.png)

PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.

Our trunk health (continuous integration signals) can be found at hud.pytorch.org.

  • More About PyTorch
    • A GPU-Ready Tensor Library
    • Dynamic Neural Networks: Tape-Based Autograd
    • Python First
    • Imperative Experiences
    • Fast and Lean
    • Extensions Without Pain
  • Installation
    • Binaries
      • NVIDIA Jetson Platforms
    • From Source
      • Prerequisites
      • Install Dependencies
      • Get the PyTorch Source
      • Install PyTorch
        • Adjust Build Options (Optional)
    • Docker Image
      • Using pre-built images
      • Building the image yourself
    • Building the Documentation
    • Previous Versions
  • Getting Started
  • Resources
  • Communication
  • Releases and Contributing
  • The Team
  • License
More About PyTorch

At a granular level, PyTorch is a library that consists of the following components:

| Component | Description |
| --- | --- |
| torch | A Tensor library like NumPy, with strong GPU support |
| torch.autograd | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
| torch.jit | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
| torch.nn | A neural networks library deeply integrated with autograd, designed for maximum flexibility |
| torch.multiprocessing | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
| torch.utils | DataLoader and other utility functions for convenience |
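To make the table concrete, here is a minimal, hypothetical sketch (not an official example; all names in it are illustrative) that touches torch, torch.nn, torch.autograd, and torch.utils together:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# torch: plain tensor creation
x = torch.randn(64, 10)
y = torch.randn(64, 1)

# torch.utils: DataLoader batches the (illustrative) dataset
loader = DataLoader(TensorDataset(x, y), batch_size=16)

# torch.nn: a small model integrated with autograd
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for xb, yb in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(xb), yb)
    loss.backward()    # torch.autograd differentiates the recorded operations
    optimizer.step()
```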

Typically, PyTorch is used as:

  • A replacement for NumPy to use the power of GPUs.
  • A deep learning research platform that provides maximum flexibility and speed.

Elaborating Further:

A GPU-Ready Tensor Library

If you use NumPy, then you have used tensors (also known as ndarray).

Tensor illustration

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, mathematical operations, linear algebra, and reductions. And they are fast!
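As a small, hedged illustration of those routines (a sketch, not part of the original README; the tensor names are made up), slicing, reductions, linear algebra, and the CPU-to-GPU move look like this:

```python
import torch

a = torch.randn(3, 4)              # a CPU tensor
b = torch.randn(4, 5)

row = a[0]                         # indexing: first row
cols = a[:, 1:3]                   # slicing: columns 1 and 2
total, col_means = a.sum(), a.mean(dim=0)   # reductions
c = a @ b                          # linear algebra: 3x5 matrix product

# The same code runs on the GPU when one is available
if torch.cuda.is_available():
    c_gpu = a.cuda() @ b.cuda()
```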

Dynamic Neural Networks: Tape-Based Autograd

PyTorch has a unique way of building neural networks: using and replaying a tape recorder.

Most frameworks like TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure over and over again. Changing the way the network behaves means that one has to start from scratch.

With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd, autograd, Chainer, etc.

While this technique is not unique to PyTorch, it is one of the fastest implementations to date. You get the best of speed and flexibility for your crazy research.
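A brief sketch of what the tape-based approach enables (illustrative only): because the graph is re-recorded on every forward pass, ordinary Python control flow can change the network's structure from one iteration to the next:

```python
import torch

x = torch.randn(5, requires_grad=True)

# Data-dependent depth: the number of layers applied can differ per run,
# and autograd simply records whatever actually executed
y = x
depth = int(torch.randint(1, 4, (1,)))
for _ in range(depth):
    y = torch.tanh(y)

y.sum().backward()   # reverse-mode autodiff replays the recorded tape
print(x.grad)        # d(sum)/dx
```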

Dynamic graph

Python First

PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally as you would use NumPy, SciPy, scikit-learn, etc. You can write your new neural network layers in Python itself, using your favorite libraries, and use packages such as Cython and Numba. Our goal is to not reinvent the wheel where appropriate.
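For instance (a hedged sketch of the NumPy interoperability mentioned above), CPU tensors and NumPy arrays can share the same memory, so mixing the two ecosystems costs nothing:

```python
import numpy as np
import torch

arr = np.arange(6.0).reshape(2, 3)
t = torch.from_numpy(arr)   # zero-copy: t views arr's memory
t.mul_(2)                   # the in-place edit is visible from NumPy
print(arr)                  # [[0. 2. 4.], [6. 8. 10.]]

back = t.numpy()            # back to NumPy, again without copying
```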

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There is no asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
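A tiny illustration of that imperative model (a sketch, not from the original text): every line executes immediately on real values, so intermediate results can be printed or inspected in a debugger right where they are produced:

```python
import torch

x = torch.randn(3, 3)
y = x * 2            # runs right now; there is no deferred graph to execute later
print(y)             # intermediate values are ordinary Python objects

# breakpoint()       # a standard debugger would stop here, on concrete values
z = y.sum()
print(z.item())
```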

Fast and Lean

PyTorch has minimal framework overhead. We integrate acceleration libraries such as Intel MKL and NVIDIA (cuDNN, NCCL) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends are mature and have been tested for years.

Therefore, PyTorch is quite fast, whether you run small or large neural networks.

The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We’ve written custom memory allocators for the GPU to make sure your deep learning models are as memory-efficient as possible. This allows you to train larger deep learning models than before.
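If you want to observe the allocator's behavior, PyTorch exposes CUDA memory introspection calls; a minimal sketch (assuming a CUDA-enabled build and an available GPU):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated())   # bytes currently occupied by tensors
    print(torch.cuda.memory_reserved())    # bytes held by the caching allocator
    del x
    torch.cuda.empty_cache()               # release cached blocks back to the driver
```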

Extensions Without Pain

Writing new neural network modules, or interfacing with PyTorch's Tensor API, was designed to be straightforward and with minimal abstractions. You can write new neural network layers in Python using the torch API or your favorite NumPy-based libraries such as SciPy. If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate. No wrapper code needs to be written. You can see a tutorial here and an example here.
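As a hedged sketch of how little code a new Python layer needs (the layer itself is made up for illustration):

```python
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    """A hypothetical custom layer: y = scale * tanh(x)."""

    def __init__(self, size):
        super().__init__()
        # nn.Parameter registers the tensor so autograd and optimizers see it
        self.scale = nn.Parameter(torch.ones(size))

    def forward(self, x):
        return self.scale * torch.tanh(x)

layer = ScaledTanh(4)
out = layer(torch.randn(2, 4))
out.sum().backward()        # gradients flow into layer.scale automatically
print(layer.scale.grad)
```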

Installation

Binaries

Commands to install binaries via Conda or pip wheels are on our website: https://pytorch.org/get-started/locally/
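After installing, a quick way to sanity-check the install from Python (a suggested check, not an official step):

```python
import torch

print(torch.__version__)              # the installed PyTorch version
print(torch.cuda.is_available())      # True if this build can see a usable GPU
print(torch.randn(2, 2) @ torch.randn(2, 2))  # tiny smoke test
```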

NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided here, and the L4T container is published here. They require JetPack 4.2 and above, and @dusty-NV and @ptrblck maintain them.

From Source

Prerequisites

If you are installing from source, you will need:

  • Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
  • A C++17 compatible compiler, such as clang

We highly recommend installing an Anaconda environment. You will get a high-quality BLAS library (MKL) and controlled dependency versions regardless of your Linux distro.

If you want to compile with CUDA support, install the following (note that CUDA is not supported on macOS):

  • NVIDIA CUDA 11.0 or above
  • NVIDIA cuDNN v7 or above
  • Compiler compatible with CUDA

Note: You can refer to the cuDNN Support Matrix for cuDNN versions with the various supported CUDA, CUDA driver, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable USE_CUDA=0. Other potentially useful environment variables may be found in setup.py.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1/TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are available here.

If you want to compile with ROCm support, install

  • AMD ROCm 4.0 and above
  • ROCm is currently supported only for Linux systems.

If you want to disable ROCm support, export the environment variable USE_ROCM=0. Other potentially useful environment variables can be found in setup.py.

Install Dependencies

Common

```bash
conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section below
pip install -r requirements.txt
```

On Linux

```bash
conda install mkl mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda110  # or the magma-cuda* that matches your CUDA version from https://anaconda.org/pytorch/repo
```

On macOS

```bash
# Add this package on intel x86 processor machines only
conda install mkl mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
```

On Windows

```bash
conda install mkl mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to change.
conda install -c conda-forge libuv=1.39
```

Get the PyTorch Source

```bash
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

Install PyTorch

On Linux

If you are compiling for AMD ROCm, first run this command:

```bash
# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py
```

Install PyTorch

```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py develop
```

Aside: If you are using Anaconda, you may experience an error caused by the linker. This is caused by the ld from the Conda environment shadowing the system ld. You should use a newer version of Python that fixes this issue. The recommended Python version is 3.8.1+.

On macOS

```bash
python3 setup.py develop
```

On Windows

Choose the correct Visual Studio version.

PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools do not come with Visual Studio Code by default.

If you want to build legacy Python code, please refer to Building on legacy code and CUDA.

CPU-only builds

In this mode PyTorch computations will run on your CPU, not your GPU.

```bash
conda activate
python setup.py develop
```

Note on OpenMP: The desired implementation of OpenMP is Intel OpenMP (iomp). In order to link against iomp, you’ll need to manually download the library and configure the build environment by adjusting CMAKE_INCLUDE_PATH and LIB. The instruction here is an example for configuring MKL and Intel OpenMP. Without these settings for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.

CUDA based build

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.

NVTX is needed to build PyTorch with CUDA. NVTX is a part of the CUDA distribution, where it is called "Nsight Compute". To install it onto an already installed CUDA, run the CUDA installation once again and check the corresponding checkbox. Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017/2019 and Ninja are supported as the generators of CMake. If ninja.exe is detected in PATH, then Ninja will be used as the default generator; otherwise, it will use VS 2017/2019. If Ninja is selected as the generator, the latest MSVC is selected as the underlying toolchain.

Additional libraries such as Magma, oneDNN (a.k.a. MKLDNN or DNNL), and Sccache are often needed. Please refer to the installation helper to install them.

You can refer to the build_pytorch.bat script for some other environment variable configurations.

```cmd
:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop
```

Adjust Build Options (Optional)

You can optionally adjust the configuration of CMake variables (without building first) by doing the following. For example, adjusting the pre-detected directories for CuDNN or BLAS can be done with such a step.

On Linux

```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py build --cmake-only
ccmake build  # or cmake-gui build
```

On macOS

```bash
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build  # or cmake-gui build
```

Docker Image

Using pre-built images

You can also pull a pre-built Docker image from Docker Hub and run it with docker v19.03+

```bash
docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
```

Note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g., for multithreaded data loaders), the default shared memory segment size that the container runs with is not enough, and you should increase the shared memory size with either the --ipc=host or --shm-size command line options to nvidia-docker run.
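To illustrate why this matters (a hedged sketch; the dataset and sizes are made up): with num_workers > 0, DataLoader worker processes hand batches back through shared memory, which is exactly the segment the flags above enlarge:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    ds = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))

    # Each worker process returns its batches via shared memory, so the
    # container's /dev/shm must be large enough (--ipc=host or --shm-size)
    loader = DataLoader(ds, batch_size=32, num_workers=4)

    for xb, yb in loader:
        pass  # a training step would go here
```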

Building the image yourself

NOTE: Must be built with a Docker version > 18.06

The Dockerfile is supplied to build images with CUDA 11.1 support and cuDNN v8. You can pass the PYTHON_VERSION=x.y make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default.

```bash
make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch
```

Building the Documentation

To build documentation in various formats, you will need Sphinx and the readthedocs theme.

```bash
cd docs/
pip install -r requirements.txt
```

You can then build the documentation by running make <format> from the docs/ folder. Run make to get a list of all available output formats.

If you get a katex error, run npm install katex. If it persists, try npm install -g katex.

Note: if you installed nodejs with a different package manager (e.g., conda), then npm will probably install a version of katex that is not compatible with your version of nodejs, and doc builds will fail. A combination of versions that is known to work is node@6.13.1 and katex@0.13.18. To install the latter with npm, you can run npm install -g katex@0.13.18.

Previous Versions

Installation instructions and binaries from previous versions of PyTorch can be found on our website.

Getting Started

Three pointers to get you started:

  • Tutorials: get you started with understanding and using PyTorch
  • Examples: easy to understand PyTorch code across all domains
  • The API Reference
  • Glossary

Resources

  • PyTorch.org
  • PyTorch Tutorials
  • PyTorch Examples
  • PyTorch Models
  • Intro to Deep Learning with PyTorch from Udacity
  • Intro to Machine Learning with PyTorch from Udacity
  • Deep Neural Networks with PyTorch from Coursera
  • PyTorch Twitter
  • PyTorch Blog
  • PyTorch YouTube

Communication

  • Forums: Discuss implementations, research, etc. https://discuss.pytorch.org

  • GitHub issues: bug reports, feature requests, installation issues, RFCs, thoughts, etc.
  • Slack: The PyTorch Slack hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is the PyTorch Forums. If you need a Slack invite, fill in this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
  • Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up here: https://eepurl.com/cbG0rv
  • Facebook page: Important announcements about PyTorch. https://www.facebook.com/pytorch

  • For brand guidelines, please visit our website at pytorch.org
Releases and Contributing

PyTorch has a 90-day release cycle (major releases). Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up in a rejected PR because we might be taking the core in a different direction than you might be aware of.

To learn more about making a contribution to PyTorch, please see our Contribution page.

The Team

PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.

PyTorch is currently maintained by Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan, with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: Trevor Killeen, Sasank Chilamkurthy, Sergey Zagoruyko, Adam Lerer, Francisco Massa, Alykhan Tejani, Luca Antiga, Alban Desmaison, Andreas Koepf, James Bradbury, Zeming Lin, Yuandong Tian, Guillaume Lample, Marat Dukhan, Natalia Gimelshein, Christian Sarofeen, Martin Raison, Edward Yang, Zachary DeVito.

Note: This project is not related to hughperkins/pytorch with the same name. Hugh is a valued contributor to the Torch community and has helped with many things about Torch and PyTorch.

License

PyTorch has a BSD-style license, as found in the LICENSE file.