Posted On: Dec 9, 2022

SV1, the on-demand state vector simulator on Amazon Braket, now supports computing gradients with the adjoint differentiation method, enabling customers to reduce runtime and save costs for their quantum machine learning and optimization workloads. With this launch, customers simulating variational quantum algorithms with a large number of parameters, such as the quantum approximate optimization algorithm (QAOA), can seamlessly incorporate adjoint gradient computation either directly through the Braket Python SDK or API, or through PennyLane, an open-source software framework for differentiable programming of quantum computers.
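As a sketch of the PennyLane route, the snippet below runs a small two-qubit circuit on the Braket PennyLane device pointed at SV1. With shots=0 and diff_method="device", the plugin delegates gradient computation to the device, which on SV1 uses the adjoint method. The circuit structure and parameter values here are illustrative, not from the announcement; the amazon-braket-pennylane-plugin is assumed to be installed.

```python
import pennylane as qml
from pennylane import numpy as np

# SV1's device ARN; shots=0 requests an exact state vector simulation,
# which is required for adjoint differentiation.
dev = qml.device(
    "braket.aws.qubit",
    device_arn="arn:aws:braket:::device/quantum-simulator/amazon/sv1",
    wires=2,
    shots=0,
)

# diff_method="device" hands gradient computation to the device itself.
@qml.qnode(dev, diff_method="device")
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(params[1], wires=1)
    return qml.expval(qml.PauliZ(1))

params = np.array([0.4, 0.6], requires_grad=True)
# Gradients with respect to all parameters come back from a single task.
print(qml.grad(circuit)(params))
```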

When using classical simulation to compute gradients for variational quantum algorithms, customers look to the adjoint differentiation method for its inherent efficiency: it requires only two circuit executions, irrespective of the number of parameters or the qubit count. Alternative methods, such as the parameter-shift rule, require a number of circuit executions that scales linearly with the number of parameters, typically two per parameter. For example, for a 2-layer QAOA circuit with four total parameters, the parameter-shift rule requires eight circuit executions versus two for the adjoint method, so computing gradients with adjoint differentiation unlocks a 4x speedup in runtime and corresponding cost savings. To compute adjoint gradients on SV1, customers now simply specify the adjoint gradient result type along with the corresponding parameters and observables, and run their simulation as usual.
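For illustration, a minimal sketch of the SDK route is shown below: attach the adjoint gradient result type to a parametrized circuit and run it on SV1 as usual. The circuit, observable, and parameter names are illustrative assumptions, not from the announcement.

```python
from braket.aws import AwsDevice
from braket.circuits import Circuit, FreeParameter, Observable

device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")

alpha = FreeParameter("alpha")
beta = FreeParameter("beta")

circ = Circuit().rx(0, alpha).cnot(0, 1).ry(1, beta)
# Attach the adjoint gradient result type: the observable to differentiate,
# the qubits it acts on, and the parameters to differentiate with respect to.
circ.adjoint_gradient(
    observable=Observable.Z() @ Observable.Z(),
    target=[0, 1],
    parameters=[alpha, beta],
)

# shots=0 requests an exact simulation; bind parameter values via `inputs`.
task = device.run(circ, shots=0, inputs={"alpha": 0.2, "beta": 0.3})
result = task.result().values[0]
print(result["expectation"], result["gradient"])
```

The result contains both the expectation value of the observable and its gradient with respect to every requested parameter, all from a single simulation.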

This capability is available starting today in all AWS Regions where Amazon Braket is available, at no additional cost. To get started with gradient computation on SV1, see the following resources: the Amazon Braket Developer Guide, the example notebook exploring adjoint gradient computation on Braket, or the example notebook using the PennyLane integration. For more information about the AWS Free Tier for simulator usage on Braket, please visit our pricing page.