The android.hardware.camera2 package provides an interface to individual camera devices connected to an Android device. It replaces the deprecated {@link android.hardware.Camera} class.

This package models a camera device as a pipeline, which takes in requests to capture a single frame, captures a single image per request, and then outputs one capture result metadata packet plus a set of output image buffers for the request. The requests are processed in order, and multiple requests can be in flight at once. Since the camera device is a pipeline with multiple stages, having multiple requests in flight is required to maintain full framerate on most Android devices.

To enumerate, query, and open available camera devices, obtain a {@link android.hardware.camera2.CameraManager} instance.
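
For example, a minimal sketch of enumerating the available cameras and opening the first back-facing one might look like the following (assumptions: an application {@code Context} named {@code context}, a {@link android.hardware.camera2.CameraDevice.StateCallback} named {@code stateCallback}, and a background {@link android.os.Handler} named {@code backgroundHandler}; the CAMERA permission must already be granted):

<pre>{@code
CameraManager manager =
        (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    for (String cameraId : manager.getCameraIdList()) {
        CameraCharacteristics characteristics =
                manager.getCameraCharacteristics(cameraId);
        Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
        if (facing != null && facing == CameraCharacteristics.LENS_FACING_BACK) {
            // The opened CameraDevice is delivered asynchronously to
            // stateCallback.onOpened on the given handler.
            manager.openCamera(cameraId, stateCallback, backgroundHandler);
            break;
        }
    }
} catch (CameraAccessException e) {
    // The camera is disconnected, disabled by policy, or otherwise unavailable.
}
}</pre>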

Individual {@link android.hardware.camera2.CameraDevice CameraDevices} provide a set of static property information that describes the hardware device and its available settings and output parameters. This information is provided through the {@link android.hardware.camera2.CameraCharacteristics} object, and is available through {@link android.hardware.camera2.CameraManager#getCameraCharacteristics}.
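
As an illustration, a few commonly used static properties can be read back like this (a sketch only; {@code manager} and {@code cameraId} are assumed to come from the enumeration step above, and exception handling is omitted):

<pre>{@code
CameraCharacteristics characteristics =
        manager.getCameraCharacteristics(cameraId);

// Supported hardware level (LEGACY, LIMITED, FULL, and so on).
Integer hardwareLevel =
        characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);

// The sizes and formats that output Surfaces may be configured with.
StreamConfigurationMap configMap = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] jpegSizes = configMap.getOutputSizes(ImageFormat.JPEG);
}</pre>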

To capture or stream images from a camera device, the application must first create a {@link android.hardware.camera2.CameraCaptureSession camera capture session} with a set of output Surfaces for use with the camera device, with {@link android.hardware.camera2.CameraDevice#createCaptureSession}. Each Surface has to be pre-configured with an {@link android.hardware.camera2.params.StreamConfigurationMap appropriate size and format} (if applicable) to match the sizes and formats available from the camera device. A target Surface can be obtained from a variety of classes, including {@link android.view.SurfaceView}, {@link android.graphics.SurfaceTexture} via {@link android.view.Surface#Surface(SurfaceTexture)}, {@link android.media.MediaCodec}, {@link android.media.MediaRecorder}, {@link android.renderscript.Allocation}, and {@link android.media.ImageReader}.
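
For instance, a session with a preview target and a JPEG still-capture target could be configured roughly as follows (a sketch only: {@code cameraDevice} is assumed to have been delivered by {@code StateCallback#onOpened}, {@code textureView} is an application {@link android.view.TextureView}, the sizes shown are placeholders that should instead be chosen from the {@link android.hardware.camera2.params.StreamConfigurationMap}, and exception handling is omitted):

<pre>{@code
// Preview target, backed by a TextureView's SurfaceTexture.
SurfaceTexture texture = textureView.getSurfaceTexture();
texture.setDefaultBufferSize(1920, 1080);
Surface previewSurface = new Surface(texture);

// Still-capture target, producing JPEG images.
ImageReader jpegReader =
        ImageReader.newInstance(4032, 3024, ImageFormat.JPEG, /*maxImages*/ 2);

cameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, jpegReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                // The session is ready; capture requests can now be submitted.
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                // The requested set of output Surfaces could not be configured.
            }
        },
        backgroundHandler);
}</pre>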

Generally, camera preview images are sent to {@link android.view.SurfaceView} or {@link android.view.TextureView} (via its {@link android.graphics.SurfaceTexture}). Capture of JPEG images or RAW buffers for {@link android.hardware.camera2.DngCreator} can be done with {@link android.media.ImageReader} with the {@link android.graphics.ImageFormat#JPEG} and {@link android.graphics.ImageFormat#RAW_SENSOR} formats. Application-driven processing of camera data in RenderScript, OpenGL ES, or directly in managed or native code is best done through {@link android.renderscript.Allocation} with a YUV {@link android.renderscript.Type}, {@link android.graphics.SurfaceTexture}, and {@link android.media.ImageReader} with a {@link android.graphics.ImageFormat#YUV_420_888} format, respectively.
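
For example, the JPEG data produced by an {@link android.media.ImageReader} target can be drained as it arrives (a sketch, reusing the hypothetical {@code jpegReader} and {@code backgroundHandler} from the previous step; acquired {@link android.media.Image Images} must be closed promptly so the reader's buffers are recycled):

<pre>{@code
jpegReader.setOnImageAvailableListener(reader -> {
    try (Image image = reader.acquireNextImage()) {
        // JPEG images carry their compressed data in a single plane.
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        byte[] jpegBytes = new byte[buffer.remaining()];
        buffer.get(jpegBytes);
        // Hand jpegBytes off for saving or further processing.
    }
}, backgroundHandler);
}</pre>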

The application then needs to construct a {@link android.hardware.camera2.CaptureRequest}, which defines all the capture parameters needed by a camera device to capture a single image. The request also lists which of the configured output Surfaces should be used as targets for this capture. The CameraDevice has a {@link android.hardware.camera2.CameraDevice#createCaptureRequest factory method} for creating a {@link android.hardware.camera2.CaptureRequest.Builder request builder} for a given use case, which is optimized for the Android device the application is running on.
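
Continuing the sketch above, a preview request targeting the preview Surface could be built like this ({@code cameraDevice} and {@code previewSurface} are the assumed objects from the earlier steps; exception handling is omitted):

<pre>{@code
// TEMPLATE_PREVIEW pre-populates settings tuned for a viewfinder use case
// on this particular device.
CaptureRequest.Builder previewBuilder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewBuilder.addTarget(previewSurface);

// Individual parameters can still be overridden before building.
previewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
CaptureRequest previewRequest = previewBuilder.build();
}</pre>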

Once the request has been set up, it can be handed to the active capture session either for a one-shot {@link android.hardware.camera2.CameraCaptureSession#capture capture} or for endlessly {@link android.hardware.camera2.CameraCaptureSession#setRepeatingRequest repeating} use. Both methods also have a variant that accepts a list of requests to use as a burst capture or repeating burst. Repeating requests have a lower priority than captures, so a request submitted through capture() while a repeating request is active is captured before any new instance of the currently repeating (burst) request begins.
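
For instance, the preview request above could be run as a repeating request while a one-shot still capture is submitted on top of it (a sketch: {@code session} is assumed to be the {@link android.hardware.camera2.CameraCaptureSession} delivered to {@code onConfigured}, and {@code captureCallback} is a {@link android.hardware.camera2.CameraCaptureSession.CaptureCallback} like the one sketched below; exception handling is omitted):

<pre>{@code
// Drive the viewfinder with the repeating preview request.
session.setRepeatingRequest(previewRequest, captureCallback, backgroundHandler);

// Later, take a single still image; this is interleaved ahead of further
// repetitions of the preview request.
CaptureRequest.Builder stillBuilder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
stillBuilder.addTarget(jpegReader.getSurface());
session.capture(stillBuilder.build(), captureCallback, backgroundHandler);
}</pre>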

After processing a request, the camera device will produce a {@link android.hardware.camera2.TotalCaptureResult} object, which contains information about the state of the camera device at the time of capture, and the final settings used. These may vary somewhat from the request if rounding or resolving contradictory parameters was necessary. The camera device will also send a frame of image data into each of the output {@code Surfaces} included in the request. These are produced asynchronously relative to the output CaptureResult, sometimes substantially later.
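
A capture callback that inspects these results might look like the following sketch; the image buffers themselves are delivered separately to each target Surface (for example, through the {@link android.media.ImageReader} listener shown earlier):

<pre>{@code
CameraCaptureSession.CaptureCallback captureCallback =
        new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(CameraCaptureSession session,
                    CaptureRequest request, TotalCaptureResult result) {
                // Final settings and 3A state as reported by the camera device.
                Long exposureTime = result.get(CaptureResult.SENSOR_EXPOSURE_TIME);
                Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
            }
        };
}</pre>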