Lines Matching refs:background

65     // Dimensions of foreground / background mask. Optimum value should take into account only
72 // Levels at which to compute foreground / background decision. Think of them as deltas
116 // Whether to mirror the background or not. For example, the Camera app
122 // coordinates, if we were to mirror the background
132 // Maximum distance (in standard deviations) for considering a pixel as background
140 // Width of foreground / background mask.
142 // Height of foreground / background mask.
154 // Mask value to start blending away from background
170 // Default rate at which to learn bg model from new background pixels
174 // Default rate at which to verify whether background is stable
176 // Default rate at which to verify whether background is stable
179 // Default 3x3 matrix, column major, for fitting background 1:1
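The defaults at lines 170-179 set how quickly the model adapts to new background pixels and how the background frame is fitted. As a rough illustration of what such a per-pixel learning rate does, here is a minimal Java sketch of an exponential-moving-average model update; the class and field names are illustrative, not identifiers from this filter.

    // Minimal sketch: exponential moving average of one channel of a per-pixel
    // background model. BgPixelModel, mean, variance and learningRate are
    // illustrative names, not taken from the filter's code.
    final class BgPixelModel {
        float mean;      // running mean of the observed channel value
        float variance;  // running variance of the observed channel value

        // Blend a new observation into the model at the given learning rate.
        void update(float observation, float learningRate) {
            float delta = observation - mean;
            mean += learningRate * delta;
            variance += learningRate * (delta * delta - variance);
        }
    }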
206 "background"};
238 // Variance distance in luminance between current pixel and background model
243 // Sum of variance distances in chroma between current pixel and background
256 // current background model, in both luminance and in chroma (yuv space). Distance is
257 // measured in variances from the mean background value. For chroma, the distance is the sum
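Lines 238-257 describe the per-pixel distance as the squared deviation from the model mean divided by the model variance, computed for luminance and summed over the two chroma channels. A hedged sketch of that metric, with illustrative method and parameter names:

    // Sketch of the variance-normalized distances described above; names are
    // illustrative. distY: (y - meanY)^2 / varY. distUV: the same form summed
    // over the U and V channels.
    static float distY(float y, float meanY, float varY) {
        float d = y - meanY;
        return (d * d) / varY;
    }

    static float distUV(float u, float v,
                        float meanU, float varU, float meanV, float varV) {
        float du = u - meanU;
        float dv = v - meanV;
        return (du * du) / varU + (dv * dv) / varV;
    }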
282 // Foreground/background mask decision shader. Decides whether a pixel is in the foreground or
283 // the background using a hierarchical threshold on the distance. Binary foreground/background
296 // Decide whether pixel is foreground or background based on Y and UV
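Lines 282-296 describe a binary decision made by thresholding those Y and UV distances; the real shader applies its thresholds hierarchically at the levels referred to at line 72. A simplified, single-threshold sketch; the constant value and names are illustrative only:

    // Simplified sketch: accept a pixel as background only if both distances
    // fall inside a single acceptance threshold. ACCEPT_STDDEV and the method
    // name are illustrative; the threshold is given in standard deviations, so
    // it is squared before comparing against the variance-based distances.
    static final float ACCEPT_STDDEV = 0.85f;  // illustrative value

    static boolean isBackground(float distY, float distUV) {
        float t = ACCEPT_STDDEV * ACCEPT_STDDEV;
        return distY < t && distUV < t;
    }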
321 // To match the white balance of foreground and background, the average of R, G, B channel of
325 // tex_sampler_1: Mip-map for background (playback) video frame.
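Lines 321-325 describe matching white balance by comparing the average R, G, B of the foreground frame with that of the background frame (the latter taken from the background mip-map). A rough sketch of a per-channel gain computation, using illustrative names:

    // Sketch: per-channel gains that pull the foreground's average color toward
    // the background's. fgAvg and bgAvg hold mean R, G, B values in [0, 1];
    // names are illustrative.
    static float[] whiteBalanceGains(float[] fgAvg, float[] bgAvg) {
        float[] gains = new float[3];
        for (int c = 0; c < 3; c++) {
            // Guard against division by zero on an all-black channel.
            gains[c] = fgAvg[c] > 1e-6f ? bgAvg[c] / fgAvg[c] : 1.0f;
        }
        return gains;
    }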
347 // foreground and background
351 // tex_sampler_2: Foreground/background mask.
396 // foreground or background.
400 // tex_sampler_2: Foreground/background mask.
422 // classified as foreground or background.
427 // tex_sampler_3: Foreground/background mask.
453 // Background verification shader. Skews the current background verification mask towards the
615 // Create initial background model values
626 // Get frames to store background model in
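Lines 615-626 refer to creating the initial model values and the frames that hold them. A rough sketch of one way to seed such a model; the names and the initial variance value are assumptions, not the filter's defaults:

    // Sketch: seed the per-pixel mean with the first observed frame and start
    // every pixel with the same broad variance. Names and initialVariance are
    // illustrative.
    static float[][] initBackgroundModel(float[] firstFrame, float initialVariance) {
        float[] mean = firstFrame.clone();                // first observation becomes the mean
        float[] variance = new float[firstFrame.length];
        java.util.Arrays.fill(variance, initialVariance); // uniform starting variance
        return new float[][] { mean, variance };
    }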
702 Frame background = pullInput("background");
721 updateBgScaling(video, background, mBackgroundFitModeChanged);
727 copyShaderProgram.process(background, mBgInput);
771 // In the learning verification stage, compute background masks and a weighted average
805 Frame[] subtractInputs = { video, background, mMask, mAutoWB };
811 // Compute mean and variance of the background
885 // Relearn background model
928 private void updateBgScaling(Frame video, Frame background, boolean fitModeChanged) {
930 float backgroundAspect = (float)background.getFormat().getWidth() / background.getFormat().getHeight();
941 // Foreground is wider than background, scale down
942 // background in X
946 // Foreground is taller than background, scale down
947 // background in Y
954 // Foreground is wider than background, crop
955 // background in Y
959 // Foreground is taller than background, crop
960 // background in X
974 if (mLogVerbose) Log.v(TAG, "Mirroring the background!");
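The updateBgScaling excerpts above (lines 928-974) compare the video and background aspect ratios and either shrink the background along one axis so all of it stays visible (fit) or enlarge it so it covers the frame (crop), optionally mirroring it in X; line 179 shows the result is ultimately stored as a 3x3 column-major matrix. The sketch below only returns X/Y scale factors for a displayed quad and uses illustrative names; it is not the filter's texture-transform math.

    // Sketch of aspect-ratio handling when placing a background behind the video.
    // relativeAspect > 1 means the background is wider than the video frame.
    // Parameter names (crop, mirrorX) are illustrative.
    static float[] backgroundScale(float videoAspect, float backgroundAspect,
                                   boolean crop, boolean mirrorX) {
        float scaleX = 1.0f;
        float scaleY = 1.0f;
        float relativeAspect = backgroundAspect / videoAspect;
        if (!crop) {
            // Fit: shrink the background along one axis so all of it stays visible.
            if (relativeAspect > 1.0f) scaleY = 1.0f / relativeAspect; // wider: letterbox
            else                       scaleX = relativeAspect;        // taller: pillarbox
        } else {
            // Crop: enlarge along one axis so the background covers the frame.
            if (relativeAspect > 1.0f) scaleX = relativeAspect;        // wider: clip left/right
            else                       scaleY = 1.0f / relativeAspect; // taller: clip top/bottom
        }
        if (mirrorX) scaleX = -scaleX; // negative X scale mirrors the background horizontally
        return new float[] { scaleX, scaleY };
    }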