History log of /frameworks/base/services/java/com/android/server/accessibility/EventStreamTransformation.java
Revision Date Author Comments
c4fccd183f1bb47a027bb303af5e65bec2f68b1b 09-Apr-2013 Svetoslav <svetoslavganov@google.com> Adding APIs for an accessibility service to intercept key events.

Now that we have gestures which are detected by the system and
interpreted by an accessibility service, behavior is inconsistent
between using gestures and using the keyboard. Some devices have
both. Therefore, an accessibility service should be able to
interpret keys in addition to gestures to provide a consistent user
experience. With this change, an accessibility service can expose
shortcuts for each gestural action.

This change adds APIs for an accessibility service to observe, and
intercept at will, key events before they are dispatched to the
rest of the system. The service can return true or false from its
onKeyEvent to either consume the event or let it be delivered to
the rest of the system. However, the service will *not* be able
to inject key events or modify the observed ones.

An earlier idea of allowing the service to say it "tracks" an event,
so that the event is withheld from the system until a subsequent
event is marked either "handled" or "not handled", does not work. If
the service tracks a key but no other key is pressed, that key is
essentially never delivered to the app, and the stashed event may be
delivered much later, possibly in a completely different context.
The correct way of implementing shortcuts is a combination of
modifier keys plus some other key or key sequence. Key events already
contain information about which modifier keys are down, and the
service can also track modifiers itself.

bug:8088812

Change-Id: I81ba9a7de9f19ca6662661f27fdc852323e38c00
/frameworks/base/services/java/com/android/server/accessibility/EventStreamTransformation.java
45af84a483165f06c04d74baba67f90da29c6ad2 02-Oct-2012 Svetoslav Ganov <svetoslavganov@google.com> Touch explorer and magnifier do not work well together.

1. If touch exploration and screen magnification are enabled and the screen
is currently magnified, gesture detection does not work well. The reason
is that, while the screen is magnified, we transform the events before
passing them to the touch explorer to compensate for the magnification,
so the user touches what they think they are touching. However, for gesture
detection and velocity computation this compensation shrinks the gestured
shape and decreases the velocity, leading to poor gesture recognition and
incorrect velocity values.

This change adds an onRawMotionEvent method to the event transformation chain
which processes the raw touch events. In this method the touch explorer
passes events to the gesture recognizer and the velocity tracker (see the
sketch below).

2. The velocity tracker was not cleared on transitions out of the touch
exploring state, which is the only state that uses velocity.
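
As an illustration only, a rough sketch of a transformation stage that
consumes the raw stream for gesture recognition and velocity tracking might
look like the following; the interface and method signatures here are
assumptions based on the description above, not the exact framework code.

    import android.view.GestureDetector;
    import android.view.MotionEvent;
    import android.view.VelocityTracker;

    // Assumed shape of a stage in the event transformation chain.
    interface EventTransformation {
        void onMotionEvent(MotionEvent event, int policyFlags);     // possibly transformed
        void onRawMotionEvent(MotionEvent event, int policyFlags);  // untransformed
        void setNext(EventTransformation next);
    }

    // Sketch of a touch-explorer-like stage that uses the raw stream, so that
    // magnification compensation does not shrink gestures or lower velocities.
    class TouchExplorerSketch implements EventTransformation {
        private final VelocityTracker mVelocityTracker = VelocityTracker.obtain();
        private final GestureDetector mGestureDetector;
        private EventTransformation mNext;

        TouchExplorerSketch(GestureDetector gestureDetector) {
            mGestureDetector = gestureDetector;
        }

        @Override
        public void onRawMotionEvent(MotionEvent event, int policyFlags) {
            // Gesture detection and velocity tracking operate on raw coordinates.
            mVelocityTracker.addMovement(event);
            mGestureDetector.onTouchEvent(event);
        }

        @Override
        public void onMotionEvent(MotionEvent event, int policyFlags) {
            if (mNext != null) {
                mNext.onMotionEvent(event, policyFlags);  // pass transformed events along
            }
        }

        @Override
        public void setNext(EventTransformation next) {
            mNext = next;
        }
    }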

bug:7266617

Change-Id: I7887fe5f3c3bb6cfa203b7866a145c7341098a02
/frameworks/base/services/java/com/android/server/accessibility/EventStreamTransformation.java
1cf70bbf96930662cab0e699d70b62865766ff52 06-Aug-2012 Svetoslav Ganov <svetoslavganov@google.com> Screen magnification - feature - framework.

This change is the initial check-in of the screen magnification
feature. This feature enables magnification of the screen via
global gestures (assuming it has been enabled from settings)
to allow a low-vision user to efficiently use an Android device.

Interaction model:

1. A triple tap toggles permanent screen magnification, magnifying
the area around the location of the triple tap. One can think of the
location of the triple tap as the center of the magnified viewport.
For example, a triple tap when not magnified would magnify the screen
and leave it in a magnified state. A triple tap when magnified would
clear magnification and leave the screen in a not magnified state.

2. A triple tap and hold magnifies the screen if it is not magnified and
enables viewport dragging mode until the finger goes up. One can think of
this mode as a way to move the magnified viewport, since the area around
the moving finger is magnified to fit the screen. For example, if the
screen is not magnified and the user triple taps and holds, the screen
magnifies and the viewport follows the user's finger. When the finger
goes up, the screen zooms back out. If the same interaction is performed
when the screen is already magnified, the viewport movement is the same,
but when the finger goes up the screen stays magnified. In other words,
the initial magnified state is sticky.

3. Pinching with any number of additional fingers when viewport dragging
is enabled, i.e. the user has triple tapped and holds, would adjust the
magnification scale, which will become the current default magnification
scale. The next time the user magnifies, the same magnification scale
would be used.

4. When in a permanent magnified state the user can use two or more fingers
to pan the content. Note that in this mode the content is panned, as
opposed to the viewport dragging mode in which the viewport is moved.

5. When in a permanent magnified state the user can use three or more
fingers to change the magnification scale, which will become the current
default magnification scale. The next time the user magnifies, the same
magnification scale would be used.

6. The magnification scale will be persisted in settings and in the cloud.

Note: Since two fingers are used to pan the content in a permanently magnified
state, no other two-finger gestures in touch exploration or applications
will work unless the user zooms out to the normal state, where all gestures
work as expected. This is an intentional tradeoff to allow efficient
panning, since in a permanently magnified state this would be the dominant
action to be performed.

Design:

1. The window manager exposes APIs for setting an accessibility transformation,
which is a scale and offsets for the X and Y axes. The window manager queries
the window policy for which windows will not be magnified. For example,
the IME windows and the navigation bar, including windows attached to them,
are not magnified.

2. The accessibility features such as screen magnification and touch
exploration are now implemented as a sequence of transformations on the
event stream. The accessibility manager service may request either
of these features or both. The behavior of a feature does not change
based on whether another one is enabled.

3. The screen magnifier keeps a viewport of the content that is magnified,
which is surrounded by a glow while in a magnified state. Interactions
outside of the viewport are delegated directly to the application without
interpretation. For example, a triple tap on the letter 'a' of the IME
would type three letters instead of toggling the magnified state. The
viewport is updated on screen rotation and on window transitions. For
example, when the IME pops up, the viewport shrinks.

4. The glow around the viewport is implemented as a special type of window
that does not take input focus, cannot be touched, and is laid out in
screen coordinates with width and height matching those of the screen.
When the magnified region changes, the root view of the window draws the
highlight, but the size of the window does not change - unless a rotation
happens. All changes in the viewport size, and showing or hiding it, are
animated.

5. The viewport is encapsulated in a class that knows how to show,
hide, and resize the viewport - potentially animating that.
This class uses the new animation framework for animations.

6. The magnification is handled by a magnification controller that
keeps track of the transformation currently applied to the screen
content and the desired one. If the two are not the same, it is the
responsibility of the magnification controller to reconcile them,
potentially animating the transition from one to the other (a simplified
sketch follows this list).

7. A display content observer watches for window transitions, screen
rotations, and requests to make a rectangle on the screen visible. This
class is responsible for handling interesting state changes such
as changing the viewport bounds on IME pop-up or screen rotation,
panning the content to make a requested rectangle visible on the
screen, etc.

8. To implement viewport updates, the window manager was updated with APIs
to watch for window transitions and for requests to make a rectangle
visible on the screen. These APIs are protected by a signature-level
permission. Also, a parcelable and poolable window info class has been
added, with APIs for getting the window info given the window token. This
enables getting some useful information about a window. These APIs are
also signature protected.
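
As an illustration only, a simplified sketch of the reconciliation described
in point 6 might look like the following; the class name and the
applyAccessibilityTransform call (standing in for the window manager API
from point 1) are hypothetical.

    import android.animation.ValueAnimator;

    // Hypothetical controller that animates from the currently applied
    // transformation to the desired one.
    class MagnificationControllerSketch {
        private float mCurrentScale = 1.0f;
        private float mCurrentOffsetX;
        private float mCurrentOffsetY;

        // Reconcile the applied transformation with the desired one, animating the change.
        void setMagnification(final float scale, final float offsetX, final float offsetY) {
            final float fromScale = mCurrentScale;
            final float fromX = mCurrentOffsetX;
            final float fromY = mCurrentOffsetY;
            ValueAnimator animator = ValueAnimator.ofFloat(0f, 1f);
            animator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
                @Override
                public void onAnimationUpdate(ValueAnimator animation) {
                    float t = animation.getAnimatedFraction();
                    mCurrentScale = fromScale + (scale - fromScale) * t;
                    mCurrentOffsetX = fromX + (offsetX - fromX) * t;
                    mCurrentOffsetY = fromY + (offsetY - fromY) * t;
                    // Apply the intermediate scale and offsets for the X and Y axes.
                    applyAccessibilityTransform(mCurrentScale, mCurrentOffsetX, mCurrentOffsetY);
                }
            });
            animator.start();
        }

        // Stand-in for the signature-protected window manager API; hypothetical.
        private void applyAccessibilityTransform(float scale, float offsetX, float offsetY) {
            // Omitted: hand the transformation to the window manager.
        }
    }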

bug:6795382

Change-Id: Iec93da8bf6376beebbd4f5167ab7723dc7d9bd00
/frameworks/base/services/java/com/android/server/accessibility/EventStreamTransformation.java