f66d51b5caa96995b91e7c155ff4378cdef4baaf |
|
06-May-2014 |
Prashanth B <beeps@google.com> |
[autotest] Split host acquisition and job scheduling. This is phase one of two in the plan to split host acquisition out of the scheduler's tick. The idea is to have the host scheduler use a job query manager to query the database for new jobs without hosts and assign hosts to them, while the main scheduler uses the same query managers to look for hostless jobs. Currently the main scheduler uses the class to acquire hosts inline, as it always has, and will continue to do so until the inline_host_acquisition feature flag is turned on via the shadow_config.
TEST=Ran the scheduler, suites, unittests.
BUG=chromium:344613
DEPLOY=Scheduler
Change-Id: I542e4d1e509c16cac7354810416ee18ac940a7cf
Reviewed-on: https://chromium-review.googlesource.com/199383
Reviewed-by: Prashanth B <beeps@chromium.org>
Commit-Queue: Prashanth B <beeps@chromium.org>
Tested-by: Prashanth B <beeps@chromium.org>
/external/autotest/scheduler/rdb_cache_unittests.py
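The split described in this commit message can be sketched as follows. This is a minimal illustration, not the actual autotest code: the class and function names (`JobQueryManager`, `tick`, `pending_queued_jobs`, `hostless_jobs`) and the dict-based job records are hypothetical stand-ins for the real query managers and database rows.

```python
# Hypothetical sketch of the split: one shared query layer answers
# "which jobs still need hosts", and a feature flag decides whether the
# main scheduler acquires hosts inline or leaves that to a host scheduler.

class JobQueryManager(object):
    """Shared query layer over the job table (modeled as a list of dicts)."""

    def __init__(self, jobs):
        self._jobs = jobs

    def pending_queued_jobs(self):
        """Queued jobs that still need a host assigned."""
        return [j for j in self._jobs
                if j['status'] == 'Queued' and j['host'] is None
                and not j.get('hostless')]

    def hostless_jobs(self):
        """Jobs that never need a host (e.g. suite jobs)."""
        return [j for j in self._jobs if j.get('hostless')]


def tick(query_manager, inline_host_acquisition, acquire_host):
    """One scheduler tick: acquire hosts inline only when the flag is on;
    hostless jobs are always handled by the main scheduler."""
    if inline_host_acquisition:
        for job in query_manager.pending_queued_jobs():
            job['host'] = acquire_host(job)
    return query_manager.hostless_jobs()
```

With the flag off, `pending_queued_jobs()` would instead be consumed by a separate host scheduler process using the same query manager, which is the point of sharing the query layer between the two.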
|
86934c86a72509c6aede78591817edcedbddc268 |
|
09-May-2014 |
Prashanth B <beeps@google.com> |
[autotest] Calculate database cache staleness.
TEST=Unittests, calculated staleness.
BUG=None
DEPLOY=Scheduler
Change-Id: I82084c1d412a0a9bbda911159a156c5436e5e6c6
Reviewed-on: https://chromium-review.googlesource.com/199114
Commit-Queue: Prashanth B <beeps@chromium.org>
Tested-by: Prashanth B <beeps@chromium.org>
Reviewed-by: Dan Shi <dshi@chromium.org>
/external/autotest/scheduler/rdb_cache_unittests.py
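One simple way to compute the staleness this commit refers to is to record when each cache line was written and report the elapsed time at lookup. This is a hypothetical sketch (the class and method names are invented, not taken from the autotest codebase); a fake clock is injected so the behavior is deterministic.

```python
import time


class CacheStalenessTracker(object):
    """Hypothetical sketch: record when each cache line was populated and
    report its staleness (seconds since the write) at lookup time."""

    def __init__(self, clock=time.time):
        self._clock = clock      # injectable for testing
        self._written = {}       # cache key -> write timestamp

    def record_write(self, key):
        """Stamp `key` with the current time when its line is (re)filled."""
        self._written[key] = self._clock()

    def staleness(self, key):
        """Seconds elapsed since `key` was cached, or None if never cached."""
        if key not in self._written:
            return None
        return self._clock() - self._written[key]
```

A metric like this lets the scheduler quantify how out-of-date a cached host set is, which matters because (per the next commit below in this log's chronology) the design deliberately tolerates some staleness.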
|
2d8047e8b2d901bec66d483664d8b6322501d245 |
|
28-Apr-2014 |
Prashanth B <beeps@google.com> |
[autotest] In-process request/host caching for the rdb. This CL implements an in-process host cache manager for the rdb. The following considerations were taken into account while designing it:
1. The number of requests outweighs the number of leased hosts.
2. The number of net hosts outweighs the number of leased hosts.
3. The 'same' request can consult the cache within the span of a single batched request. These requests will only be the same in terms of the host labels/acls required, not in terms of priority or parent_job_id.
Resulting ramifications:
1. We can't afford to consult the database for each request.
2. We can afford to refresh our in-memory representation of a host just before leasing it.
3. Leasing a host can fail, as we might be using a stale cached host.
4. We can't load a map of all hosts <-> labels each request.
5. Invalidation is hard for most sane, straightforward choices of keying hosts against requests.
6. Lower-priority requests will starve if they try to lease the same hosts taken by higher-priority requests.
Main design tenets:
1. We can tolerate some staleness in the cache, since we're going to make sure the host is unleased just before using it.
2. If a job hits a stale cache line it tries again next tick.
3. Trying to invalidate the cache within a single batched request would be unnecessarily complicated and error prone. Instead, to prevent starvation, each request invalidates only its own cache line, by removing the hosts it has just leased.
4. The same host may be present in 2 different cache lines, but this won't matter because each request will check the leased bit in real time before acquiring it.
5. The entire cache is invalidated at the end of a batched request.
TEST=Ran suites, unittests.
BUG=chromium:366141
DEPLOY=Scheduler
Change-Id: Iafc3ffa876537da628c52260ae692bc2d5d3d063
Reviewed-on: https://chromium-review.googlesource.com/197788
Reviewed-by: Dan Shi <dshi@chromium.org>
Tested-by: Prashanth B <beeps@chromium.org>
Commit-Queue: Prashanth B <beeps@chromium.org>
/external/autotest/scheduler/rdb_cache_unittests.py
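The design tenets in this commit message can be sketched as a small cache class. This is an illustrative sketch only, with invented names (`HostCache`, `try_lease`, a plain dict of leased bits standing in for the database): cache lines are keyed by the labels/acls a request needs, a leased-bit check happens in real time before acquiring a possibly stale cached host, each request invalidates only its own line by removing the hosts it leased, and the whole cache is dropped at the end of a batched request.

```python
class HostCache(object):
    """Hypothetical in-process cache of hosts keyed by request deps."""

    def __init__(self):
        self._lines = {}  # (labels, acls) key -> list of host names

    def get_line(self, key):
        """Return the cached hosts for a request key, or None on a miss."""
        return self._lines.get(key)

    def set_line(self, key, hosts):
        self._lines[key] = list(hosts)

    def remove_leased(self, key, leased_hosts):
        """Invalidate only this request's cache line by dropping the hosts
        it just leased, so lower-priority requests don't chase them."""
        line = self._lines.get(key)
        if line is not None:
            taken = set(leased_hosts)
            self._lines[key] = [h for h in line if h not in taken]

    def clear(self):
        """Invalidate the entire cache at the end of a batched request."""
        self._lines.clear()


def try_lease(host, leased_bits):
    """Check the leased bit in real time before acquiring a possibly stale
    cached host; a False return means 'try again next tick'."""
    if leased_bits.get(host):
        return False
    leased_bits[host] = True
    return True
```

Note how tenet 4 falls out naturally: the same host may sit in two cache lines, but `try_lease` consults the authoritative leased bit at acquisition time, so at most one request actually gets it.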
|