ThreadPoolExecutor.java revision a807b4d808d2591894daf13aab179b2e9c46a2f5
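Before the source itself, a minimal usage sketch of the configuration knobs the class Javadoc below walks through (core/maximum pool size, keep-alive, a bounded queue, and a saturation policy). The constructor and `CallerRunsPolicy` are the real `java.util.concurrent` API; the class name, pool dimensions, and task count here are illustrative choices, not anything prescribed by this file:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    /** Runs 16 tasks through a small bounded pool; returns how many ran. */
    static int runDemo() throws InterruptedException {
        AtomicInteger ran = new AtomicInteger();
        // 2 core threads, at most 4 total; non-core threads idle out after
        // 30 seconds; a bounded queue of 8 tasks; on saturation the
        // submitting thread runs the task itself (CallerRunsPolicy),
        // which throttles the submission rate.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(8),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < 16; i++)
            pool.execute(ran::incrementAndGet);
        pool.shutdown();                        // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return ran.get();                       // every task ran somewhere
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks run: " + runDemo()); // tasks run: 16
    }
}
```

Because rejected tasks are executed by the caller rather than dropped, all 16 tasks complete even though the queue plus worker capacity is only 12.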
/*
 * Written by Doug Lea with assistance from members of JCP JSR-166
 * Expert Group and released to the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.concurrent.locks.*;
import java.util.concurrent.atomic.*;
import java.util.*;

// BEGIN android-note
// removed security manager docs
// END android-note

/**
 * An {@link ExecutorService} that executes each submitted task using
 * one of possibly several pooled threads, normally configured
 * using {@link Executors} factory methods.
 *
 * <p>Thread pools address two different problems: they usually
 * provide improved performance when executing large numbers of
 * asynchronous tasks, due to reduced per-task invocation overhead,
 * and they provide a means of bounding and managing the resources,
 * including threads, consumed when executing a collection of tasks.
 * Each {@code ThreadPoolExecutor} also maintains some basic
 * statistics, such as the number of completed tasks.
 *
 * <p>To be useful across a wide range of contexts, this class
 * provides many adjustable parameters and extensibility
 * hooks. However, programmers are urged to use the more convenient
 * {@link Executors} factory methods {@link
 * Executors#newCachedThreadPool} (unbounded thread pool, with
 * automatic thread reclamation), {@link Executors#newFixedThreadPool}
 * (fixed size thread pool) and {@link
 * Executors#newSingleThreadExecutor} (single background thread), that
 * preconfigure settings for the most common usage
 * scenarios.
 * Otherwise, use the following guide when manually
 * configuring and tuning this class:
 *
 * <dl>
 *
 * <dt>Core and maximum pool sizes</dt>
 *
 * <dd>A {@code ThreadPoolExecutor} will automatically adjust the
 * pool size (see {@link #getPoolSize})
 * according to the bounds set by
 * corePoolSize (see {@link #getCorePoolSize}) and
 * maximumPoolSize (see {@link #getMaximumPoolSize}).
 *
 * When a new task is submitted in method {@link #execute}, and fewer
 * than corePoolSize threads are running, a new thread is created to
 * handle the request, even if other worker threads are idle. If
 * there are more than corePoolSize but fewer than maximumPoolSize
 * threads running, a new thread will be created only if the queue is
 * full. By setting corePoolSize and maximumPoolSize the same, you
 * create a fixed-size thread pool. By setting maximumPoolSize to an
 * essentially unbounded value such as {@code Integer.MAX_VALUE}, you
 * allow the pool to accommodate an arbitrary number of concurrent
 * tasks. Most typically, core and maximum pool sizes are set only
 * upon construction, but they may also be changed dynamically using
 * {@link #setCorePoolSize} and {@link #setMaximumPoolSize}. </dd>
 *
 * <dt>On-demand construction</dt>
 *
 * <dd>By default, even core threads are initially created and
 * started only when new tasks arrive, but this can be overridden
 * dynamically using method {@link #prestartCoreThread} or {@link
 * #prestartAllCoreThreads}. You probably want to prestart threads if
 * you construct the pool with a non-empty queue. </dd>
 *
 * <dt>Creating new threads</dt>
 *
 * <dd>New threads are created using a {@link ThreadFactory}. If not
 * otherwise specified, an {@link Executors#defaultThreadFactory} is
 * used, which creates threads that are all in the same {@link
 * ThreadGroup} and with the same {@code NORM_PRIORITY} priority and
 * non-daemon status.
 * By supplying a different ThreadFactory, you can
 * alter the thread's name, thread group, priority, daemon status,
 * etc. If a {@code ThreadFactory} fails to create a thread when asked
 * by returning null from {@code newThread}, the executor will
 * continue, but might not be able to execute any tasks.</dd>
 *
 * <dt>Keep-alive times</dt>
 *
 * <dd>If the pool currently has more than corePoolSize threads,
 * excess threads will be terminated if they have been idle for more
 * than the keepAliveTime (see {@link #getKeepAliveTime}). This
 * provides a means of reducing resource consumption when the pool is
 * not being actively used. If the pool becomes more active later, new
 * threads will be constructed. This parameter can also be changed
 * dynamically using method {@link #setKeepAliveTime}. Using a value
 * of {@code Long.MAX_VALUE} {@link TimeUnit#NANOSECONDS} effectively
 * disables idle threads from ever terminating prior to shut down. By
 * default, the keep-alive policy applies only when there are more
 * than corePoolSize threads. But method {@link
 * #allowCoreThreadTimeOut(boolean)} can be used to apply this
 * time-out policy to core threads as well, so long as the
 * keepAliveTime value is non-zero. </dd>
 *
 * <dt>Queuing</dt>
 *
 * <dd>Any {@link BlockingQueue} may be used to transfer and hold
 * submitted tasks.
 * The use of this queue interacts with pool sizing:
 *
 * <ul>
 *
 * <li> If fewer than corePoolSize threads are running, the Executor
 * always prefers adding a new thread
 * rather than queuing.</li>
 *
 * <li> If corePoolSize or more threads are running, the Executor
 * always prefers queuing a request rather than adding a new
 * thread.</li>
 *
 * <li> If a request cannot be queued, a new thread is created unless
 * this would exceed maximumPoolSize, in which case the task will be
 * rejected.</li>
 *
 * </ul>
 *
 * There are three general strategies for queuing:
 * <ol>
 *
 * <li> <em>Direct handoffs.</em> A good default choice for a work
 * queue is a {@link SynchronousQueue} that hands off tasks to threads
 * without otherwise holding them. Here, an attempt to queue a task
 * will fail if no threads are immediately available to run it, so a
 * new thread will be constructed. This policy avoids lockups when
 * handling sets of requests that might have internal dependencies.
 * Direct handoffs generally require unbounded maximumPoolSizes to
 * avoid rejection of newly submitted tasks. This in turn admits the
 * possibility of unbounded thread growth when commands continue to
 * arrive on average faster than they can be processed. </li>
 *
 * <li><em>Unbounded queues.</em> Using an unbounded queue (for
 * example a {@link LinkedBlockingQueue} without a predefined
 * capacity) will cause new tasks to wait in the queue when all
 * corePoolSize threads are busy. Thus, no more than corePoolSize
 * threads will ever be created. (And the value of the maximumPoolSize
 * therefore doesn't have any effect.) This may be appropriate when
 * each task is completely independent of others, so tasks cannot
 * affect each other's execution; for example, in a web page server.
 * While this style of queuing can be useful in smoothing out
 * transient bursts of requests, it admits the possibility of
 * unbounded work queue growth when commands continue to arrive on
 * average faster than they can be processed. </li>
 *
 * <li><em>Bounded queues.</em> A bounded queue (for example, an
 * {@link ArrayBlockingQueue}) helps prevent resource exhaustion when
 * used with finite maximumPoolSizes, but can be more difficult to
 * tune and control. Queue sizes and maximum pool sizes may be traded
 * off for each other: Using large queues and small pools minimizes
 * CPU usage, OS resources, and context-switching overhead, but can
 * lead to artificially low throughput. If tasks frequently block (for
 * example if they are I/O bound), a system may be able to schedule
 * time for more threads than you otherwise allow. Use of small queues
 * generally requires larger pool sizes, which keeps CPUs busier but
 * may encounter unacceptable scheduling overhead, which also
 * decreases throughput. </li>
 *
 * </ol>
 *
 * </dd>
 *
 * <dt>Rejected tasks</dt>
 *
 * <dd>New tasks submitted in method {@link #execute} will be
 * <em>rejected</em> when the Executor has been shut down, and also
 * when the Executor uses finite bounds for both maximum threads and
 * work queue capacity, and is saturated. In either case, the {@code
 * execute} method invokes the {@link
 * RejectedExecutionHandler#rejectedExecution} method of its {@link
 * RejectedExecutionHandler}. Four predefined handler policies are
 * provided:
 *
 * <ol>
 *
 * <li> In the default {@link ThreadPoolExecutor.AbortPolicy}, the
 * handler throws a runtime {@link RejectedExecutionException} upon
 * rejection. </li>
 *
 * <li> In {@link ThreadPoolExecutor.CallerRunsPolicy}, the thread
 * that invokes {@code execute} itself runs the task.
 * This provides a
 * simple feedback control mechanism that will slow down the rate that
 * new tasks are submitted. </li>
 *
 * <li> In {@link ThreadPoolExecutor.DiscardPolicy}, a task that
 * cannot be executed is simply dropped. </li>
 *
 * <li>In {@link ThreadPoolExecutor.DiscardOldestPolicy}, if the
 * executor is not shut down, the task at the head of the work queue
 * is dropped, and then execution is retried (which can fail again,
 * causing this to be repeated). </li>
 *
 * </ol>
 *
 * It is possible to define and use other kinds of {@link
 * RejectedExecutionHandler} classes. Doing so requires some care,
 * especially when policies are designed to work only under particular
 * capacity or queuing policies. </dd>
 *
 * <dt>Hook methods</dt>
 *
 * <dd>This class provides {@code protected} overridable {@link
 * #beforeExecute} and {@link #afterExecute} methods that are called
 * before and after execution of each task. These can be used to
 * manipulate the execution environment; for example, reinitializing
 * ThreadLocals, gathering statistics, or adding log
 * entries. Additionally, method {@link #terminated} can be overridden
 * to perform any special processing that needs to be done once the
 * Executor has fully terminated.
 *
 * <p>If hook or callback methods throw exceptions, internal worker
 * threads may in turn fail and abruptly terminate.</dd>
 *
 * <dt>Queue maintenance</dt>
 *
 * <dd>Method {@link #getQueue} allows access to the work queue for
 * purposes of monitoring and debugging. Use of this method for any
 * other purpose is strongly discouraged.
 * Two supplied methods,
 * {@link #remove} and {@link #purge}, are available to assist in
 * storage reclamation when large numbers of queued tasks become
 * cancelled.</dd>
 *
 * <dt>Finalization</dt>
 *
 * <dd>A pool that is no longer referenced in a program <em>AND</em>
 * has no remaining threads will be {@code shutdown} automatically. If
 * you would like to ensure that unreferenced pools are reclaimed even
 * if users forget to call {@link #shutdown}, then you must arrange
 * that unused threads eventually die, by setting appropriate
 * keep-alive times, using a lower bound of zero core threads and/or
 * setting {@link #allowCoreThreadTimeOut(boolean)}. </dd>
 *
 * </dl>
 *
 * <p><b>Extension example</b>. Most extensions of this class
 * override one or more of the protected hook methods. For example,
 * here is a subclass that adds a simple pause/resume feature:
 *
 * <pre> {@code
 * class PausableThreadPoolExecutor extends ThreadPoolExecutor {
 *   private boolean isPaused;
 *   private ReentrantLock pauseLock = new ReentrantLock();
 *   private Condition unpaused = pauseLock.newCondition();
 *
 *   public PausableThreadPoolExecutor(...)
 *     { super(...); }
 *
 *   protected void beforeExecute(Thread t, Runnable r) {
 *     super.beforeExecute(t, r);
 *     pauseLock.lock();
 *     try {
 *       while (isPaused) unpaused.await();
 *     } catch (InterruptedException ie) {
 *       t.interrupt();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void pause() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = true;
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 *
 *   public void resume() {
 *     pauseLock.lock();
 *     try {
 *       isPaused = false;
 *       unpaused.signalAll();
 *     } finally {
 *       pauseLock.unlock();
 *     }
 *   }
 * }}</pre>
 *
 * @since 1.5
 * @author Doug Lea
 */
public class ThreadPoolExecutor extends AbstractExecutorService {
    /**
     * The main pool control state, ctl, is an atomic integer packing
     * two conceptual fields
     *   workerCount, indicating the effective number of threads
     *   runState,    indicating whether running, shutting down etc
     *
     * In order to pack them into one int, we limit workerCount to
     * (2^29)-1 (about 500 million) threads rather than (2^31)-1 (2
     * billion) otherwise representable. If this is ever an issue in
     * the future, the variable can be changed to be an AtomicLong,
     * and the shift/mask constants below adjusted. But until the need
     * arises, this code is a bit faster and simpler using an int.
     *
     * The workerCount is the number of workers that have been
     * permitted to start and not permitted to stop. The value may be
     * transiently different from the actual number of live threads,
     * for example when a ThreadFactory fails to create a thread when
     * asked, and when exiting threads are still performing
     * bookkeeping before terminating. The user-visible pool size is
     * reported as the current size of the workers set.
     *
     * The runState provides the main lifecycle control, taking on values:
     *
     *   RUNNING:    Accept new tasks and process queued tasks
     *   SHUTDOWN:   Don't accept new tasks, but process queued tasks
     *   STOP:       Don't accept new tasks, don't process queued tasks,
     *               and interrupt in-progress tasks
     *   TIDYING:    All tasks have terminated, workerCount is zero,
     *               the thread transitioning to state TIDYING
     *               will run the terminated() hook method
     *   TERMINATED: terminated() has completed
     *
     * The numerical order among these values matters, to allow
     * ordered comparisons. The runState monotonically increases over
     * time, but need not hit each state. The transitions are:
     *
     * RUNNING -> SHUTDOWN
     *    On invocation of shutdown(), perhaps implicitly in finalize()
     * (RUNNING or SHUTDOWN) -> STOP
     *    On invocation of shutdownNow()
     * SHUTDOWN -> TIDYING
     *    When both queue and pool are empty
     * STOP -> TIDYING
     *    When pool is empty
     * TIDYING -> TERMINATED
     *    When the terminated() hook method has completed
     *
     * Threads waiting in awaitTermination() will return when the
     * state reaches TERMINATED.
     *
     * Detecting the transition from SHUTDOWN to TIDYING is less
     * straightforward than you'd like because the queue may become
     * empty after non-empty and vice versa during SHUTDOWN state, but
     * we can only terminate if, after seeing that it is empty, we see
     * that workerCount is 0 (which sometimes entails a recheck -- see
     * below).
     */
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    private static final int COUNT_BITS = Integer.SIZE - 3;
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;

    // runState is stored in the high-order bits
    private static final int RUNNING    = -1 << COUNT_BITS;
    private static final int SHUTDOWN   =  0 << COUNT_BITS;
    private static final int STOP       =  1 << COUNT_BITS;
    private static final int TIDYING    =  2 << COUNT_BITS;
    private static final int TERMINATED =  3 << COUNT_BITS;

    // Packing and unpacking ctl
    private static int runStateOf(int c)     { return c & ~CAPACITY; }
    private static int workerCountOf(int c)  { return c & CAPACITY; }
    private static int ctlOf(int rs, int wc) { return rs | wc; }

    /*
     * Bit field accessors that don't require unpacking ctl.
     * These depend on the bit layout and on workerCount being never negative.
     */

    private static boolean runStateLessThan(int c, int s) {
        return c < s;
    }

    private static boolean runStateAtLeast(int c, int s) {
        return c >= s;
    }

    private static boolean isRunning(int c) {
        return c < SHUTDOWN;
    }

    /**
     * Attempts to CAS-increment the workerCount field of ctl.
     */
    private boolean compareAndIncrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect + 1);
    }

    /**
     * Attempts to CAS-decrement the workerCount field of ctl.
     */
    private boolean compareAndDecrementWorkerCount(int expect) {
        return ctl.compareAndSet(expect, expect - 1);
    }

    /**
     * Decrements the workerCount field of ctl. This is called only on
     * abrupt termination of a thread (see processWorkerExit). Other
     * decrements are performed within getTask.
     */
    private void decrementWorkerCount() {
        do {} while (!
 compareAndDecrementWorkerCount(ctl.get()));
    }

    /**
     * The queue used for holding tasks and handing off to worker
     * threads. We do not require that workQueue.poll() returning
     * null necessarily means that workQueue.isEmpty(), so we rely
     * solely on isEmpty to see if the queue is empty (which we must
     * do for example when deciding whether to transition from
     * SHUTDOWN to TIDYING). This accommodates special-purpose
     * queues such as DelayQueues for which poll() is allowed to
     * return null even if it may later return non-null when delays
     * expire.
     */
    private final BlockingQueue<Runnable> workQueue;

    /**
     * Lock held on access to workers set and related bookkeeping.
     * While we could use a concurrent set of some sort, it turns out
     * to be generally preferable to use a lock. Among the reasons is
     * that this serializes interruptIdleWorkers, which avoids
     * unnecessary interrupt storms, especially during shutdown.
     * Otherwise exiting threads would concurrently interrupt those
     * that have not yet interrupted. It also simplifies some of the
     * associated statistics bookkeeping of largestPoolSize etc. We
     * also hold mainLock on shutdown and shutdownNow, for the sake of
     * ensuring the workers set is stable while separately checking
     * permission to interrupt and actually interrupting.
     */
    private final ReentrantLock mainLock = new ReentrantLock();

    /**
     * Set containing all worker threads in pool. Accessed only when
     * holding mainLock.
     */
    private final HashSet<Worker> workers = new HashSet<Worker>();

    /**
     * Wait condition to support awaitTermination.
     */
    private final Condition termination = mainLock.newCondition();

    /**
     * Tracks largest attained pool size. Accessed only under
     * mainLock.
     */
    private int largestPoolSize;

    /**
     * Counter for completed tasks. Updated only on termination of
     * worker threads.
     * Accessed only under mainLock.
     */
    private long completedTaskCount;

    /*
     * All user control parameters are declared as volatiles so that
     * ongoing actions are based on freshest values, but without need
     * for locking, since no internal invariants depend on them
     * changing synchronously with respect to other actions.
     */

    /**
     * Factory for new threads. All threads are created using this
     * factory (via method addWorker). All callers must be prepared
     * for addWorker to fail, which may reflect a system or user's
     * policy limiting the number of threads. Even though it is not
     * treated as an error, failure to create threads may result in
     * new tasks being rejected or existing ones remaining stuck in
     * the queue. On the other hand, no special precautions exist to
     * handle OutOfMemoryErrors that might be thrown while trying to
     * create threads, since there is generally no recourse from
     * within this class.
     */
    private volatile ThreadFactory threadFactory;

    /**
     * Handler called when saturated or shutdown in execute.
     */
    private volatile RejectedExecutionHandler handler;

    /**
     * Timeout in nanoseconds for idle threads waiting for work.
     * Threads use this timeout when there are more than corePoolSize
     * present or if allowCoreThreadTimeOut. Otherwise they wait
     * forever for new work.
     */
    private volatile long keepAliveTime;

    /**
     * If false (default), core threads stay alive even when idle.
     * If true, core threads use keepAliveTime to time out waiting
     * for work.
     */
    private volatile boolean allowCoreThreadTimeOut;

    /**
     * Core pool size is the minimum number of workers to keep alive
     * (and not allow to time out etc) unless allowCoreThreadTimeOut
     * is set, in which case the minimum is zero.
     */
    private volatile int corePoolSize;

    /**
     * Maximum pool size.
     * Note that the actual maximum is internally
     * bounded by CAPACITY.
     */
    private volatile int maximumPoolSize;

    /**
     * The default rejected execution handler.
     */
    private static final RejectedExecutionHandler defaultHandler =
        new AbortPolicy();

    /**
     * Permission required for callers of shutdown and shutdownNow.
     * We additionally require (see checkShutdownAccess) that callers
     * have permission to actually interrupt threads in the worker set
     * (as governed by Thread.interrupt, which relies on
     * ThreadGroup.checkAccess, which in turn relies on
     * SecurityManager.checkAccess). Shutdowns are attempted only if
     * these checks pass.
     *
     * All actual invocations of Thread.interrupt (see
     * interruptIdleWorkers and interruptWorkers) ignore
     * SecurityExceptions, meaning that the attempted interrupts
     * silently fail. In the case of shutdown, they should not fail
     * unless the SecurityManager has inconsistent policies, sometimes
     * allowing access to a thread and sometimes not. In such cases,
     * failure to actually interrupt threads may disable or delay full
     * termination. Other uses of interruptIdleWorkers are advisory,
     * and failure to actually interrupt will merely delay response to
     * configuration changes, so is not handled exceptionally.
     */
    private static final RuntimePermission shutdownPerm =
        new RuntimePermission("modifyThread");

    /**
     * Class Worker mainly maintains interrupt control state for
     * threads running tasks, along with other minor bookkeeping.
     * This class opportunistically extends AbstractQueuedSynchronizer
     * to simplify acquiring and releasing a lock surrounding each
     * task execution. This protects against interrupts that are
     * intended to wake up a worker thread waiting for a task from
     * instead interrupting a task being run.
     * We implement a simple
     * non-reentrant mutual exclusion lock rather than use ReentrantLock
     * because we do not want worker tasks to be able to reacquire the
     * lock when they invoke pool control methods like setCorePoolSize.
     */
    private final class Worker
        extends AbstractQueuedSynchronizer
        implements Runnable
    {
        /**
         * This class will never be serialized, but we provide a
         * serialVersionUID to suppress a javac warning.
         */
        private static final long serialVersionUID = 6138294804551838833L;

        /** Thread this worker is running in. Null if factory fails. */
        final Thread thread;
        /** Initial task to run. Possibly null. */
        Runnable firstTask;
        /** Per-thread task counter. */
        volatile long completedTasks;

        /**
         * Creates with given first task and thread from ThreadFactory.
         * @param firstTask the first task (null if none)
         */
        Worker(Runnable firstTask) {
            this.firstTask = firstTask;
            this.thread = getThreadFactory().newThread(this);
        }

        /** Delegates main run loop to outer runWorker. */
        public void run() {
            runWorker(this);
        }

        // Lock methods
        //
        // The value 0 represents the unlocked state.
        // The value 1 represents the locked state.
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }

        protected boolean tryAcquire(int unused) {
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        protected boolean tryRelease(int unused) {
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        public void lock()        { acquire(1); }
        public boolean tryLock()  { return tryAcquire(1); }
        public void unlock()      { release(1); }
        public boolean isLocked() { return isHeldExclusively(); }
    }

    /*
     * Methods for setting control state
     */

    /**
     * Transitions runState to given target, or leaves it alone if
     * already at least the given target.
     *
     * @param targetState the desired state, either SHUTDOWN or STOP
     *        (but not TIDYING or TERMINATED -- use tryTerminate for that)
     */
    private void advanceRunState(int targetState) {
        for (;;) {
            int c = ctl.get();
            if (runStateAtLeast(c, targetState) ||
                ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c))))
                break;
        }
    }

    /**
     * Transitions to TERMINATED state if either (SHUTDOWN and pool
     * and queue empty) or (STOP and pool empty). If otherwise
     * eligible to terminate but workerCount is nonzero, interrupts an
     * idle worker to ensure that shutdown signals propagate. This
     * method must be called following any action that might make
     * termination possible -- reducing worker count or removing tasks
     * from the queue during shutdown. The method is non-private to
     * allow access from ScheduledThreadPoolExecutor.
     */
    final void tryTerminate() {
        for (;;) {
            int c = ctl.get();
            if (isRunning(c) ||
                runStateAtLeast(c, TIDYING) ||
                (runStateOf(c) == SHUTDOWN && !
 workQueue.isEmpty()))
                return;
            if (workerCountOf(c) != 0) { // Eligible to terminate
                interruptIdleWorkers(ONLY_ONE);
                return;
            }

            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                if (ctl.compareAndSet(c, ctlOf(TIDYING, 0))) {
                    try {
                        terminated();
                    } finally {
                        ctl.set(ctlOf(TERMINATED, 0));
                        termination.signalAll();
                    }
                    return;
                }
            } finally {
                mainLock.unlock();
            }
            // else retry on failed CAS
        }
    }

    /*
     * Methods for controlling interrupts to worker threads.
     */

    /**
     * If there is a security manager, makes sure caller has
     * permission to shut down threads in general (see shutdownPerm).
     * If this passes, additionally makes sure the caller is allowed
     * to interrupt each worker thread. This might not be true even if
     * the first check passed, if the SecurityManager treats some
     * threads specially.
     */
    private void checkShutdownAccess() {
        SecurityManager security = System.getSecurityManager();
        if (security != null) {
            security.checkPermission(shutdownPerm);
            final ReentrantLock mainLock = this.mainLock;
            mainLock.lock();
            try {
                for (Worker w : workers)
                    security.checkAccess(w.thread);
            } finally {
                mainLock.unlock();
            }
        }
    }

    /**
     * Interrupts all threads, even if active. Ignores SecurityExceptions
     * (in which case some threads may remain uninterrupted).
     */
    private void interruptWorkers() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            for (Worker w : workers) {
                try {
                    w.thread.interrupt();
                } catch (SecurityException ignore) {
                }
            }
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Interrupts threads that might be waiting for tasks (as
     * indicated by not being locked) so they can check for
     * termination or configuration changes.
     * Ignores
     * SecurityExceptions (in which case some threads may remain
     * uninterrupted).
     *
     * @param onlyOne If true, interrupt at most one worker. This is
     * called only from tryTerminate when termination is otherwise
     * enabled but there are still other workers. In this case, at
     * most one waiting worker is interrupted to propagate shutdown
     * signals in case all threads are currently waiting.
     * Interrupting any arbitrary thread ensures that newly arriving
     * workers since shutdown began will also eventually exit.
     * To guarantee eventual termination, it suffices to always
     * interrupt only one idle worker, but shutdown() interrupts all
     * idle workers so that redundant workers exit promptly, not
     * waiting for a straggler task to finish.
     */
    private void interruptIdleWorkers(boolean onlyOne) {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            for (Worker w : workers) {
                Thread t = w.thread;
                if (!t.isInterrupted() && w.tryLock()) {
                    try {
                        t.interrupt();
                    } catch (SecurityException ignore) {
                    } finally {
                        w.unlock();
                    }
                }
                if (onlyOne)
                    break;
            }
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Common form of interruptIdleWorkers, to avoid having to
     * remember what the boolean argument means.
     */
    private void interruptIdleWorkers() {
        interruptIdleWorkers(false);
    }

    private static final boolean ONLY_ONE = true;

    /**
     * Ensures that unless the pool is stopping, the current thread
     * does not have its interrupt set. This requires a double-check
     * of state in case the interrupt was cleared concurrently with a
     * shutdownNow -- if so, the interrupt is re-enabled.
     */
    private void clearInterruptsForTaskRun() {
        if (runStateLessThan(ctl.get(), STOP) &&
            Thread.interrupted() &&
            runStateAtLeast(ctl.get(), STOP))
            Thread.currentThread().interrupt();
    }

    /*
     * Misc utilities, most of which are also exported to
     * ScheduledThreadPoolExecutor
     */

    /**
     * Invokes the rejected execution handler for the given command.
     * Package-protected for use by ScheduledThreadPoolExecutor.
     */
    final void reject(Runnable command) {
        handler.rejectedExecution(command, this);
    }

    /**
     * Performs any further cleanup following run state transition on
     * invocation of shutdown. A no-op here, but used by
     * ScheduledThreadPoolExecutor to cancel delayed tasks.
     */
    void onShutdown() {
    }

    /**
     * State check needed by ScheduledThreadPoolExecutor to
     * enable running tasks during shutdown.
     *
     * @param shutdownOK true if should return true if SHUTDOWN
     */
    final boolean isRunningOrShutdown(boolean shutdownOK) {
        int rs = runStateOf(ctl.get());
        return rs == RUNNING || (rs == SHUTDOWN && shutdownOK);
    }

    /**
     * Drains the task queue into a new list, normally using
     * drainTo. But if the queue is a DelayQueue or any other kind of
     * queue for which poll or drainTo may fail to remove some
     * elements, it deletes them one by one.
     */
    private List<Runnable> drainQueue() {
        BlockingQueue<Runnable> q = workQueue;
        List<Runnable> taskList = new ArrayList<Runnable>();
        q.drainTo(taskList);
        if (!q.isEmpty()) {
            for (Runnable r : q.toArray(new Runnable[0])) {
                if (q.remove(r))
                    taskList.add(r);
            }
        }
        return taskList;
    }

    /*
     * Methods for creating, running and cleaning up after workers
     */

    /**
     * Checks if a new worker can be added with respect to current
     * pool state and the given bound (either core or maximum).
     * If so,
     * the worker count is adjusted accordingly, and, if possible, a
     * new worker is created and started, running firstTask as its
     * first task. This method returns false if the pool is stopped or
     * eligible to shut down. It also returns false if the thread
     * factory fails to create a thread when asked, which requires a
     * backout of workerCount, and a recheck for termination, in case
     * the existence of this worker was holding up termination.
     *
     * @param firstTask the task the new thread should run first (or
     * null if none). Workers are created with an initial first task
     * (in method execute()) to bypass queuing when there are fewer
     * than corePoolSize threads (in which case we always start one),
     * or when the queue is full (in which case we must bypass the
     * queue). Initially idle threads are usually created via
     * prestartCoreThread or to replace other dying workers.
     *
     * @param core if true use corePoolSize as bound, else
     * maximumPoolSize. (A boolean indicator is used here rather than a
     * value to ensure reads of fresh values after checking other pool
     * state).
     * @return true if successful
     */
    private boolean addWorker(Runnable firstTask, boolean core) {
        retry:
        for (;;) {
            int c = ctl.get();
            int rs = runStateOf(c);

            // Check if queue empty only if necessary.
            if (rs >= SHUTDOWN &&
                ! (rs == SHUTDOWN &&
                   firstTask == null &&
                   ! workQueue.isEmpty()))
                return false;

            for (;;) {
                int wc = workerCountOf(c);
                if (wc >= CAPACITY ||
                    wc >= (core ?
                           corePoolSize : maximumPoolSize))
                    return false;
                if (compareAndIncrementWorkerCount(c))
                    break retry;
                c = ctl.get();  // Re-read ctl
                if (runStateOf(c) != rs)
                    continue retry;
                // else CAS failed due to workerCount change; retry inner loop
            }
        }

        Worker w = new Worker(firstTask);
        Thread t = w.thread;

        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            // Recheck while holding lock.
            // Back out on ThreadFactory failure or if
            // shut down before lock acquired.
            int c = ctl.get();
            int rs = runStateOf(c);

            if (t == null ||
                (rs >= SHUTDOWN &&
                 ! (rs == SHUTDOWN &&
                    firstTask == null))) {
                decrementWorkerCount();
                tryTerminate();
                return false;
            }

            workers.add(w);

            int s = workers.size();
            if (s > largestPoolSize)
                largestPoolSize = s;
        } finally {
            mainLock.unlock();
        }

        t.start();
        // It is possible (but unlikely) for a thread to have been
        // added to workers, but not yet started, during transition to
        // STOP, which could result in a rare missed interrupt,
        // because Thread.interrupt is not guaranteed to have any effect
        // on a non-yet-started Thread (see Thread#interrupt).
        if (runStateOf(ctl.get()) == STOP && ! t.isInterrupted())
            t.interrupt();

        return true;
    }

    /**
     * Performs cleanup and bookkeeping for a dying worker. Called
     * only from worker threads. Unless completedAbruptly is set,
     * assumes that workerCount has already been adjusted to account
     * for exit. This method removes thread from worker set, and
     * possibly terminates the pool or replaces the worker if either
     * it exited due to user task exception or if fewer than
     * corePoolSize workers are running or queue is non-empty but
     * there are no workers.
     *
     * @param w the worker
     * @param completedAbruptly if the worker died due to user exception
     */
    private void processWorkerExit(Worker w, boolean completedAbruptly) {
        if (completedAbruptly) // If abrupt, then workerCount wasn't adjusted
            decrementWorkerCount();

        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            completedTaskCount += w.completedTasks;
            workers.remove(w);
        } finally {
            mainLock.unlock();
        }

        tryTerminate();

        int c = ctl.get();
        if (runStateLessThan(c, STOP)) {
            if (!completedAbruptly) {
                int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
                if (min == 0 && ! workQueue.isEmpty())
                    min = 1;
                if (workerCountOf(c) >= min)
                    return; // replacement not needed
            }
            addWorker(null, false);
        }
    }

    /**
     * Performs blocking or timed wait for a task, depending on
     * current configuration settings, or returns null if this worker
     * must exit because of any of:
     * 1. There are more than maximumPoolSize workers (due to
     *    a call to setMaximumPoolSize).
     * 2. The pool is stopped.
     * 3. The pool is shutdown and the queue is empty.
     * 4. This worker timed out waiting for a task, and timed-out
     *    workers are subject to termination (that is,
     *    {@code allowCoreThreadTimeOut || workerCount > corePoolSize})
     *    both before and after the timed wait.
     *
     * @return task, or null if the worker must exit, in which case
     *         workerCount is decremented
     */
    private Runnable getTask() {
        boolean timedOut = false; // Did the last poll() time out?

        retry:
        for (;;) {
            int c = ctl.get();
            int rs = runStateOf(c);

            // Check if queue empty only if necessary.
            if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
                decrementWorkerCount();
                return null;
            }

            boolean timed;      // Are workers subject to culling?

            for (;;) {
                int wc = workerCountOf(c);
                timed = allowCoreThreadTimeOut || wc > corePoolSize;

                if (wc <= maximumPoolSize && ! (timedOut && timed))
                    break;
                if (compareAndDecrementWorkerCount(c))
                    return null;
                c = ctl.get();  // Re-read ctl
                if (runStateOf(c) != rs)
                    continue retry;
                // else CAS failed due to workerCount change; retry inner loop
            }

            try {
                Runnable r = timed ?
                    workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
                    workQueue.take();
                if (r != null)
                    return r;
                timedOut = true;
            } catch (InterruptedException retry) {
                timedOut = false;
            }
        }
    }

    /**
     * Main worker run loop. Repeatedly gets tasks from queue and
     * executes them, while coping with a number of issues:
     *
     * 1. We may start out with an initial task, in which case we
     * don't need to get the first one. Otherwise, as long as pool is
     * running, we get tasks from getTask. If it returns null then the
     * worker exits due to changed pool state or configuration
     * parameters. Other exits result from exception throws in
     * external code, in which case completedAbruptly holds, which
     * usually leads processWorkerExit to replace this thread.
     *
     * 2. Before running any task, the lock is acquired to prevent
     * other pool interrupts while the task is executing, and
     * clearInterruptsForTaskRun called to ensure that unless pool is
     * stopping, this thread does not have its interrupt set.
     *
     * 3. Each task run is preceded by a call to beforeExecute, which
     * might throw an exception, in which case we cause thread to die
     * (breaking loop with completedAbruptly true) without processing
     * the task.
     *
     * 4. Assuming beforeExecute completes normally, we run the task,
     * gathering any of its thrown exceptions to send to
     * afterExecute.
     * We separately handle RuntimeException, Error
     * (both of which the specs guarantee that we trap) and arbitrary
     * Throwables. Because we cannot rethrow Throwables within
     * Runnable.run, we wrap them within Errors on the way out (to the
     * thread's UncaughtExceptionHandler). Any thrown exception also
     * conservatively causes thread to die.
     *
     * 5. After task.run completes, we call afterExecute, which may
     * also throw an exception, which will also cause thread to
     * die. According to JLS Sec 14.20, this exception is the one that
     * will be in effect even if task.run throws.
     *
     * The net effect of the exception mechanics is that afterExecute
     * and the thread's UncaughtExceptionHandler have as accurate
     * information as we can provide about any problems encountered by
     * user code.
     *
     * @param w the worker
     */
    final void runWorker(Worker w) {
        Runnable task = w.firstTask;
        w.firstTask = null;
        boolean completedAbruptly = true;
        try {
            while (task != null || (task = getTask()) != null) {
                w.lock();
                clearInterruptsForTaskRun();
                try {
                    beforeExecute(w.thread, task);
                    Throwable thrown = null;
                    try {
                        task.run();
                    } catch (RuntimeException x) {
                        thrown = x; throw x;
                    } catch (Error x) {
                        thrown = x; throw x;
                    } catch (Throwable x) {
                        thrown = x; throw new Error(x);
                    } finally {
                        afterExecute(task, thrown);
                    }
                } finally {
                    task = null;
                    w.completedTasks++;
                    w.unlock();
                }
            }
            completedAbruptly = false;
        } finally {
            processWorkerExit(w, completedAbruptly);
        }
    }

    // Public constructors and methods

    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters and default thread factory and rejected execution handler.
     * It may be more convenient to use one of the {@link Executors} factory
     * methods instead of this general purpose constructor.
     *
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @param maximumPoolSize the maximum number of threads to allow in the
     *        pool
     * @param keepAliveTime when the number of threads is greater than
     *        the core, this is the maximum time that excess idle threads
     *        will wait for new tasks before terminating.
     * @param unit the time unit for the {@code keepAliveTime} argument
     * @param workQueue the queue to use for holding tasks before they are
     *        executed. This queue will hold only the {@code Runnable}
     *        tasks submitted by the {@code execute} method.
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             Executors.defaultThreadFactory(), defaultHandler);
    }

    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters and default rejected execution handler.
     *
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @param maximumPoolSize the maximum number of threads to allow in the
     *        pool
     * @param keepAliveTime when the number of threads is greater than
     *        the core, this is the maximum time that excess idle threads
     *        will wait for new tasks before terminating.
     * @param unit the time unit for the {@code keepAliveTime} argument
     * @param workQueue the queue to use for holding tasks before they are
     *        executed. This queue will hold only the {@code Runnable}
     *        tasks submitted by the {@code execute} method.
     * @param threadFactory the factory to use when the executor
     *        creates a new thread
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code threadFactory} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             threadFactory, defaultHandler);
    }

    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters and default thread factory.
     *
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @param maximumPoolSize the maximum number of threads to allow in the
     *        pool
     * @param keepAliveTime when the number of threads is greater than
     *        the core, this is the maximum time that excess idle threads
     *        will wait for new tasks before terminating.
     * @param unit the time unit for the {@code keepAliveTime} argument
     * @param workQueue the queue to use for holding tasks before they are
     *        executed. This queue will hold only the {@code Runnable}
     *        tasks submitted by the {@code execute} method.
     * @param handler the handler to use when execution is blocked
     *        because the thread bounds and queue capacities are reached
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code handler} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              RejectedExecutionHandler handler) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             Executors.defaultThreadFactory(), handler);
    }

    /**
     * Creates a new {@code ThreadPoolExecutor} with the given initial
     * parameters.
     *
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @param maximumPoolSize the maximum number of threads to allow in the
     *        pool
     * @param keepAliveTime when the number of threads is greater than
     *        the core, this is the maximum time that excess idle threads
     *        will wait for new tasks before terminating.
     * @param unit the time unit for the {@code keepAliveTime} argument
     * @param workQueue the queue to use for holding tasks before they are
     *        executed. This queue will hold only the {@code Runnable}
     *        tasks submitted by the {@code execute} method.
     * @param threadFactory the factory to use when the executor
     *        creates a new thread
     * @param handler the handler to use when execution is blocked
     *        because the thread bounds and queue capacities are reached
     * @throws IllegalArgumentException if one of the following holds:<br>
     *         {@code corePoolSize < 0}<br>
     *         {@code keepAliveTime < 0}<br>
     *         {@code maximumPoolSize <= 0}<br>
     *         {@code maximumPoolSize < corePoolSize}
     * @throws NullPointerException if {@code workQueue}
     *         or {@code threadFactory} or {@code handler} is null
     */
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 ||
            maximumPoolSize <= 0 ||
            maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
            throw new IllegalArgumentException();
        if (workQueue == null || threadFactory == null || handler == null)
            throw new NullPointerException();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }

    /**
     * Executes the given task sometime in the future. The task
     * may execute in a new thread or in an existing pooled thread.
     *
     * If the task cannot be submitted for execution, either because this
     * executor has been shutdown or because its capacity has been reached,
     * the task is handled by the current {@code RejectedExecutionHandler}.
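The saturation behavior described above (a full queue plus a worker count at maximumPoolSize hands the task to the rejection handler) can be observed with a deliberately tiny pool. This standalone sketch is illustrative only and not part of this source file; the class name `RejectionDemo` is invented for the example, and the default AbortPolicy handler is assumed:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: one worker, queue capacity one. The first task
// occupies the worker, the second fills the queue, so the third submission
// is passed to the RejectedExecutionHandler (AbortPolicy throws).
class RejectionDemo {
    static boolean thirdTaskRejected() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(1));
        CountDownLatch hold = new CountDownLatch(1);
        boolean rejected = false;
        try {
            // Occupies the single worker until the latch is released.
            pool.execute(() -> {
                try { hold.await(); } catch (InterruptedException ignored) {}
            });
            pool.execute(() -> {});          // fills the queue
            try {
                pool.execute(() -> {});      // no thread, no queue space
            } catch (RejectedExecutionException e) {
                rejected = true;
            }
        } finally {
            hold.countDown();
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("third task rejected: " + thirdTaskRejected());
    }
}
```

The rejection is deterministic here because the blocked first task was handed directly to the worker as its firstTask, so the queued task is never drained before the third submission.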
     *
     * @param command the task to execute
     * @throws RejectedExecutionException at discretion of
     *         {@code RejectedExecutionHandler}, if the task
     *         cannot be accepted for execution
     * @throws NullPointerException if {@code command} is null
     */
    public void execute(Runnable command) {
        if (command == null)
            throw new NullPointerException();
        /*
         * Proceed in 3 steps:
         *
         * 1. If fewer than corePoolSize threads are running, try to
         * start a new thread with the given command as its first
         * task. The call to addWorker atomically checks runState and
         * workerCount, and so prevents false alarms that would add
         * threads when it shouldn't, by returning false.
         *
         * 2. If a task can be successfully queued, then we still need
         * to double-check whether we should have added a thread
         * (because existing ones died since last checking) or that
         * the pool shut down since entry into this method. So we
         * recheck state and if necessary roll back the enqueuing if
         * stopped, or start a new thread if there are none.
         *
         * 3. If we cannot queue task, then we try to add a new
         * thread. If it fails, we know we are shut down or saturated
         * and so reject the task.
         */
        int c = ctl.get();
        if (workerCountOf(c) < corePoolSize) {
            if (addWorker(command, true))
                return;
            c = ctl.get();
        }
        if (isRunning(c) && workQueue.offer(command)) {
            int recheck = ctl.get();
            if (! isRunning(recheck) && remove(command))
                reject(command);
            else if (workerCountOf(recheck) == 0)
                addWorker(null, false);
        }
        else if (!addWorker(command, false))
            reject(command);
    }

    /**
     * Initiates an orderly shutdown in which previously submitted
     * tasks are executed, but no new tasks will be accepted.
     * Invocation has no additional effect if already shut down.
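The orderly-shutdown contract described above (previously submitted tasks still run; the caller must use awaitTermination to wait for them) can be sketched as follows. This is an illustrative standalone example, not part of this file; the class name `ShutdownDemo` is invented:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: shutdown() stops intake but lets already-submitted
// tasks finish; awaitTermination() is what actually waits for completion.
class ShutdownDemo {
    static int runAndShutdown() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 10; i++)
            pool.execute(completed::incrementAndGet);
        pool.shutdown();                          // no new tasks accepted
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for the 10 tasks
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("completed after shutdown: " + runAndShutdown());
    }
}
```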
     *
     * <p>This method does not wait for previously submitted tasks to
     * complete execution. Use {@link #awaitTermination awaitTermination}
     * to do that.
     */
    public void shutdown() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            advanceRunState(SHUTDOWN);
            interruptIdleWorkers();
            onShutdown(); // hook for ScheduledThreadPoolExecutor
        } finally {
            mainLock.unlock();
        }
        tryTerminate();
    }

    /**
     * Attempts to stop all actively executing tasks, halts the
     * processing of waiting tasks, and returns a list of the tasks
     * that were awaiting execution. These tasks are drained (removed)
     * from the task queue upon return from this method.
     *
     * <p>This method does not wait for actively executing tasks to
     * terminate. Use {@link #awaitTermination awaitTermination} to
     * do that.
     *
     * <p>There are no guarantees beyond best-effort attempts to stop
     * processing actively executing tasks. This implementation
     * cancels tasks via {@link Thread#interrupt}, so any task that
     * fails to respond to interrupts may never terminate.
     */
    public List<Runnable> shutdownNow() {
        List<Runnable> tasks;
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            advanceRunState(STOP);
            interruptWorkers();
            tasks = drainQueue();
        } finally {
            mainLock.unlock();
        }
        tryTerminate();
        return tasks;
    }

    public boolean isShutdown() {
        return ! isRunning(ctl.get());
    }

    /**
     * Returns true if this executor is in the process of terminating
     * after {@link #shutdown} or {@link #shutdownNow} but has not
     * completely terminated. This method may be useful for
     * debugging.
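The shutdownNow() behavior above (interrupt the running task, drain and return the never-started ones) can be sketched with one busy worker and a few queued tasks. Illustrative only and not part of this file; the class name `ShutdownNowDemo` is invented:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: shutdownNow() interrupts the actively running task
// and returns the queued tasks that never started.
class ShutdownNowDemo {
    static int pendingAfterShutdownNow() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        CountDownLatch started = new CountDownLatch(1);
        pool.execute(() -> {
            started.countDown();
            try { Thread.sleep(60_000); }        // responds to interrupt
            catch (InterruptedException expected) {}
        });
        for (int i = 0; i < 5; i++)
            pool.execute(() -> {});              // queued, never started
        started.await();                         // ensure the worker is busy
        List<Runnable> pending = pool.shutdownNow();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return pending.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("pending tasks: " + pendingAfterShutdownNow());
    }
}
```

As the documentation warns, only interrupt-responsive tasks (like the sleep above) terminate promptly; a task that ignores interruption would keep its worker alive.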
     * A return of {@code true} reported a sufficient
     * period after shutdown may indicate that submitted tasks have
     * ignored or suppressed interruption, causing this executor not
     * to properly terminate.
     *
     * @return true if terminating but not yet terminated
     */
    public boolean isTerminating() {
        int c = ctl.get();
        return ! isRunning(c) && runStateLessThan(c, TERMINATED);
    }

    public boolean isTerminated() {
        return runStateAtLeast(ctl.get(), TERMINATED);
    }

    public boolean awaitTermination(long timeout, TimeUnit unit)
        throws InterruptedException {
        long nanos = unit.toNanos(timeout);
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            for (;;) {
                if (runStateAtLeast(ctl.get(), TERMINATED))
                    return true;
                if (nanos <= 0)
                    return false;
                nanos = termination.awaitNanos(nanos);
            }
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Invokes {@code shutdown} when this executor is no longer
     * referenced and it has no threads.
     */
    protected void finalize() {
        shutdown();
    }

    /**
     * Sets the thread factory used to create new threads.
     *
     * @param threadFactory the new thread factory
     * @throws NullPointerException if threadFactory is null
     * @see #getThreadFactory
     */
    public void setThreadFactory(ThreadFactory threadFactory) {
        if (threadFactory == null)
            throw new NullPointerException();
        this.threadFactory = threadFactory;
    }

    /**
     * Returns the thread factory used to create new threads.
     *
     * @return the current thread factory
     * @see #setThreadFactory
     */
    public ThreadFactory getThreadFactory() {
        return threadFactory;
    }

    /**
     * Sets a new handler for unexecutable tasks.
     *
     * @param handler the new handler
     * @throws NullPointerException if handler is null
     * @see #getRejectedExecutionHandler
     */
    public void setRejectedExecutionHandler(RejectedExecutionHandler handler) {
        if (handler == null)
            throw new NullPointerException();
        this.handler = handler;
    }

    /**
     * Returns the current handler for unexecutable tasks.
     *
     * @return the current handler
     * @see #setRejectedExecutionHandler
     */
    public RejectedExecutionHandler getRejectedExecutionHandler() {
        return handler;
    }

    /**
     * Sets the core number of threads. This overrides any value set
     * in the constructor. If the new value is smaller than the
     * current value, excess existing threads will be terminated when
     * they next become idle. If larger, new threads will, if needed,
     * be started to execute any queued tasks.
     *
     * @param corePoolSize the new core size
     * @throws IllegalArgumentException if {@code corePoolSize < 0}
     * @see #getCorePoolSize
     */
    public void setCorePoolSize(int corePoolSize) {
        if (corePoolSize < 0)
            throw new IllegalArgumentException();
        int delta = corePoolSize - this.corePoolSize;
        this.corePoolSize = corePoolSize;
        if (workerCountOf(ctl.get()) > corePoolSize)
            interruptIdleWorkers();
        else if (delta > 0) {
            // We don't really know how many new threads are "needed".
            // As a heuristic, prestart enough new workers (up to new
            // core size) to handle the current number of tasks in
            // queue, but stop if queue becomes empty while doing so.
            int k = Math.min(delta, workQueue.size());
            while (k-- > 0 && addWorker(null, true)) {
                if (workQueue.isEmpty())
                    break;
            }
        }
    }

    /**
     * Returns the core number of threads.
     *
     * @return the core number of threads
     * @see #setCorePoolSize
     */
    public int getCorePoolSize() {
        return corePoolSize;
    }

    /**
     * Starts a core thread, causing it to idly wait for work. This
     * overrides the default policy of starting core threads only when
     * new tasks are executed. This method will return {@code false}
     * if all core threads have already been started.
     *
     * @return {@code true} if a thread was started
     */
    public boolean prestartCoreThread() {
        return workerCountOf(ctl.get()) < corePoolSize &&
            addWorker(null, true);
    }

    /**
     * Same as prestartCoreThread except arranges that at least one
     * thread is started even if corePoolSize is 0.
     */
    void ensurePrestart() {
        int wc = workerCountOf(ctl.get());
        if (wc < corePoolSize)
            addWorker(null, true);
        else if (wc == 0)
            addWorker(null, false);
    }

    /**
     * Starts all core threads, causing them to idly wait for work. This
     * overrides the default policy of starting core threads only when
     * new tasks are executed.
     *
     * @return the number of threads started
     */
    public int prestartAllCoreThreads() {
        int n = 0;
        while (addWorker(null, true))
            ++n;
        return n;
    }

    /**
     * Returns true if this pool allows core threads to time out and
     * terminate if no tasks arrive within the keepAlive time, being
     * replaced if needed when new tasks arrive. When true, the same
     * keep-alive policy applying to non-core threads applies also to
     * core threads. When false (the default), core threads are never
     * terminated due to lack of incoming tasks.
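The lazy on-demand startup of core threads, and the prestart override, can be sketched as follows. Illustrative only and not part of this file; the class name `PrestartDemo` is invented:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: core threads start lazily, so a fresh pool has
// size 0 until prestartAllCoreThreads() starts all of them at once.
class PrestartDemo {
    static int[] poolSizes() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            3, 3, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        int before = pool.getPoolSize();            // nothing started yet
        int started = pool.prestartAllCoreThreads(); // starts all 3
        int after = pool.getPoolSize();
        pool.shutdown();
        return new int[] { before, started, after };
    }

    public static void main(String[] args) {
        int[] s = poolSizes();
        System.out.println(s[0] + " -> " + s[2] + " (" + s[1] + " started)");
    }
}
```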
     *
     * @return {@code true} if core threads are allowed to time out,
     *         else {@code false}
     *
     * @since 1.6
     */
    public boolean allowsCoreThreadTimeOut() {
        return allowCoreThreadTimeOut;
    }

    /**
     * Sets the policy governing whether core threads may time out and
     * terminate if no tasks arrive within the keep-alive time, being
     * replaced if needed when new tasks arrive. When false, core
     * threads are never terminated due to lack of incoming
     * tasks. When true, the same keep-alive policy applying to
     * non-core threads applies also to core threads. To avoid
     * continual thread replacement, the keep-alive time must be
     * greater than zero when setting {@code true}. This method
     * should in general be called before the pool is actively used.
     *
     * @param value {@code true} if should time out, else {@code false}
     * @throws IllegalArgumentException if value is {@code true}
     *         and the current keep-alive time is not greater than zero
     *
     * @since 1.6
     */
    public void allowCoreThreadTimeOut(boolean value) {
        if (value && keepAliveTime <= 0)
            throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
        if (value != allowCoreThreadTimeOut) {
            allowCoreThreadTimeOut = value;
            if (value)
                interruptIdleWorkers();
        }
    }

    /**
     * Sets the maximum allowed number of threads. This overrides any
     * value set in the constructor. If the new value is smaller than
     * the current value, excess existing threads will be
     * terminated when they next become idle.
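The precondition documented above, that enabling core-thread timeout requires a keep-alive time greater than zero, can be sketched as follows. Illustrative only and not part of this file; the class name `CoreTimeoutDemo` is invented:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: allowCoreThreadTimeOut(true) is rejected while the
// keep-alive time is zero, and accepted once a nonzero time is set.
class CoreTimeoutDemo {
    static boolean rejectsZeroKeepAlive() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4, 0L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>());
        boolean threw = false;
        try {
            pool.allowCoreThreadTimeOut(true);   // keep-alive is zero
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        pool.setKeepAliveTime(30, TimeUnit.SECONDS);
        pool.allowCoreThreadTimeOut(true);       // now legal: idle core threads may exit
        pool.shutdown();
        return threw && pool.allowsCoreThreadTimeOut();
    }

    public static void main(String[] args) {
        System.out.println("precondition enforced: " + rejectsZeroKeepAlive());
    }
}
```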
     *
     * @param maximumPoolSize the new maximum
     * @throws IllegalArgumentException if the new maximum is
     *         less than or equal to zero, or
     *         less than the {@linkplain #getCorePoolSize core pool size}
     * @see #getMaximumPoolSize
     */
    public void setMaximumPoolSize(int maximumPoolSize) {
        if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize)
            throw new IllegalArgumentException();
        this.maximumPoolSize = maximumPoolSize;
        if (workerCountOf(ctl.get()) > maximumPoolSize)
            interruptIdleWorkers();
    }

    /**
     * Returns the maximum allowed number of threads.
     *
     * @return the maximum allowed number of threads
     * @see #setMaximumPoolSize
     */
    public int getMaximumPoolSize() {
        return maximumPoolSize;
    }

    /**
     * Sets the time limit for which threads may remain idle before
     * being terminated. If there are more than the core number of
     * threads currently in the pool, after waiting this amount of
     * time without processing a task, excess threads will be
     * terminated. This overrides any value set in the constructor.
     *
     * @param time the time to wait. A time value of zero will cause
     *        excess threads to terminate immediately after executing tasks.
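The dynamic-resizing setters above re-check the same invariant the constructor enforces (maximumPoolSize must be positive and at least corePoolSize). A standalone sketch, illustrative only and not part of this file, with the invented class name `ResizeDemo`:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: bounds can be retuned after construction, but
// shrinking maximum below the current core size is rejected until the
// core size itself is lowered first.
class ResizeDemo {
    static boolean resizeChecks() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4, 60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>());
        pool.setMaximumPoolSize(8);              // enlarging is always fine
        boolean threw = false;
        try {
            pool.setMaximumPoolSize(1);          // below corePoolSize (2)
        } catch (IllegalArgumentException expected) {
            threw = true;
        }
        pool.setCorePoolSize(1);                 // shrink the core first...
        pool.setMaximumPoolSize(1);              // ...then the same call succeeds
        pool.shutdown();
        return threw
            && pool.getCorePoolSize() == 1
            && pool.getMaximumPoolSize() == 1;
    }

    public static void main(String[] args) {
        System.out.println("resize invariant held: " + resizeChecks());
    }
}
```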
     * @param unit the time unit of the {@code time} argument
     * @throws IllegalArgumentException if {@code time} less than zero or
     *         if {@code time} is zero and {@code allowsCoreThreadTimeOut}
     * @see #getKeepAliveTime
     */
    public void setKeepAliveTime(long time, TimeUnit unit) {
        if (time < 0)
            throw new IllegalArgumentException();
        if (time == 0 && allowsCoreThreadTimeOut())
            throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
        long keepAliveTime = unit.toNanos(time);
        long delta = keepAliveTime - this.keepAliveTime;
        this.keepAliveTime = keepAliveTime;
        if (delta < 0)
            interruptIdleWorkers();
    }

    /**
     * Returns the thread keep-alive time, which is the amount of time
     * that threads in excess of the core pool size may remain
     * idle before being terminated.
     *
     * @param unit the desired time unit of the result
     * @return the time limit
     * @see #setKeepAliveTime
     */
    public long getKeepAliveTime(TimeUnit unit) {
        return unit.convert(keepAliveTime, TimeUnit.NANOSECONDS);
    }

    /* User-level queue utilities */

    /**
     * Returns the task queue used by this executor. Access to the
     * task queue is intended primarily for debugging and monitoring.
     * This queue may be in active use. Retrieving the task queue
     * does not prevent queued tasks from executing.
     *
     * @return the task queue
     */
    public BlockingQueue<Runnable> getQueue() {
        return workQueue;
    }

    /**
     * Removes this task from the executor's internal queue if it is
     * present, thus causing it not to be run if it has not already
     * started.
     *
     * <p>This method may be useful as one part of a cancellation
     * scheme. It may fail to remove tasks that have been converted
     * into other forms before being placed on the internal queue.
     * For
     * example, a task entered using {@code submit} might be
     * converted into a form that maintains {@code Future} status.
     * However, in such cases, method {@link #purge} may be used to
     * remove those Futures that have been cancelled.
     *
     * @param task the task to remove
     * @return true if the task was removed
     */
    public boolean remove(Runnable task) {
        boolean removed = workQueue.remove(task);
        tryTerminate(); // In case SHUTDOWN and now empty
        return removed;
    }

    /**
     * Tries to remove from the work queue all {@link Future}
     * tasks that have been cancelled. This method can be useful as a
     * storage reclamation operation, that has no other impact on
     * functionality. Cancelled tasks are never executed, but may
     * accumulate in work queues until worker threads can actively
     * remove them. Invoking this method instead tries to remove them now.
     * However, this method may fail to remove tasks in
     * the presence of interference by other threads.
     */
    public void purge() {
        final BlockingQueue<Runnable> q = workQueue;
        try {
            Iterator<Runnable> it = q.iterator();
            while (it.hasNext()) {
                Runnable r = it.next();
                if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
                    it.remove();
            }
        } catch (ConcurrentModificationException fallThrough) {
            // Take slow path if we encounter interference during traversal.
            // Make copy for traversal and call remove for cancelled entries.
            // The slow path is more likely to be O(N*N).
            for (Object r : q.toArray())
                if (r instanceof Future<?> && ((Future<?>)r).isCancelled())
                    q.remove(r);
        }

        tryTerminate(); // In case SHUTDOWN and now empty
    }

    /* Statistics */

    /**
     * Returns the current number of threads in the pool.
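The purge() behavior described above, where cancelled submit()-style Futures linger in the queue until reclaimed, can be sketched with one blocked worker. Illustrative only and not part of this file; the class name `PurgeDemo` is invented:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: cancelled FutureTasks stay in the work queue
// (they would simply not run) until purge() removes them.
class PurgeDemo {
    static int[] queueSizes() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>());
        CountDownLatch hold = new CountDownLatch(1);
        // Occupies the single worker so queued tasks are never drained.
        pool.execute(() -> {
            try { hold.await(); } catch (InterruptedException ignored) {}
        });
        Future<?>[] futures = new Future<?>[4];
        for (int i = 0; i < 4; i++)
            futures[i] = pool.submit(() -> {});  // queued behind blocked task
        for (Future<?> f : futures)
            f.cancel(false);
        int before = pool.getQueue().size();     // cancelled tasks still queued
        pool.purge();
        int after = pool.getQueue().size();      // storage reclaimed
        hold.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[] { before, after };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] s = queueSizes();
        System.out.println("queued before purge: " + s[0] + ", after: " + s[1]);
    }
}
```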
     *
     * @return the number of threads
     */
    public int getPoolSize() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            // Remove rare and surprising possibility of
            // isTerminated() && getPoolSize() > 0
            return runStateAtLeast(ctl.get(), TIDYING) ? 0
                : workers.size();
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Returns the approximate number of threads that are actively
     * executing tasks.
     *
     * @return the number of threads
     */
    public int getActiveCount() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            int n = 0;
            for (Worker w : workers)
                if (w.isLocked())
                    ++n;
            return n;
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Returns the largest number of threads that have ever
     * simultaneously been in the pool.
     *
     * @return the number of threads
     */
    public int getLargestPoolSize() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            return largestPoolSize;
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Returns the approximate total number of tasks that have ever been
     * scheduled for execution. Because the states of tasks and
     * threads may change dynamically during computation, the returned
     * value is only an approximation.
     *
     * @return the number of tasks
     */
    public long getTaskCount() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            long n = completedTaskCount;
            for (Worker w : workers) {
                n += w.completedTasks;
                if (w.isLocked())
                    ++n;
            }
            return n + workQueue.size();
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Returns the approximate total number of tasks that have
     * completed execution.
     * Because the states of tasks and threads
     * may change dynamically during computation, the returned value
     * is only an approximation, but one that does not ever decrease
     * across successive calls.
     *
     * @return the number of tasks
     */
    public long getCompletedTaskCount() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            long n = completedTaskCount;
            for (Worker w : workers)
                n += w.completedTasks;
            return n;
        } finally {
            mainLock.unlock();
        }
    }

    /**
     * Returns a string identifying this pool, as well as its state,
     * including indications of run state and estimated worker and
     * task counts.
     *
     * @return a string identifying this pool, as well as its state
     */
    public String toString() {
        long ncompleted;
        int nworkers, nactive;
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            ncompleted = completedTaskCount;
            nactive = 0;
            nworkers = workers.size();
            for (Worker w : workers) {
                ncompleted += w.completedTasks;
                if (w.isLocked())
                    ++nactive;
            }
        } finally {
            mainLock.unlock();
        }
        int c = ctl.get();
        String rs = (runStateLessThan(c, SHUTDOWN) ? "Running" :
                     (runStateAtLeast(c, TERMINATED) ? "Terminated" :
                      "Shutting down"));
        return super.toString() +
            "[" + rs +
            ", pool size = " + nworkers +
            ", active threads = " + nactive +
            ", queued tasks = " + workQueue.size() +
            ", completed tasks = " + ncompleted +
            "]";
    }

    /* Extension hooks */

    /**
     * Method invoked prior to executing the given Runnable in the
     * given thread. This method is invoked by thread {@code t} that
     * will execute task {@code r}, and may be used to re-initialize
     * ThreadLocals, or to perform logging.
     *
     * <p>This implementation does nothing, but may be customized in
     * subclasses.
     * Note: To properly nest multiple overridings, subclasses
     * should generally invoke {@code super.beforeExecute} at the end of
     * this method.
     *
     * @param t the thread that will run task {@code r}
     * @param r the task that will be executed
     */
    protected void beforeExecute(Thread t, Runnable r) { }

    /**
     * Method invoked upon completion of execution of the given Runnable.
     * This method is invoked by the thread that executed the task. If
     * non-null, the Throwable is the uncaught {@code RuntimeException}
     * or {@code Error} that caused execution to terminate abruptly.
     *
     * <p>This implementation does nothing, but may be customized in
     * subclasses. Note: To properly nest multiple overridings, subclasses
     * should generally invoke {@code super.afterExecute} at the
     * beginning of this method.
     *
     * <p><b>Note:</b> When actions are enclosed in tasks (such as
     * {@link FutureTask}) either explicitly or via methods such as
     * {@code submit}, these task objects catch and maintain
     * computational exceptions, and so they do not cause abrupt
     * termination, and the internal exceptions are <em>not</em>
     * passed to this method. If you would like to trap both kinds of
     * failures in this method, you can further probe for such cases,
     * as in this sample subclass that prints either the direct cause
     * or the underlying exception if a task has been aborted:
     *
     * <pre> {@code
     * class ExtendedExecutor extends ThreadPoolExecutor {
     *   // ...
     *   protected void afterExecute(Runnable r, Throwable t) {
     *     super.afterExecute(r, t);
     *     if (t == null && r instanceof Future<?>) {
     *       try {
     *         Object result = ((Future<?>) r).get();
     *       } catch (CancellationException ce) {
     *           t = ce;
     *       } catch (ExecutionException ee) {
     *           t = ee.getCause();
     *       } catch (InterruptedException ie) {
     *           Thread.currentThread().interrupt(); // ignore/reset
     *       }
     *     }
     *     if (t != null)
     *       System.out.println(t);
     *   }
     * }}</pre>
     *
     * @param r the runnable that has completed
     * @param t the exception that caused termination, or null if
     * execution completed normally
     */
    protected void afterExecute(Runnable r, Throwable t) { }

    /**
     * Method invoked when the Executor has terminated. Default
     * implementation does nothing. Note: To properly nest multiple
     * overridings, subclasses should generally invoke
     * {@code super.terminated} within this method.
     */
    protected void terminated() { }

    /* Predefined RejectedExecutionHandlers */

    /**
     * A handler for rejected tasks that runs the rejected task
     * directly in the calling thread of the {@code execute} method,
     * unless the executor has been shut down, in which case the task
     * is discarded.
     */
    public static class CallerRunsPolicy implements RejectedExecutionHandler {
        /**
         * Creates a {@code CallerRunsPolicy}.
         */
        public CallerRunsPolicy() { }

        /**
         * Executes task r in the caller's thread, unless the executor
         * has been shut down, in which case the task is discarded.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         */
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            if (!e.isShutdown()) {
                r.run();
            }
        }
    }

    /**
     * A handler for rejected tasks that throws a
     * {@code RejectedExecutionException}.
     */
    public static class AbortPolicy implements RejectedExecutionHandler {
        /**
         * Creates an {@code AbortPolicy}.
         */
        public AbortPolicy() { }

        /**
         * Always throws RejectedExecutionException.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         * @throws RejectedExecutionException always
         */
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            throw new RejectedExecutionException("Task " + r.toString() +
                                                 " rejected from " +
                                                 e.toString());
        }
    }

    /**
     * A handler for rejected tasks that silently discards the
     * rejected task.
     */
    public static class DiscardPolicy implements RejectedExecutionHandler {
        /**
         * Creates a {@code DiscardPolicy}.
         */
        public DiscardPolicy() { }

        /**
         * Does nothing, which has the effect of discarding task r.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         */
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        }
    }

    /**
     * A handler for rejected tasks that discards the oldest unhandled
     * request and then retries {@code execute}, unless the executor
     * is shut down, in which case the task is discarded.
     */
    public static class DiscardOldestPolicy implements RejectedExecutionHandler {
        /**
         * Creates a {@code DiscardOldestPolicy}.
         */
        public DiscardOldestPolicy() { }

        /**
         * Obtains and ignores the next task that the executor
         * would otherwise execute, if one is immediately available,
         * and then retries execution of task r, unless the executor
         * is shut down, in which case task r is instead discarded.
         *
         * @param r the runnable task requested to be executed
         * @param e the executor attempting to execute this task
         */
        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            if (!e.isShutdown()) {
                e.getQueue().poll();
                e.execute(r);
            }
        }
    }
}
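As a usage note (separate from the class source above): the effect of {@code CallerRunsPolicy} can be demonstrated with a small, self-contained driver. The sketch below saturates a pool with one worker and a one-slot queue so that a third {@code execute} is rejected; the handler then runs that task on the submitting thread. The class name {@code RejectionDemo} is illustrative, not part of the library.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class RejectionDemo {
    public static void main(String[] args) throws Exception {
        // One core/max thread and a one-slot queue: the third submission
        // must be rejected while the worker is still busy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        CountDownLatch block = new CountDownLatch(1);
        AtomicBoolean ranInCaller = new AtomicBoolean(false);
        Thread caller = Thread.currentThread();

        pool.execute(() -> {            // occupies the single worker thread
            try { block.await(); } catch (InterruptedException ignored) { }
        });
        pool.execute(() -> { });        // fills the one-slot queue
        pool.execute(() ->              // rejected -> CallerRunsPolicy runs it here
            ranInCaller.set(Thread.currentThread() == caller));

        block.countDown();              // release the worker
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("ran in caller: " + ranInCaller.get());
    }
}
```

Because the worker is parked on the latch when the third task arrives, the rejection is deterministic and the program prints {@code ran in caller: true}. Substituting {@code AbortPolicy} in the constructor would instead make the third {@code execute} throw {@code RejectedExecutionException}.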