/*
 * Written by Doug Lea, Bill Scherer, and Michael Scott with
 * assistance from members of JCP JSR-166 Expert Group and released to
 * the public domain, as explained at
 * http://creativecommons.org/publicdomain/zero/1.0/
 */

package java.util.concurrent;
import java.util.concurrent.locks.LockSupport;
import java.util.concurrent.locks.ReentrantLock;
import java.util.*;

// BEGIN android-note
// removed link to collections framework docs
// END android-note

/**
 * A {@linkplain BlockingQueue blocking queue} in which each insert
 * operation must wait for a corresponding remove operation by another
 * thread, and vice versa.  A synchronous queue does not have any
 * internal capacity, not even a capacity of one.  You cannot
 * {@code peek} at a synchronous queue because an element is only
 * present when you try to remove it; you cannot insert an element
 * (using any method) unless another thread is trying to remove it;
 * you cannot iterate as there is nothing to iterate.  The
 * <em>head</em> of the queue is the element that the first queued
 * inserting thread is trying to add to the queue; if there is no such
 * queued thread then no element is available for removal and
 * {@code poll()} will return {@code null}.  For purposes of other
 * {@code Collection} methods (for example {@code contains}), a
 * {@code SynchronousQueue} acts as an empty collection.  This queue
 * does not permit {@code null} elements.
 *
 * <p>Synchronous queues are similar to rendezvous channels used in
 * CSP and Ada. They are well suited for handoff designs, in which an
 * object running in one thread must sync up with an object running
 * in another thread in order to hand it some information, event, or
 * task.
 *
 * <p>This class supports an optional fairness policy for ordering
 * waiting producer and consumer threads.  By default, this ordering
 * is not guaranteed.
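 *
 * <p>For example (an illustrative sketch using only the documented
 * behavior of this class), two threads can hand off a single item:
 * <pre> {@code
 * SynchronousQueue<String> q = new SynchronousQueue<String>();
 * // in a producer thread:
 * q.put("message");      // blocks until a consumer takes it
 * // in a consumer thread:
 * String m = q.take();   // blocks until a producer puts an item}</pre>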
 * However, a queue constructed with fairness set
 * to {@code true} grants threads access in FIFO order.
 *
 * <p>This class and its iterator implement all of the
 * <em>optional</em> methods of the {@link Collection} and {@link
 * Iterator} interfaces.
 *
 * @since 1.5
 * @author Doug Lea and Bill Scherer and Michael Scott
 * @param <E> the type of elements held in this collection
 */
public class SynchronousQueue<E> extends AbstractQueue<E>
    implements BlockingQueue<E>, java.io.Serializable {
    private static final long serialVersionUID = -3223113410248163686L;

    /*
     * This class implements extensions of the dual stack and dual
     * queue algorithms described in "Nonblocking Concurrent Objects
     * with Condition Synchronization", by W. N. Scherer III and
     * M. L. Scott.  18th Annual Conf. on Distributed Computing,
     * Oct. 2004 (see also
     * http://www.cs.rochester.edu/u/scott/synchronization/pseudocode/duals.html).
     * The (Lifo) stack is used for non-fair mode, and the (Fifo)
     * queue for fair mode. The performance of the two is generally
     * similar. Fifo usually supports higher throughput under
     * contention but Lifo maintains higher thread locality in common
     * applications.
     *
     * A dual queue (and similarly stack) is one that at any given
     * time either holds "data" -- items provided by put operations,
     * or "requests" -- slots representing take operations, or is
     * empty. A call to "fulfill" (i.e., a call requesting an item
     * from a queue holding data or vice versa) dequeues a
     * complementary node.  The most interesting feature of these
     * queues is that any operation can figure out which mode the
     * queue is in, and act accordingly without needing locks.
     *
     * Both the queue and stack extend abstract class Transferer
     * defining the single method transfer that does a put or a
     * take.
     * These are unified into a single method because in dual
     * data structures, the put and take operations are symmetrical,
     * so nearly all code can be combined. The resulting transfer
     * methods are on the long side, but are easier to follow than
     * they would be if broken up into nearly-duplicated parts.
     *
     * The queue and stack data structures share many conceptual
     * similarities but very few concrete details. For simplicity,
     * they are kept distinct so that they can later evolve
     * separately.
     *
     * The algorithms here differ from the versions in the above paper
     * in extending them for use in synchronous queues, as well as
     * dealing with cancellation. The main differences include:
     *
     *  1. The original algorithms used bit-marked pointers, but
     *     the ones here use mode bits in nodes, leading to a number
     *     of further adaptations.
     *  2. SynchronousQueues must block threads waiting to become
     *     fulfilled.
     *  3. Support for cancellation via timeout and interrupts,
     *     including cleaning out cancelled nodes/threads
     *     from lists to avoid garbage retention and memory depletion.
     *
     * Blocking is mainly accomplished using LockSupport park/unpark,
     * except that nodes that appear to be the next ones to become
     * fulfilled first spin a bit (on multiprocessors only). On very
     * busy synchronous queues, spinning can dramatically improve
     * throughput. And on less busy ones, the amount of spinning is
     * small enough not to be noticeable.
     *
     * Cleaning is done in different ways in queues vs stacks.  For
     * queues, we can almost always remove a node immediately in O(1)
     * time (modulo retries for consistency checks) when it is
     * cancelled. But if it may be pinned as the current tail, it must
     * wait until some subsequent cancellation.
     * For stacks, we need a
     * potentially O(n) traversal to be sure that we can remove the
     * node, but this can run concurrently with other threads
     * accessing the stack.
     *
     * While garbage collection takes care of most node reclamation
     * issues that otherwise complicate nonblocking algorithms, care
     * is taken to "forget" references to data, other nodes, and
     * threads that might be held on to long-term by blocked
     * threads. In cases where setting to null would otherwise
     * conflict with main algorithms, this is done by changing a
     * node's link to now point to the node itself. This doesn't arise
     * much for Stack nodes (because blocked threads do not hang on to
     * old head pointers), but references in Queue nodes must be
     * aggressively forgotten to avoid reachability of everything any
     * node has ever referred to since arrival.
     */

    /**
     * Shared internal API for dual stacks and queues.
     */
    abstract static class Transferer<E> {
        /**
         * Performs a put or take.
         *
         * @param e if non-null, the item to be handed to a consumer;
         *          if null, requests that transfer return an item
         *          offered by producer.
         * @param timed if this operation should timeout
         * @param nanos the timeout, in nanoseconds
         * @return if non-null, the item provided or received; if null,
         *         the operation failed due to timeout or interrupt --
         *         the caller can distinguish which of these occurred
         *         by checking Thread.interrupted.
         */
        abstract E transfer(E e, boolean timed, long nanos);
    }

    /** The number of CPUs, for spin control */
    static final int NCPUS = Runtime.getRuntime().availableProcessors();

    /**
     * The number of times to spin before blocking in timed waits.
     * The value is empirically derived -- it works well across a
     * variety of processors and OSes.
     * Empirically, the best value
     * seems not to vary with number of CPUs (beyond 2) so is just
     * a constant.
     */
    static final int maxTimedSpins = (NCPUS < 2) ? 0 : 32;

    /**
     * The number of times to spin before blocking in untimed waits.
     * This is greater than timed value because untimed waits spin
     * faster since they don't need to check times on each spin.
     */
    static final int maxUntimedSpins = maxTimedSpins * 16;

    /**
     * The number of nanoseconds for which it is faster to spin
     * rather than to use timed park. A rough estimate suffices.
     */
    static final long spinForTimeoutThreshold = 1000L;

    /** Dual stack */
    static final class TransferStack<E> extends Transferer<E> {
        /*
         * This extends Scherer-Scott dual stack algorithm, differing,
         * among other ways, by using "covering" nodes rather than
         * bit-marked pointers: Fulfilling operations push on marker
         * nodes (with FULFILLING bit set in mode) to reserve a spot
         * to match a waiting node.
         */

        /* Modes for SNodes, ORed together in node fields */
        /** Node represents an unfulfilled consumer */
        static final int REQUEST    = 0;
        /** Node represents an unfulfilled producer */
        static final int DATA       = 1;
        /** Node is fulfilling another unfulfilled DATA or REQUEST */
        static final int FULFILLING = 2;

        /** Returns true if m has fulfilling bit set. */
        static boolean isFulfilling(int m) { return (m & FULFILLING) != 0; }

        /** Node class for TransferStacks. */
        static final class SNode {
            volatile SNode next;        // next node in stack
            volatile SNode match;       // the node matched to this
            volatile Thread waiter;     // to control park/unpark
            Object item;                // data; or null for REQUESTs
            int mode;
            // Note: item and mode fields don't need to be volatile
            // since they are always written before, and read after,
            // other volatile/atomic operations.

            SNode(Object item) {
                this.item = item;
            }

            boolean casNext(SNode cmp, SNode val) {
                return cmp == next &&
                    UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
            }

            /**
             * Tries to match node s to this node, if so, waking up thread.
             * Fulfillers call tryMatch to identify their waiters.
             * Waiters block until they have been matched.
             *
             * @param s the node to match
             * @return true if successfully matched to s
             */
            boolean tryMatch(SNode s) {
                if (match == null &&
                    UNSAFE.compareAndSwapObject(this, matchOffset, null, s)) {
                    Thread w = waiter;
                    if (w != null) {    // waiters need at most one unpark
                        waiter = null;
                        LockSupport.unpark(w);
                    }
                    return true;
                }
                return match == s;
            }

            /**
             * Tries to cancel a wait by matching node to itself.
             */
            void tryCancel() {
                UNSAFE.compareAndSwapObject(this, matchOffset, null, this);
            }

            boolean isCancelled() {
                return match == this;
            }

            // Unsafe mechanics
            private static final sun.misc.Unsafe UNSAFE;
            private static final long matchOffset;
            private static final long nextOffset;

            static {
                try {
                    UNSAFE = sun.misc.Unsafe.getUnsafe();
                    Class<?> k = SNode.class;
                    matchOffset = UNSAFE.objectFieldOffset
                        (k.getDeclaredField("match"));
                    nextOffset = UNSAFE.objectFieldOffset
                        (k.getDeclaredField("next"));
                } catch (Exception e) {
                    throw new Error(e);
                }
            }
        }

        /** The head (top) of the stack */
        volatile SNode head;

        boolean casHead(SNode h, SNode nh) {
            return h == head &&
                UNSAFE.compareAndSwapObject(this, headOffset, h, nh);
        }

        /**
         * Creates or resets fields of a node.
         * Called only from transfer
         * where the node to push on stack is lazily created and
         * reused when possible to help reduce intervals between reads
         * and CASes of head and to avoid surges of garbage when CASes
         * to push nodes fail due to contention.
         */
        static SNode snode(SNode s, Object e, SNode next, int mode) {
            if (s == null) s = new SNode(e);
            s.mode = mode;
            s.next = next;
            return s;
        }

        /**
         * Puts or takes an item.
         */
        @SuppressWarnings("unchecked")
        E transfer(E e, boolean timed, long nanos) {
            /*
             * Basic algorithm is to loop trying one of three actions:
             *
             * 1. If apparently empty or already containing nodes of same
             *    mode, try to push node on stack and wait for a match,
             *    returning it, or null if cancelled.
             *
             * 2. If apparently containing node of complementary mode,
             *    try to push a fulfilling node on to stack, match
             *    with corresponding waiting node, pop both from
             *    stack, and return matched item. The matching or
             *    unlinking might not actually be necessary because of
             *    other threads performing action 3:
             *
             * 3. If top of stack already holds another fulfilling node,
             *    help it out by doing its match and/or pop
             *    operations, and then continue. The code for helping
             *    is essentially the same as for fulfilling, except
             *    that it doesn't return the item.
             */

            SNode s = null; // constructed/reused as needed
            int mode = (e == null) ?
                REQUEST : DATA;

            for (;;) {
                SNode h = head;
                if (h == null || h.mode == mode) {  // empty or same-mode
                    if (timed && nanos <= 0) {      // can't wait
                        if (h != null && h.isCancelled())
                            casHead(h, h.next);     // pop cancelled node
                        else
                            return null;
                    } else if (casHead(h, s = snode(s, e, h, mode))) {
                        SNode m = awaitFulfill(s, timed, nanos);
                        if (m == s) {               // wait was cancelled
                            clean(s);
                            return null;
                        }
                        if ((h = head) != null && h.next == s)
                            casHead(h, s.next);     // help s's fulfiller
                        return (E) ((mode == REQUEST) ? m.item : s.item);
                    }
                } else if (!isFulfilling(h.mode)) { // try to fulfill
                    if (h.isCancelled())            // already cancelled
                        casHead(h, h.next);         // pop and retry
                    else if (casHead(h, s=snode(s, e, h, FULFILLING|mode))) {
                        for (;;) { // loop until matched or waiters disappear
                            SNode m = s.next;       // m is s's match
                            if (m == null) {        // all waiters are gone
                                casHead(s, null);   // pop fulfill node
                                s = null;           // use new node next time
                                break;              // restart main loop
                            }
                            SNode mn = m.next;
                            if (m.tryMatch(s)) {
                                casHead(s, mn);     // pop both s and m
                                return (E) ((mode == REQUEST) ? m.item : s.item);
                            } else                  // lost match
                                s.casNext(m, mn);   // help unlink
                        }
                    }
                } else {                            // help a fulfiller
                    SNode m = h.next;               // m is h's match
                    if (m == null)                  // waiter is gone
                        casHead(h, null);           // pop fulfilling node
                    else {
                        SNode mn = m.next;
                        if (m.tryMatch(h))          // help match
                            casHead(h, mn);         // pop both h and m
                        else                        // lost match
                            h.casNext(m, mn);       // help unlink
                    }
                }
            }
        }

        /**
         * Spins/blocks until node s is matched by a fulfill operation.
         *
         * @param s the waiting node
         * @param timed true if timed wait
         * @param nanos timeout value
         * @return matched node, or s if cancelled
         */
        SNode awaitFulfill(SNode s, boolean timed, long nanos) {
            /*
             * When a node/thread is about to block, it sets its waiter
             * field and then rechecks state at least one more time
             * before actually parking, thus covering race vs
             * fulfiller noticing that waiter is non-null so should be
             * woken.
             *
             * When invoked by nodes that appear at the point of call
             * to be at the head of the stack, calls to park are
             * preceded by spins to avoid blocking when producers and
             * consumers are arriving very close in time.  This can
             * happen enough to bother only on multiprocessors.
             *
             * The order of checks for returning out of main loop
             * reflects fact that interrupts have precedence over
             * normal returns, which have precedence over
             * timeouts. (So, on timeout, one last check for match is
             * done before giving up.) Except that calls from untimed
             * SynchronousQueue.{poll/offer} don't check interrupts
             * and don't wait at all, so are trapped in transfer
             * method rather than calling awaitFulfill.
             */
            final long deadline = timed ? System.nanoTime() + nanos : 0L;
            Thread w = Thread.currentThread();
            int spins = (shouldSpin(s) ?
                         (timed ? maxTimedSpins : maxUntimedSpins) : 0);
            for (;;) {
                if (w.isInterrupted())
                    s.tryCancel();
                SNode m = s.match;
                if (m != null)
                    return m;
                if (timed) {
                    nanos = deadline - System.nanoTime();
                    if (nanos <= 0L) {
                        s.tryCancel();
                        continue;
                    }
                }
                if (spins > 0)
                    spins = shouldSpin(s) ?
                        (spins-1) : 0;
                else if (s.waiter == null)
                    s.waiter = w; // establish waiter so can park next iter
                else if (!timed)
                    LockSupport.park(this);
                else if (nanos > spinForTimeoutThreshold)
                    LockSupport.parkNanos(this, nanos);
            }
        }

        /**
         * Returns true if node s is at head or there is an active
         * fulfiller.
         */
        boolean shouldSpin(SNode s) {
            SNode h = head;
            return (h == s || h == null || isFulfilling(h.mode));
        }

        /**
         * Unlinks s from the stack.
         */
        void clean(SNode s) {
            s.item = null;   // forget item
            s.waiter = null; // forget thread

            /*
             * At worst we may need to traverse entire stack to unlink
             * s. If there are multiple concurrent calls to clean, we
             * might not see s if another thread has already removed
             * it. But we can stop when we see any node known to
             * follow s. We use s.next unless it too is cancelled, in
             * which case we try the node one past. We don't check any
             * further because we don't want to doubly traverse just to
             * find sentinel.
             */

            SNode past = s.next;
            if (past != null && past.isCancelled())
                past = past.next;

            // Absorb cancelled nodes at head
            SNode p;
            while ((p = head) != null && p != past && p.isCancelled())
                casHead(p, p.next);

            // Unsplice embedded nodes
            while (p != null && p != past) {
                SNode n = p.next;
                if (n != null && n.isCancelled())
                    p.casNext(n, n.next);
                else
                    p = n;
            }
        }

        // Unsafe mechanics
        private static final sun.misc.Unsafe UNSAFE;
        private static final long headOffset;
        static {
            try {
                UNSAFE = sun.misc.Unsafe.getUnsafe();
                Class<?> k = TransferStack.class;
                headOffset = UNSAFE.objectFieldOffset
                    (k.getDeclaredField("head"));
            } catch (Exception e) {
                throw new Error(e);
            }
        }
    }

    /** Dual Queue */
    static final class TransferQueue<E> extends Transferer<E> {
        /*
         * This extends Scherer-Scott dual queue algorithm, differing,
         * among other ways, by using modes within nodes rather than
         * marked pointers. The algorithm is a little simpler than
         * that for stacks because fulfillers do not need explicit
         * nodes, and matching is done by CAS'ing QNode.item field
         * from non-null to null (for put) or vice versa (for take).
         */

        /** Node class for TransferQueue. */
        static final class QNode {
            volatile QNode next;          // next node in queue
            volatile Object item;         // CAS'ed to or from null
            volatile Thread waiter;       // to control park/unpark
            final boolean isData;

            QNode(Object item, boolean isData) {
                this.item = item;
                this.isData = isData;
            }

            boolean casNext(QNode cmp, QNode val) {
                return next == cmp &&
                    UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
            }

            boolean casItem(Object cmp, Object val) {
                return item == cmp &&
                    UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);
            }

            /**
             * Tries to cancel by CAS'ing ref to this as item.
             */
            void tryCancel(Object cmp) {
                UNSAFE.compareAndSwapObject(this, itemOffset, cmp, this);
            }

            boolean isCancelled() {
                return item == this;
            }

            /**
             * Returns true if this node is known to be off the queue
             * because its next pointer has been forgotten due to
             * an advanceHead operation.
             */
            boolean isOffList() {
                return next == this;
            }

            // Unsafe mechanics
            private static final sun.misc.Unsafe UNSAFE;
            private static final long itemOffset;
            private static final long nextOffset;

            static {
                try {
                    UNSAFE = sun.misc.Unsafe.getUnsafe();
                    Class<?> k = QNode.class;
                    itemOffset = UNSAFE.objectFieldOffset
                        (k.getDeclaredField("item"));
                    nextOffset = UNSAFE.objectFieldOffset
                        (k.getDeclaredField("next"));
                } catch (Exception e) {
                    throw new Error(e);
                }
            }
        }

        /** Head of queue */
        transient volatile QNode head;
        /** Tail of queue */
        transient volatile QNode tail;
        /**
         * Reference to a cancelled node that might not yet have been
         * unlinked from queue because it was the last inserted node
         * when it was cancelled.
         */
        transient volatile QNode cleanMe;

        TransferQueue() {
            QNode h = new QNode(null, false); // initialize to dummy node.
            head = h;
            tail = h;
        }

        /**
         * Tries to cas nh as new head; if successful, unlink
         * old head's next node to avoid garbage retention.
         */
        void advanceHead(QNode h, QNode nh) {
            if (h == head &&
                UNSAFE.compareAndSwapObject(this, headOffset, h, nh))
                h.next = h; // forget old next
        }

        /**
         * Tries to cas nt as new tail.
         */
        void advanceTail(QNode t, QNode nt) {
            if (tail == t)
                UNSAFE.compareAndSwapObject(this, tailOffset, t, nt);
        }

        /**
         * Tries to CAS cleanMe slot.
         */
        boolean casCleanMe(QNode cmp, QNode val) {
            return cleanMe == cmp &&
                UNSAFE.compareAndSwapObject(this, cleanMeOffset, cmp, val);
        }

        /**
         * Puts or takes an item.
         */
        @SuppressWarnings("unchecked")
        E transfer(E e, boolean timed, long nanos) {
            /* Basic algorithm is to loop trying to take either of
             * two actions:
             *
             * 1. If queue apparently empty or holding same-mode nodes,
             *    try to add node to queue of waiters, wait to be
             *    fulfilled (or cancelled) and return matching item.
             *
             * 2. If queue apparently contains waiting items, and this
             *    call is of complementary mode, try to fulfill by CAS'ing
             *    item field of waiting node and dequeuing it, and then
             *    returning matching item.
             *
             * In each case, along the way, check for and try to help
             * advance head and tail on behalf of other stalled/slow
             * threads.
             *
             * The loop starts off with a null check guarding against
             * seeing uninitialized head or tail values. This never
             * happens in current SynchronousQueue, but could if
             * callers held non-volatile/final ref to the
             * transferer. The check is here anyway because it places
             * null checks at top of loop, which is usually faster
             * than having them implicitly interspersed.
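             *
             * Illustrative trace of action 2 (a simplified sketch of
             * the code below, not additional logic): a take() that
             * finds a waiting put(x) does roughly:
             *
             *   QNode m = h.next;         // waiting data node
             *   Object x = m.item;        // the offered item
             *   if (m.casItem(x, null))   // e is null for a take
             *       advanceHead(h, m);    // dequeue, then unpark m.waiter
             *   return x;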
             */

            QNode s = null; // constructed/reused as needed
            boolean isData = (e != null);

            for (;;) {
                QNode t = tail;
                QNode h = head;
                if (t == null || h == null)         // saw uninitialized value
                    continue;                       // spin

                if (h == t || t.isData == isData) { // empty or same-mode
                    QNode tn = t.next;
                    if (t != tail)                  // inconsistent read
                        continue;
                    if (tn != null) {               // lagging tail
                        advanceTail(t, tn);
                        continue;
                    }
                    if (timed && nanos <= 0)        // can't wait
                        return null;
                    if (s == null)
                        s = new QNode(e, isData);
                    if (!t.casNext(null, s))        // failed to link in
                        continue;

                    advanceTail(t, s);              // swing tail and wait
                    Object x = awaitFulfill(s, e, timed, nanos);
                    if (x == s) {                   // wait was cancelled
                        clean(t, s);
                        return null;
                    }

                    if (!s.isOffList()) {           // not already unlinked
                        advanceHead(t, s);          // unlink if head
                        if (x != null)              // and forget fields
                            s.item = s;
                        s.waiter = null;
                    }
                    return (x != null) ? (E)x : e;

                } else {                            // complementary-mode
                    QNode m = h.next;               // node to fulfill
                    if (t != tail || m == null || h != head)
                        continue;                   // inconsistent read

                    Object x = m.item;
                    if (isData == (x != null) ||    // m already fulfilled
                        x == m ||                   // m cancelled
                        !m.casItem(x, e)) {         // lost CAS
                        advanceHead(h, m);          // dequeue and retry
                        continue;
                    }

                    advanceHead(h, m);              // successfully fulfilled
                    LockSupport.unpark(m.waiter);
                    return (x != null) ? (E)x : e;
                }
            }
        }

        /**
         * Spins/blocks until node s is fulfilled.
         *
         * @param s the waiting node
         * @param e the comparison value for checking match
         * @param timed true if timed wait
         * @param nanos timeout value
         * @return matched item, or s if cancelled
         */
        Object awaitFulfill(QNode s, E e, boolean timed, long nanos) {
            /* Same idea as TransferStack.awaitFulfill */
            final long deadline = timed ?
                System.nanoTime() + nanos : 0L;
            Thread w = Thread.currentThread();
            int spins = ((head.next == s) ?
                         (timed ? maxTimedSpins : maxUntimedSpins) : 0);
            for (;;) {
                if (w.isInterrupted())
                    s.tryCancel(e);
                Object x = s.item;
                if (x != e)
                    return x;
                if (timed) {
                    nanos = deadline - System.nanoTime();
                    if (nanos <= 0L) {
                        s.tryCancel(e);
                        continue;
                    }
                }
                if (spins > 0)
                    --spins;
                else if (s.waiter == null)
                    s.waiter = w;
                else if (!timed)
                    LockSupport.park(this);
                else if (nanos > spinForTimeoutThreshold)
                    LockSupport.parkNanos(this, nanos);
            }
        }

        /**
         * Gets rid of cancelled node s with original predecessor pred.
         */
        void clean(QNode pred, QNode s) {
            s.waiter = null; // forget thread
            /*
             * At any given time, exactly one node on list cannot be
             * deleted -- the last inserted node. To accommodate this,
             * if we cannot delete s, we save its predecessor as
             * "cleanMe", deleting the previously saved version
             * first. At least one of node s or the node previously
             * saved can always be deleted, so this always terminates.
             */
            while (pred.next == s) { // Return early if already unlinked
                QNode h = head;
                QNode hn = h.next;   // Absorb cancelled first node as head
                if (hn != null && hn.isCancelled()) {
                    advanceHead(h, hn);
                    continue;
                }
                QNode t = tail;      // Ensure consistent read for tail
                if (t == h)
                    return;
                QNode tn = t.next;
                if (t != tail)
                    continue;
                if (tn != null) {
                    advanceTail(t, tn);
                    continue;
                }
                if (s != t) {        // If not tail, try to unsplice
                    QNode sn = s.next;
                    if (sn == s || pred.casNext(s, sn))
                        return;
                }
                QNode dp = cleanMe;
                if (dp != null) {    // Try unlinking previous cancelled node
                    QNode d = dp.next;
                    QNode dn;
                    if (d == null ||               // d is gone or
                        d == dp ||                 // d is off list or
                        !d.isCancelled() ||        // d not cancelled or
                        (d != t &&                 // d not tail and
                         (dn = d.next) != null &&  //   has successor
                         dn != d &&                //   that is on list
                         dp.casNext(d, dn)))       // d unspliced
                        casCleanMe(dp, null);
                    if (dp == pred)
                        return;      // s is already saved node
                } else if (casCleanMe(null, pred))
                    return;          // Postpone cleaning s
            }
        }

        private static final sun.misc.Unsafe UNSAFE;
        private static final long headOffset;
        private static final long tailOffset;
        private static final long cleanMeOffset;
        static {
            try {
                UNSAFE = sun.misc.Unsafe.getUnsafe();
                Class<?> k = TransferQueue.class;
                headOffset = UNSAFE.objectFieldOffset
                    (k.getDeclaredField("head"));
                tailOffset = UNSAFE.objectFieldOffset
                    (k.getDeclaredField("tail"));
                cleanMeOffset = UNSAFE.objectFieldOffset
                    (k.getDeclaredField("cleanMe"));
            } catch (Exception e) {
                throw new Error(e);
            }
        }
    }

    /**
     * The transferer. Set only in constructor, but cannot be declared
     * as final without further complicating serialization.
     * Since
     * this is accessed only at most once per public method, there
     * isn't a noticeable performance penalty for using volatile
     * instead of final here.
     */
    private transient volatile Transferer<E> transferer;

    /**
     * Creates a {@code SynchronousQueue} with nonfair access policy.
     */
    public SynchronousQueue() {
        this(false);
    }

    /**
     * Creates a {@code SynchronousQueue} with the specified fairness policy.
     *
     * @param fair if true, waiting threads contend in FIFO order for
     *        access; otherwise the order is unspecified.
     */
    public SynchronousQueue(boolean fair) {
        transferer = fair ? new TransferQueue<E>() : new TransferStack<E>();
    }

    /**
     * Adds the specified element to this queue, waiting if necessary for
     * another thread to receive it.
     *
     * @throws InterruptedException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     */
    public void put(E e) throws InterruptedException {
        if (e == null) throw new NullPointerException();
        if (transferer.transfer(e, false, 0) == null) {
            Thread.interrupted();
            throw new InterruptedException();
        }
    }

    /**
     * Inserts the specified element into this queue, waiting if necessary
     * up to the specified wait time for another thread to receive it.
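     *
     * <p>An illustrative call (a sketch; {@code queue} and {@code task}
     * are placeholders, not part of this class):
     * <pre> {@code
     * if (!queue.offer(task, 50, TimeUnit.MILLISECONDS)) {
     *     // no consumer arrived within 50 ms; task was not enqueued
     * }}</pre>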
     *
     * @return {@code true} if successful, or {@code false} if the
     *         specified waiting time elapses before a consumer appears
     * @throws InterruptedException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     */
    public boolean offer(E e, long timeout, TimeUnit unit)
        throws InterruptedException {
        if (e == null) throw new NullPointerException();
        if (transferer.transfer(e, true, unit.toNanos(timeout)) != null)
            return true;
        if (!Thread.interrupted())
            return false;
        throw new InterruptedException();
    }

    /**
     * Inserts the specified element into this queue, if another thread is
     * waiting to receive it.
     *
     * @param e the element to add
     * @return {@code true} if the element was added to this queue, else
     *         {@code false}
     * @throws NullPointerException if the specified element is null
     */
    public boolean offer(E e) {
        if (e == null) throw new NullPointerException();
        return transferer.transfer(e, true, 0) != null;
    }

    /**
     * Retrieves and removes the head of this queue, waiting if necessary
     * for another thread to insert it.
     *
     * @return the head of this queue
     * @throws InterruptedException {@inheritDoc}
     */
    public E take() throws InterruptedException {
        E e = transferer.transfer(null, false, 0);
        if (e != null)
            return e;
        Thread.interrupted();
        throw new InterruptedException();
    }

    /**
     * Retrieves and removes the head of this queue, waiting
     * if necessary up to the specified wait time, for another thread
     * to insert it.
     *
     * @return the head of this queue, or {@code null} if the
     *         specified waiting time elapses before an element is present
     * @throws InterruptedException {@inheritDoc}
     */
    public E poll(long timeout, TimeUnit unit) throws InterruptedException {
        E e = transferer.transfer(null, true, unit.toNanos(timeout));
        if (e != null || !Thread.interrupted())
            return e;
        throw new InterruptedException();
    }

    /**
     * Retrieves and removes the head of this queue, if another thread
     * is currently making an element available.
     *
     * @return the head of this queue, or {@code null} if no
     *         element is available
     */
    public E poll() {
        return transferer.transfer(null, true, 0);
    }

    /**
     * Always returns {@code true}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @return {@code true}
     */
    public boolean isEmpty() {
        return true;
    }

    /**
     * Always returns zero.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @return zero
     */
    public int size() {
        return 0;
    }

    /**
     * Always returns zero.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @return zero
     */
    public int remainingCapacity() {
        return 0;
    }

    /**
     * Does nothing.
     * A {@code SynchronousQueue} has no internal capacity.
     */
    public void clear() {
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param o the element
     * @return {@code false}
     */
    public boolean contains(Object o) {
        return false;
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param o the element to remove
     * @return {@code false}
     */
    public boolean remove(Object o) {
        return false;
    }

    /**
     * Returns {@code false} unless the given collection is empty.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param c the collection
     * @return {@code false} unless given collection is empty
     */
    public boolean containsAll(Collection<?> c) {
        return c.isEmpty();
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param c the collection
     * @return {@code false}
     */
    public boolean removeAll(Collection<?> c) {
        return false;
    }

    /**
     * Always returns {@code false}.
     * A {@code SynchronousQueue} has no internal capacity.
     *
     * @param c the collection
     * @return {@code false}
     */
    public boolean retainAll(Collection<?> c) {
        return false;
    }

    /**
     * Always returns {@code null}.
     * A {@code SynchronousQueue} does not return elements
     * unless actively waited on.
     *
     * @return {@code null}
     */
    public E peek() {
        return null;
    }

    /**
     * Returns an empty iterator in which {@code hasNext} always returns
     * {@code false}.
     *
     * @return an empty iterator
     */
    @SuppressWarnings("unchecked")
    public Iterator<E> iterator() {
        return (Iterator<E>) EmptyIterator.EMPTY_ITERATOR;
    }

    // Replicated from a previous version of Collections
    private static class EmptyIterator<E> implements Iterator<E> {
        static final EmptyIterator<Object> EMPTY_ITERATOR
            = new EmptyIterator<Object>();

        public boolean hasNext() { return false; }
        public E next() { throw new NoSuchElementException(); }
        public void remove() { throw new IllegalStateException(); }
    }

    /**
     * Returns a zero-length array.
     * @return a zero-length array
     */
    public Object[] toArray() {
        return new Object[0];
    }

    /**
     * Sets the zeroth element of the specified array to {@code null}
     * (if the array has non-zero length) and returns it.
     *
     * @param a the array
     * @return the specified array
     * @throws NullPointerException if the specified array is null
     */
    public <T> T[] toArray(T[] a) {
        if (a.length > 0)
            a[0] = null;
        return a;
    }

    /**
     * @throws UnsupportedOperationException {@inheritDoc}
     * @throws ClassCastException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     * @throws IllegalArgumentException {@inheritDoc}
     */
    public int drainTo(Collection<? super E> c) {
        if (c == null)
            throw new NullPointerException();
        if (c == this)
            throw new IllegalArgumentException();
        int n = 0;
        for (E e; (e = poll()) != null;) {
            c.add(e);
            ++n;
        }
        return n;
    }

    /**
     * @throws UnsupportedOperationException {@inheritDoc}
     * @throws ClassCastException {@inheritDoc}
     * @throws NullPointerException {@inheritDoc}
     * @throws IllegalArgumentException {@inheritDoc}
     */
    public int drainTo(Collection<? super E> c, int maxElements) {
        if (c == null)
            throw new NullPointerException();
        if (c == this)
            throw new IllegalArgumentException();
        int n = 0;
        for (E e; n < maxElements && (e = poll()) != null;) {
            c.add(e);
            ++n;
        }
        return n;
    }

    /*
     * To cope with the serialization strategy in the 1.5 version of
     * SynchronousQueue, we declare some unused classes and fields
     * that exist solely to enable serializability across versions.
     * These fields are never used, so are initialized only if this
     * object is ever serialized or deserialized.
     */

    @SuppressWarnings("serial")
    static class WaitQueue implements java.io.Serializable { }
    static class LifoWaitQueue extends WaitQueue {
        private static final long serialVersionUID = -3633113410248163686L;
    }
    static class FifoWaitQueue extends WaitQueue {
        private static final long serialVersionUID = -3623113410248163686L;
    }
    private ReentrantLock qlock;
    private WaitQueue waitingProducers;
    private WaitQueue waitingConsumers;

    /**
     * Saves this queue to a stream (that is, serializes it).
     */
    private void writeObject(java.io.ObjectOutputStream s)
        throws java.io.IOException {
        boolean fair = transferer instanceof TransferQueue;
        if (fair) {
            qlock = new ReentrantLock(true);
            waitingProducers = new FifoWaitQueue();
            waitingConsumers = new FifoWaitQueue();
        }
        else {
            qlock = new ReentrantLock();
            waitingProducers = new LifoWaitQueue();
            waitingConsumers = new LifoWaitQueue();
        }
        s.defaultWriteObject();
    }

    /**
     * Reconstitutes this queue from a stream (that is, deserializes it).
     */
    private void readObject(final java.io.ObjectInputStream s)
        throws java.io.IOException, ClassNotFoundException {
        s.defaultReadObject();
        if (waitingProducers instanceof FifoWaitQueue)
            transferer = new TransferQueue<E>();
        else
            transferer = new TransferStack<E>();
    }

    // Unsafe mechanics
    static long objectFieldOffset(sun.misc.Unsafe UNSAFE,
                                  String field, Class<?> klazz) {
        try {
            return UNSAFE.objectFieldOffset(klazz.getDeclaredField(field));
        } catch (NoSuchFieldException e) {
            // Convert Exception to corresponding Error
            NoSuchFieldError error = new NoSuchFieldError(field);
            error.initCause(e);
            throw error;
        }
    }

}
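A brief usage sketch of the handoff semantics documented above (illustrative only, not part of this file; the class and method names `HandoffDemo`/`main` are mine). It shows the blocking rendezvous of `put`/`take`, plus the non-blocking `offer`/`poll` forms failing when no complementary thread is waiting, consistent with the zero-capacity contract described in the javadoc:

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>(true); // fair (FIFO) mode

        // Consumer blocks in take() until a producer arrives to hand off.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("received: " + q.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        q.put("hello");   // blocks until the consumer takes the element
        consumer.join();

        // With no waiting consumer/producer, the non-blocking forms fail fast,
        // and the Collection views always report an empty queue.
        System.out.println("offer: " + q.offer("x")); // false
        System.out.println("poll: " + q.poll());      // null
        System.out.println("size: " + q.size());      // 0
    }
}
```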