Lines Matching defs:to

7  * This software is available to you under a choice of one of two
8 * licenses. You may choose to be licensed under the terms of the GNU
79 * locks. We just need to free packets faster than they arrive, we
92 * has something to do only when the system experiences severe memory
99 * descriptors to be reclaimed by the TX timer.
106 * timer will attempt to refill it.
112 * this. We always want to have room for a maximum-sized packet:
130 * Max TX descriptor space we allow for an Ethernet packet to be
143 * Maximum amount of data which we'll ever need to inline into a
153 * in-line room in skb's to accommodate pulling in RX_PULL_LEN bytes
161 * fragments. Should be >= RX_PULL_LEN but possibly bigger to give
189 * SGE also uses the low 4 bits to determine the size of the buffer. It uses
190 * those bits to index into the SGE_FL_BUFFER_SIZE[index] register array.
192 * bits can only contain a 0 or a 1 to indicate which size buffer we're giving
193 * to the SGE. Thus, our software state of "is the buffer mapped for DMA" is
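
Since only buffer-size indices 0 and 1 are ever handed to the hardware, bit 1 of the descriptor's DMA-address word is free for driver bookkeeping. A minimal sketch of that encoding; the flag names, the rx_sw_desc layout, and the helpers below are assumptions inferred from the comments above:

    enum {
        RX_LARGE_BUF    = 1 << 0,   /* assumed: selects SGE_FL_BUFFER_SIZE[1] */
        RX_UNMAPPED_BUF = 1 << 1,   /* assumed: buffer is not DMA-mapped */
    };

    struct rx_sw_desc {             /* assumed software descriptor layout */
        struct page *page;
        dma_addr_t dma_addr;        /* low bits carry the flags above */
    };

    static inline dma_addr_t get_buf_addr(const struct rx_sw_desc *sdesc)
    {
        /* strip the size/state bits to recover the real DMA address */
        return sdesc->dma_addr & ~(dma_addr_t)(RX_LARGE_BUF | RX_UNMAPPED_BUF);
    }

    static inline bool is_buf_mapped(const struct rx_sw_desc *sdesc)
    {
        return !(sdesc->dma_addr & RX_UNMAPPED_BUF);
    }
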
203 * @sdesc: pointer to the software buffer descriptor
215 * @sdesc: pointer to the software buffer descriptor
256 * size because an Egress Queue Index Unit's worth of descriptors needs to
269 * Tests the specified Free List to see whether the number of buffers
270 * available to the hardware has fallen below our "starvation"
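
The test itself can be a one-liner; a sketch, assuming avail/pend_cred credit fields and a FL_STARVE_THRES constant (names are assumptions, not from this listing):

    static inline bool fl_starving(const struct sge_fl *fl)
    {
        /* pend_cred buffers are allocated but not yet visible to the HW */
        return fl->avail - fl->pend_cred <= FL_STARVE_THRES;
    }
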
279 * map_skb - map an skb for DMA to the device
281 * @skb: the packet to map
282 * @addr: a pointer to the base of the DMA mapping array
284 * Map an skb for DMA to the device and return an array of DMA addresses.
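
A sketch of such a routine using the standard Linux DMA API: map the linear header into addr[0] and each page fragment into addr[1..nr_frags], unwinding everything on failure. Only map_skb and the addr[] array convention come from the text above; the rest is an illustration:

    static int map_skb(struct device *dev, const struct sk_buff *skb,
                       dma_addr_t *addr)
    {
        const skb_frag_t *fp, *end;
        const struct skb_shared_info *si;

        *addr = dma_map_single(dev, skb->data, skb_headlen(skb), DMA_TO_DEVICE);
        if (dma_mapping_error(dev, *addr))
            goto out_err;

        si = skb_shinfo(skb);
        end = &si->frags[si->nr_frags];
        for (fp = si->frags; fp < end; fp++) {
            *++addr = skb_frag_dma_map(dev, fp, 0, skb_frag_size(fp),
                                       DMA_TO_DEVICE);
            if (dma_mapping_error(dev, *addr))
                goto unwind;
        }
        return 0;

    unwind:
        /* back out the fragment mappings made so far, then the header */
        while (fp-- > si->frags)
            dma_unmap_page(dev, *--addr, skb_frag_size(fp), DMA_TO_DEVICE);
        dma_unmap_single(dev, addr[-1], skb_headlen(skb), DMA_TO_DEVICE);
    out_err:
        return -ENOMEM;
    }
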
379 * @tq: the TX queue to reclaim descriptors from
380 * @n: the number of descriptors to reclaim
398 * If we kept a reference to the original TX skb, we need to
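
A sketch of the reclaim loop this describes, assuming a tx_sw_desc holding the skb pointer and an unmap_sgl() helper for the actual PCI DMA unmapping (both assumptions here):

    static void free_tx_desc(struct adapter *adapter, struct sge_txq *tq,
                             unsigned int n, bool unmap)
    {
        struct tx_sw_desc *sdesc = &tq->sdesc[tq->cidx];
        unsigned int cidx = tq->cidx;
        struct device *dev = adapter->pdev_dev;

        while (n--) {
            /*
             * If we kept a reference to the original TX skb, unmap it
             * from PCI DMA space (if required) and free it.
             */
            if (sdesc->skb) {
                if (unmap)
                    unmap_sgl(dev, sdesc->skb, sdesc->sgl, tq);
                dev_consume_skb_any(sdesc->skb);
                sdesc->skb = NULL;
            }

            sdesc++;
            if (++cidx == tq->size) {   /* wrap the consumer index */
                cidx = 0;
                sdesc = tq->sdesc;
            }
        }
        tq->cidx = cidx;
    }
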
432 * @tq: the TX queue to reclaim completed descriptors from
447 * Limit the amount of cleanup work we do at a time to keep
460 * @sdesc: pointer to the software buffer descriptor
472 * @fl: the SGE Free List to free buffers from
473 * @n: how many buffers to free
476 * buffers must be made inaccessible to hardware before calling this
501 * buffer must be made inaccessible to HW before calling this function.
503 * This is similar to free_rx_bufs() above but does not free the buffer.
504 * Do note that the FL still loses any further access to the buffer.
505 * This is used predominantly to "transfer ownership" of an FL buffer
506 * to another entity (typically an skb's fragment list).
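
A sketch of that ownership transfer, reusing the assumed get_buf_addr()/is_buf_mapped() helpers from above plus an assumed get_buf_size(); the page is unmapped and forgotten by the Free List, but deliberately not freed:

    static void unmap_rx_buf(struct adapter *adapter, struct sge_fl *fl)
    {
        struct rx_sw_desc *sdesc = &fl->sdesc[fl->cidx];

        if (is_buf_mapped(sdesc))
            dma_unmap_page(adapter->pdev_dev, get_buf_addr(sdesc),
                           get_buf_size(sdesc), DMA_FROM_DEVICE);
        sdesc->page = NULL;             /* FL loses all access to the page */
        if (++fl->cidx == fl->size)
            fl->cidx = 0;
        fl->avail--;
    }
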
548 * @sdesc: pointer to the software RX buffer descriptor
549 * @page: pointer to the page data structure backing the RX buffer
574 * @fl: the Free List ring to refill
575 * @n: the number of new buffers to allocate
578 * (Re)populate an SGE free-buffer queue with up to @n new packet buffers,
596 * won't result in wrapping the SGE's Producer Index around to
602 * If we support large pages, prefer large buffers and fail over to
603 * small pages if we can't allocate large pages to satisfy the refill.
615 * We've failed in our attempt to allocate a "large
616 * page". Fail over to the "small page" allocation
630 * buffer and return with what we've managed to put
631 * into the free list. We don't want to fail over to
684 * Update our accounting state to incorporate the new Free List
686 * buffers which we were able to allocate.
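
The tail of the refill path might then look like this sketch, assuming the avail/pend_cred credit accounting, a ring_fl_db() doorbell helper, and a starving_fl bitmap polled by the RX timer (all assumed names):

    /*
     * Update our accounting state to incorporate the new Free List
     * buffers, tell the hardware about them and return the number of
     * buffers we were able to allocate.
     */
    cred = fl->avail - cred;
    fl->pend_cred += cred;
    ring_fl_db(adapter, fl);            /* hand the new credits to the SGE */

    if (unlikely(fl_starving(fl))) {
        smp_wmb();
        set_bit(fl->cntxt_id, adapter->sge.starving_fl);
    }

    return cred;
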
701 * Refill a Free List to its capacity or the Maximum Refill Increment,
745 * pointer to it in *swringp.
781 * boundaries). If N is even, then Length[N+1] should be set to 0 and
785 * somewhat hard to follow but, briefly: the "+2" accounts for the
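
Worked through, the flit count for an N-entry SGL reduces to a small closed form; a sketch (the helper name sgl_len is an assumption). The "+2" covers the SGL header flit plus Length[0]/Address[0]; each full {Length[i], Length[i+1], Address[i], Address[i+1]} pair costs 3 flits; a trailing odd entry (with Length[N+1] zeroed) adds 2 flits, which the truncating division plus the (n & 1) term account for together:

    static inline unsigned int sgl_len(unsigned int n)
    {
        n--;                            /* first entry lives in the header flits */
        return (3 * n) / 2 + (n & 1) + 2;
    }

For example, a 3-entry list costs (3 * 2) / 2 + 0 + 2 = 5 flits; a 4-entry list costs (3 * 3) / 2 + 1 + 2 = 7.
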
813 * Returns whether an Ethernet packet is small enough to fit completely as
822 * too much if we ever want to enhance the firmware. It would also
841 * with only immediate data. In that case we just have to have the
849 * Otherwise, we're going to have to construct a Scatter/Gather List
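
A sketch of the resulting flit calculation, assuming an is_eth_imm() predicate and the firmware message structures named below (the structure names follow Chelsio's T4 firmware API, but treat them as assumptions as far as this listing goes):

    static inline unsigned int calc_tx_flits(const struct sk_buff *skb)
    {
        unsigned int flits;

        /* Small packets: TX Packet header plus the packet data, inline. */
        if (is_eth_imm(skb))
            return DIV_ROUND_UP(skb->len + sizeof(struct cpl_tx_pkt),
                                sizeof(__be64));

        /*
         * Otherwise: an SGL covering the skb body and fragments, plus the
         * firmware Work Request header and the TX Packet CPL message
         * (with an LSO CPL in front of it when doing Large Send Offload).
         */
        flits = sgl_len(skb_shinfo(skb)->nr_frags + 1);
        if (skb_shinfo(skb)->gso_size)
            flits += (sizeof(struct fw_eth_tx_pkt_vm_wr) +
                      sizeof(struct cpl_tx_pkt_lso_core) +
                      sizeof(struct cpl_tx_pkt_core)) / sizeof(__be64);
        else
            flits += (sizeof(struct fw_eth_tx_pkt_vm_wr) +
                      sizeof(struct cpl_tx_pkt_core)) / sizeof(__be64);
        return flits;
    }
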
874 * @start: start offset into skb main-body data to include in the SGL
890 struct ulptx_sge_pair *to;
914 to = (u8 *)end > (u8 *)tq->stat ? buf : sgl->sge;
916 for (i = (nfrags != si->nr_frags); nfrags >= 2; nfrags -= 2, to++) {
917 to->len[0] = cpu_to_be32(skb_frag_size(&si->frags[i]));
918 to->len[1] = cpu_to_be32(skb_frag_size(&si->frags[++i]));
919 to->addr[0] = cpu_to_be64(addr[i]);
920 to->addr[1] = cpu_to_be64(addr[++i]);
923 to->len[0] = cpu_to_be32(skb_frag_size(&si->frags[i]));
924 to->len[1] = cpu_to_be32(0);
925 to->addr[0] = cpu_to_be64(addr[i + 1]);
936 if ((uintptr_t)end & 8) /* 0-pad to multiple of 16 */
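
The matching lines above omit the loop's closing brace and the if that guards the odd-fragment tail, which makes the indexing hard to follow in isolation. Reconstructed approximately (buf is evidently a bounce buffer used when the SGL would run past the queue's status page at tq->stat), with the deliberate off-by-one between frags[] and addr[] called out:

    to = (u8 *)end > (u8 *)tq->stat ? buf : sgl->sge;

    for (i = (nfrags != si->nr_frags); nfrags >= 2; nfrags -= 2, to++) {
        to->len[0] = cpu_to_be32(skb_frag_size(&si->frags[i]));
        to->len[1] = cpu_to_be32(skb_frag_size(&si->frags[++i]));
        /* addr[0] maps the skb main body, so fragment i lives at
         * addr[i + 1]; the ++i side effects above keep addr[] reads
         * exactly one slot behind the frags[] reads */
        to->addr[0] = cpu_to_be64(addr[i]);
        to->addr[1] = cpu_to_be64(addr[++i]);
    }
    if (nfrags) {                       /* one odd fragment left over */
        to->len[0] = cpu_to_be32(skb_frag_size(&si->frags[i]));
        to->len[1] = cpu_to_be32(0);    /* Length[N+1] = 0, per the rule above */
        to->addr[0] = cpu_to_be64(addr[i + 1]);
    }
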
944 * @n: number of new descriptors to give to HW
965 * @pos: starting position in the TX queue to inline the packet
970 * in the middle of the packet we want to inline.
990 /* 0-pad to multiple of 16 */
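
A sketch of the inlining helper, assuming tq->stat marks the end of the usable descriptor ring (so a packet crossing it wraps to tq->desc):

    static void inline_tx_skb(const struct sk_buff *skb,
                              const struct sge_txq *tq, void *pos)
    {
        u64 *p;
        int left = (void *)tq->stat - pos;

        if (likely(skb->len <= left)) {
            skb_copy_bits(skb, 0, pos, skb->len);
            pos += skb->len;
        } else {
            /* We hit the end of the queue in the middle of the packet we
             * want to inline: copy the remainder to the ring's start. */
            skb_copy_bits(skb, 0, pos, left);
            skb_copy_bits(skb, left, (void *)tq->desc, skb->len - left);
            pos = (void *)tq->desc + (skb->len - left);
        }

        /* 0-pad to multiple of 16 */
        p = PTR_ALIGN(pos, 8);
        if ((uintptr_t)p & 8)
            *p = 0;
    }
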
1066 * t4vf_eth_xmit - add a packet to an Ethernet TX queue
1070 * Add a packet to an SGE Ethernet TX queue. Runs with softirqs disabled.
1100 * Figure out which TX Queue we're going to use.
1109 * Take this opportunity to reclaim any TX Descriptors whose DMA
1115 * Calculate the number of flits and TX Descriptors we're going to
1140 * We need to map the skb into PCI DMA space (because it can't
1166 * maximum header size ever exceeds one TX Descriptor, we'll need to
1240 * If there's a VLAN tag present, add that to the list of things to
1278 * message and retain a pointer to the skb so we can free it
1280 * in the Software Descriptor corresponding to the last TX
1286 * the hardware is set up to be lazy about sending DMA
1287 * completion notifications to us and we mostly perform TX
1291 * TX packets arriving to run the destructors of completed
1293 * Sometimes we do not get such new packets causing TX to
1297 * (nor do we want it to) to prevent lengthy stalls. A
1298 * solution to this problem is to run the destructor early,
1300 * that we lie to socket memory accounting, but the amount of
1304 * wait for ACKs to really free up the data, the extra memory
1310 * packet to make sure it doesn't complete and get freed
1321 * ring. If that's the case, wrap around to the beginning
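
Advancing the producer index with the wrap handled explicitly is the usual pattern; a small sketch with assumed field names:

    static inline void txq_advance(struct sge_txq *tq, unsigned int n)
    {
        tq->in_use += n;
        tq->pidx += n;
        if (tq->pidx >= tq->size)       /* wrap around to the beginning */
            tq->pidx -= tq->size;
    }
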
1383 /* get a reference to the last page, we don't own it */
1391 * @pull_len: amount of data to move to the sk_buff's main body
1404 * with enough room to pull in the header and reference the rest of
1409 * PAGE_SIZE'd. In this case packets up to RX_COPY_THRES have only one
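
A sketch of the two cases, assuming a pkt_gl gather list with va/tot_len fields and a copy_frags() helper that attaches the gather list's pages as skb fragments (assumptions beyond what the comments above state):

    static struct sk_buff *t4vf_pktgl_to_skb(const struct pkt_gl *gl,
                                             unsigned int skb_len,
                                             unsigned int pull_len)
    {
        struct sk_buff *skb;

        if (gl->tot_len <= RX_COPY_THRES) {
            /* small packet: single fragment, just copy it all */
            skb = alloc_skb(gl->tot_len, GFP_ATOMIC);
            if (unlikely(!skb))
                goto out;
            __skb_put(skb, gl->tot_len);
            skb_copy_to_linear_data(skb, gl->va, gl->tot_len);
        } else {
            /* pull the header into the main body, reference the rest */
            skb = alloc_skb(skb_len, GFP_ATOMIC);
            if (unlikely(!skb))
                goto out;
            __skb_put(skb, pull_len);
            skb_copy_to_linear_data(skb, gl->va, pull_len);

            copy_frags(skb, gl, pull_len);
            skb->len = gl->tot_len;
            skb->data_len = skb->len - pull_len;
            skb->truesize += skb->data_len;
        }
    out:
        return skb;
    }
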
1499 * Process an ingress Ethernet packet and deliver it to the stack.
1579 * there's no effort to make this suspension/resumption process
1585 * unmapped in order to prevent further unmapping attempts. (Effectively
1587 * to create the current packet's gather list.) This leaves us ready to
1609 * rspq_next - advance to the next entry in a response queue
1612 * Updates the state of a response queue to advance it to the next entry.
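
A sketch, assuming the queue tracks its current descriptor pointer, a consumer index and a generation bit that flips on each wrap:

    static inline void rspq_next(struct sge_rspq *rspq)
    {
        rspq->cur_desc = (void *)rspq->cur_desc + rspq->iqe_len;
        if (unlikely(++rspq->cidx == rspq->size)) {
            rspq->cidx = 0;
            rspq->gen ^= 1;             /* generation bit flips on wrap */
            rspq->cur_desc = rspq->desc;
        }
    }
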
1626 * @rspq: the ingress response queue to process
1629 * Process responses from a Scatter Gather Engine response queue up to
1635 * long delay to help recovery.
1665 * need to move on to the next Free List buffer.
1670 * first start up a queue so we need to ignore
1713 * Hand the new ingress packet to the handler for
1828 * error and go on to the next response message. This should
1841 * sanity checking to make sure it really refers to one of our
1844 * want to either make them fatal and/or conditionalize them under
1862 "Ingress QID %d refers to RSPQ %d\n",
1869 * and move on to the next entry in the Forwarded Interrupt
1918 * Runs periodically from a timer to perform maintenance of SGE RX queues.
1920 * a) Replenishes RX queues that have run out due to memory shortage.
1922 * when out of memory a queue can become empty. We schedule NAPI to do
1935 * to refill it. If we're successful in adding enough buffers to push
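
The replenish half of the timer might look like this sketch, assuming the starving_fl bitmap set by the refill path, an egr_map[] from context IDs to queues, and per-queue NAPI instances (all assumed names):

    static void sge_rx_timer_cb(struct timer_list *t)
    {
        struct adapter *adapter = from_timer(adapter, t, sge.rx_timer);
        struct sge *s = &adapter->sge;
        unsigned int i;

        for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++) {
            unsigned long m;

            for (m = s->starving_fl[i]; m; m &= m - 1) {
                unsigned int id = __ffs(m) + i * BITS_PER_LONG;
                struct sge_fl *fl = s->egr_map[id];

                clear_bit(id, s->starving_fl);
                smp_mb__after_atomic();

                /* if the FL is still starving, let NAPI refill it */
                if (fl_starving(fl)) {
                    struct sge_eth_rxq *rxq =
                        container_of(fl, struct sge_eth_rxq, fl);

                    if (napi_reschedule(&rxq->rspq.napi))
                        fl->starving++;
                    else
                        set_bit(id, s->starving_fl);
                }
            }
        }
        mod_timer(&s->rx_timer, jiffies + RX_QCHECK_PERIOD);
    }
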
1977 * Runs periodically from a timer to perform maintenance of SGE TX queues.
2018 * near future to continue where we left off. Otherwise the next timer
2027 * @rspq: pointer to the new rxq's Response Queue to be filled in
2031 * @fl: pointer to the new rxq's Free List to be filled in
2032 * @hnd: the interrupt handler to invoke for the rspq
2046 * indirect interrupts to the Forwarded Interrupt Queue. Obviously
2058 * to be a multiple of 16 which includes the mandatory status entry
2071 * on our Linux SGE state that we would end up having to pass tons of
2072 * parameters. We'll have to think about how this might be migrated
2103 * descriptor ring. The free list size needs to be a multiple
2160 /* set offset to -1 to distinguish ingress queues without FL */
2200 * @txq: pointer to the new txq to be filled in
2202 * @iqid: the relative ingress queue ID to which events relating to
2234 * have to see if there's some reasonable way to parameterize it
2386 * this is effective only if measures have been taken to disable any HW
2417 * the Physical Function Driver. Ideally we should be able to deal