History log of /net/sched/sch_tbf.c
Revision Date Author Comments
f2600cf02b5b59aaee082c3485b7f01fc7f7b70c 04-Oct-2014 Eric Dumazet <edumazet@google.com> net: sched: avoid costly atomic operation in fq_dequeue()

The standard qdisc API for setting up a timer implies an atomic
operation on every packet dequeue: qdisc_unthrottled()

It turns out this is not really needed for FQ, as FQ has no concept of
global qdisc throttling: being a qdisc handling many different flows,
some flows can be throttled while others are not.

The fix is straightforward: add a 'bool throttle' parameter to
qdisc_watchdog_schedule_ns() and remove the calls to
qdisc_unthrottled() in sch_fq.
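
A minimal sketch of the changed watchdog function, assuming details
beyond what the message states (the deactivated-qdisc check and exact
hrtimer mode are omitted here):

    void qdisc_watchdog_schedule_ns(struct qdisc_watchdog *wd, u64 expires,
                                    bool throttle)
    {
            /* callers like FQ pass throttle == false and skip the
             * atomic throttled-bit update on every dequeue */
            if (throttle)
                    qdisc_throttled(wd->qdisc);

            hrtimer_start(&wd->timer, ns_to_ktime(expires),
                          HRTIMER_MODE_ABS);
    }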

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
25331d6ce42bcf4b34b6705fce4da15c3fabe62f 28-Sep-2014 John Fastabend <john.fastabend@gmail.com> net: sched: implement qstat helper routines

This adds helper routines to manipulate the qstats counters and
replaces code that touched the counters directly. This simplifies
future patches that push qstats onto per-CPU counters.
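
The helpers follow roughly this shape (a sketch; the exact helper
names, e.g. qdisc_qstats_drop() and qdisc_qstats_backlog_inc(), are
assumed here):

    /* wrap direct qstats counter access behind small helpers, so a
     * later patch can redirect them to per-CPU counters */
    static inline void qdisc_qstats_drop(struct Qdisc *sch)
    {
            sch->qstats.drops++;
    }

    static inline void qdisc_qstats_backlog_inc(struct Qdisc *sch,
                                                const struct sk_buff *skb)
    {
            sch->qstats.backlog += qdisc_pkt_len(skb);
    }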

Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
d2de875c6d4cbec8a99c880160181a3ed5b9992e 23-Aug-2014 Eric Dumazet <edumazet@google.com> net: use ktime_get_ns() and ktime_get_real_ns() helpers

ktime_get_ns() replaces ktime_to_ns(ktime_get())

ktime_get_real_ns() replaces ktime_to_ns(ktime_get_real())

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
d59b7d8059ddc4f9ac1f0904d28ea62a252e8de7 12-Mar-2014 Yang Yingliang <yangyingliang@huawei.com> net_sched: return nla_nest_end() instead of skb->len

nla_nest_end() already returns skb->len, so return nla_nest_end()
directly instead of 'return skb->len'.
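
The pattern, illustrated on a typical dump function:

    /* before */
    nla_nest_end(skb, nest);
    return skb->len;

    /* after */
    return nla_nest_end(skb, nest);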

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
a135e598c463baf9497b84e1e92f9a8f96d3521c 02-Mar-2014 Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> sch_tbf: Remove holes in struct tbf_sched_data.

On x86_64 we have 3 holes in struct tbf_sched_data.

The member peak_present can be replaced with peak.rate_bytes_ps,
because peak.rate_bytes_ps is set only when peak is specified in
tbf_change(). tbf_peak_present() is introduced to test
peak.rate_bytes_ps.

The member max_size is moved to fill a 32bit hole.
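
A sketch of the new test helper, assuming the one-line body implied by
the description:

    static bool tbf_peak_present(const struct tbf_sched_data *q)
    {
            /* nonzero only when a peak rate was configured in tbf_change() */
            return q->peak.rate_bytes_ps;
    }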

Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
724b9e1d75ab3401aaa081bd4efb440c1b3509db 26-Feb-2014 Hiroaki SHIMODA <shimoda.hiroaki@gmail.com> sch_tbf: Fix potential memory leak in tbf_change().

The allocated child qdisc is not freed in error conditions. Defer the
allocation until the user configuration turns out to be valid and
acceptable.

Fixes: cc106e441a63b ("net: sched: tbf: fix the calculation of max_size")
Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
Cc: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
de960aa9ab4decc3304959f69533eef64d05d8e8 26-Jan-2014 Florian Westphal <fw@strlen.de> net: add and use skb_gso_transport_seglen()

This moves part of Eric Dumazet's skb_gso_seglen helper from the tbf
scheduler to the skbuff core so it may be reused by an upcoming ip
forwarding path patch.

Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2e04ad424b03661ec8239acd52146497eb33be1c 20-Dec-2013 Yang Yingliang <yangyingliang@huawei.com> sch_tbf: add TBF_BURST/TBF_PBURST attribute

When userspace sets burst to 1514 with a low rate, the kernel ends up
with a burst smaller than 1514, which doesn't work.

The reason is that converting burst to buffer time in userspace is
lossy, so the burst loses some bytes when the kernel converts the
buffer back to burst.

This patch adds two new attributes so burst/mtu can be sent to the
kernel directly, avoiding the loss.
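
A sketch of how tbf_change() might consume the new attribute
(attribute name TCA_TBF_BURST and surrounding variables assumed):

    /* prefer the exact burst from userspace when provided, otherwise
     * recover it (lossily) from the buffer time as before */
    if (tb[TCA_TBF_BURST])
            max_size = nla_get_u32(tb[TCA_TBF_BURST]);
    else
            max_size = min_t(u64, psched_ns_t2l(&rate, buffer), ~0U);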

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
d55d282e6af88120ad90e93a88f70e3116dc0e3d 12-Dec-2013 Yang Yingliang <yangyingliang@huawei.com> sch_tbf: use do_div() for 64-bit divide

psched_ns_t2l() is doing a 64-bit divide, which is not supported on
32-bit architectures. The correct way to do this is to use do_div().

The problem was introduced by commit cc106e441a63
("net: sched: tbf: fix the calculation of max_size").

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
cc106e441a63bec3b1cb72948df82ea15945c449 10-Dec-2013 Yang Yingliang <yangyingliang@huawei.com> net: sched: tbf: fix the calculation of max_size

Currently max_size is calculated from the rate table. Now that the
rate table has been replaced, it is wrong to calculate max_size based
on it, and doing so can produce a wrong max_size.

The burst in the kernel may be lower than the user asked for, because
the burst may lose precision when transformed to a buffer time (e.g.
"burst 40kb rate 30mbit/s"), and it seems this loss cannot be avoided.
A burst value (max_size) based on the rate table may equal what the
user asked for. If a packet's length equals that max_size, the packet
stalls in tbf_dequeue() because its length is above the kernel's
burst, so it can never get enough tokens. max_size guards against
enqueuing packet sizes above q->buffer "time" in tbf_enqueue().

To stay consistent with the token calculation, this patch adds a
helper psched_ns_t2l() that calculates the burst (max_size) directly,
fixing the problem.

After this fix, 64bit rates can be used to calculate the burst as
well.
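
With the helper in place, tbf_change() can derive max_size straight
from the configured buffer time; a sketch (variable names assumed):

    /* burst (max_size) computed directly from q->buffer, consistent
     * with how tokens are accounted */
    max_size = min_t(u64, psched_ns_t2l(&rate, buffer), ~0U);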

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4d0820cf6a55d72350cb2d24a4504f62fbde95d9 23-Nov-2013 Eric Dumazet <edumazet@google.com> sch_tbf: handle too small burst

If a too small burst is inadvertently set on TBF, we might trigger
a bug in tbf_segment(), as 'skb' instead of 'segs' was used in a
qdisc_reshape_fail() call.

tc qdisc add dev eth0 root handle 1: tbf latency 50ms burst 1KB rate 50mbit

Fix the bug, and add a warning, as such a configuration is not going
to work anyway for non-GSO packets.

(For some reason, one has to use a burst >= 1520 to get a working
configuration, even with old kernels. This is a probable iproute2/tc
bug)

Based on a report and initial patch from Yang Yingliang

Fixes: e43ac79a4bc6 ("sch_tbf: segment too big GSO packets")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
a33c4a2663c19ac01e557d6b78806271eec2a150 08-Nov-2013 Yang Yingliang <yangyingliang@huawei.com> net_sched: tbf: support of 64bit rates

With psched_ratecfg_precompute(), tbf can deal with 64bit rates.
Add two new attributes so that tc can use them to break the 32bit
limit.
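
A sketch of reading the new attribute in tbf_change() (attribute names
TCA_TBF_RATE64/TCA_TBF_PRATE64 assumed from the subject):

    u64 rate64 = 0;

    if (tb[TCA_TBF_RATE64])
            rate64 = nla_get_u64(tb[TCA_TBF_RATE64]);
    psched_ratecfg_precompute(&rate, &qopt->rate, rate64);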

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Suggested-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3e1e3aae1f5d4e8e5edb7e332f6e265597cc5b0a 19-Sep-2013 Eric Dumazet <edumazet@google.com> net_sched: add u64 rate to psched_ratecfg_precompute()

Add an extra u64 rate parameter to psched_ratecfg_precompute() so that
some qdiscs can opt in to 64bit rates in the future, to overcome the
~34 Gbit limit.

psched_ratecfg_getrate() reports a legacy structure to the tc utility,
so if the actual rate is above what the 32bit rate field can hold, cap
it to the 34Gbit limit.
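
The cap works out to 2^32 - 1 bytes/s, i.e. roughly 34 Gbit/s; a
sketch of the capping line (field names assumed):

    /* report at most what the legacy 32bit field can carry */
    res->rate = min_t(u64, r->rate_bytes_ps, ~0U);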

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
01cb71d2d47b78354358e4bb938bb06323e17498 02-Jun-2013 Eric Dumazet <edumazet@google.com> net_sched: restore "overhead xxx" handling

commit 56b765b79 ("htb: improved accuracy at high rates")
broke the "overhead xxx" handling, as well as the "linklayer atm"
attribute.

tc class add ... htb rate X ceil Y linklayer atm overhead 10

This patch restores the "overhead xxx" handling for htb, tbf and
act_police.

The "linklayer atm" thing needs a separate fix.

Reported-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Vimalkumar <j.vimal@gmail.com>
Cc: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e43ac79a4bc6ca90de4ba10983b4ca39cd215b4b 21-May-2013 Eric Dumazet <edumazet@google.com> sch_tbf: segment too big GSO packets

If a GSO packet has a length above tbf burst limit, the packet
is currently silently dropped.

The current way to handle this is to set the device in non-GSO/TSO
mode or to set high bursts, which is suboptimal.

We can instead segment too-big GSO packets and send the individual
segments as the tbf parameters allow, allowing for better
interoperability.
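
A sketch of the segmentation path (simplified: stats, qlen bookkeeping
and most error handling are omitted):

    static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch)
    {
            struct tbf_sched_data *q = qdisc_priv(sch);
            struct sk_buff *segs, *nskb;

            /* split the oversized GSO packet into MTU-sized segments */
            segs = skb_gso_segment(skb, netif_skb_features(skb) &
                                        ~NETIF_F_GSO_MASK);
            if (IS_ERR_OR_NULL(segs))
                    return qdisc_reshape_fail(skb, sch);

            while (segs) {
                    nskb = segs->next;
                    segs->next = NULL;
                    qdisc_enqueue(segs, q->qdisc);  /* each fits the burst */
                    segs = nskb;
            }
            consume_skb(skb);
            return NET_XMIT_SUCCESS;
    }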

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
b757c9336d63f94c6b57532bb4e8651d8b28786f 12-Feb-2013 Jiri Pirko <jiri@resnulli.us> tbf: improved accuracy at high rates

Current TBF uses a rate table computed by the "tc" userspace program,
which has the following issue:

The rate table has 256 entries to map packet lengths to tokens (time
units). With TSO-sized packets, the 256-entry granularity leads to
loss/gain of rate, making the token bucket inaccurate.

Thus, instead of relying on the rate table, this patch explicitly
computes the time and accounts for packet transmission times with
nanosecond granularity.

This is a followup to 56b765b79e9a78dc7d3f8850ba5e5567205a3ecd
("htb: improved accuracy at high rates").

Signed-off-by: Jiri Pirko <jiri@resnulli.us>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
1b34ec43c9b3de44a5420841ab293d1b2035a94c 29-Mar-2012 David S. Miller <davem@davemloft.net> pkt_sched: Stop using NLA_PUT*().

These macros contain a hidden goto, and are thus extremely error
prone and make code hard to audit.
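
The problem and the fix, illustrated on a TBF dump attribute (sketch):

    /* before: NLA_PUT() hides a 'goto nla_put_failure' in a macro */
    NLA_PUT(skb, TCA_TBF_PARMS, sizeof(opt), &opt);

    /* after: the control flow is explicit and auditable */
    if (nla_put(skb, TCA_TBF_PARMS, sizeof(opt), &opt))
            goto nla_put_failure;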

Signed-off-by: David S. Miller <davem@davemloft.net>
b0460e4484f9e990caa817230204f9e3800cf955 29-Dec-2011 Eric Dumazet <eric.dumazet@gmail.com> sch_tbf: report backlog information

Provide child qdisc backlog (byte count) information so that
"tc -s qdisc" can report it to the user.

qdisc netem 30: root refcnt 18 limit 1000 delay 20.0ms 10.0ms
Sent 948517 bytes 898 pkt (dropped 0, overlimits 0 requeues 1)
rate 175056bit 16pps backlog 114b 1p requeues 1
qdisc tbf 40: parent 30: rate 256000bit burst 20Kb/8 mpu 0b lat 0us
Sent 948517 bytes 898 pkt (dropped 15, overlimits 611 requeues 0)
backlog 18168b 12p requeues 0

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
9190b3b3208d052d98cb601fcc192f3f71a5658b 21-Jan-2011 Eric Dumazet <eric.dumazet@gmail.com> net_sched: accurate bytes/packets stats/rates

In commit 44b8288308ac9d (net_sched: pfifo_head_drop problem), we
fixed a problem with pfifo_head drops that incorrectly decreased
sch->bstats.bytes and sch->bstats.packets.

Several qdiscs (CHOKe, SFQ, pfifo_head, ...) are able to drop a
previously enqueued packet, and bstats cannot be adjusted for that, so
bstats/rates are not accurate (overestimated).

This patch changes the qdisc_bstats updates to be done at dequeue()
time instead of enqueue() time. bstats counters no longer account for
dropped frames, and rates are more correct, since enqueue() bursts
don't affect the dequeue() rate.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
fd245a4adb5288eac37250875f237c40a20a1944 20-Jan-2011 Eric Dumazet <eric.dumazet@gmail.com> net_sched: move TCQ_F_THROTTLED flag

In commit 371121057607e (net: QDISC_STATE_RUNNING dont need atomic bit
ops) I moved the QDISC_STATE_RUNNING flag to the __state container,
located in the cache line containing the qdisc lock and often-dirtied
fields.

I now move the TCQ_F_THROTTLED bit too, so that the first cache line
stays read-mostly and can be shared by all cpus. This should speed up
HTB/CBQ for example.

Not using test_bit()/__clear_bit()/__test_and_set_bit() allows using
an "unsigned int" for the __state container, reducing Qdisc size by
8 bytes.

Introduce helpers to hide the implementation details.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
CC: Patrick McHardy <kaber@trash.net>
CC: Jesper Dangaard Brouer <hawk@diku.dk>
CC: Jarek Poplawski <jarkao2@gmail.com>
CC: Jamal Hadi Salim <hadi@cyberus.ca>
CC: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
cc7ec456f82da7f89a5b376e613b3ac4311b3e9a 19-Jan-2011 Eric Dumazet <eric.dumazet@gmail.com> net_sched: cleanups

Clean up net/sched code to current CodingStyle and practices.

Reduce inline abuse.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
bfe0d0298f2a67d94d58c39ea904a999aeeb7c3c 09-Jan-2011 Eric Dumazet <eric.dumazet@gmail.com> net_sched: factorize qdisc stats handling

HTB takes into account that an skb may be segmented when updating
stats. Generalize this to all schedulers.

They should use qdisc_bstats_update() helper instead of manipulating
bstats.bytes and bstats.packets

Add bstats_update() helper too for classes that use
gnet_stats_basic_packed fields.

Note: right now, the TCQ_F_CAN_BYPASS shortcut can be taken only if no
stab is set up on the qdisc.
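
A sketch of the helper, counting a GSO skb as multiple packets the way
HTB did (body assumed):

    static inline void bstats_update(struct gnet_stats_basic_packed *bstats,
                                     const struct sk_buff *skb)
    {
            bstats->bytes += qdisc_pkt_len(skb);
            /* a GSO skb accounts for gso_segs packets, not one */
            bstats->packets += skb_is_gso(skb) ?
                               skb_shinfo(skb)->gso_segs : 1;
    }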

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
9871e50edd25e2adf69b369817100821cb1e6de8 10-Aug-2010 Ben Greear <greearb@candelatech.com> net: Use NET_XMIT_SUCCESS where possible.

This is based on work originally done by Patrick McHardy.

Signed-off-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
f0cd15081a72075df16c45a2310e873fb9fcd82f 14-May-2010 stephen hemminger <shemminger@vyatta.com> tbf: stop wanton destruction of children (v2)

Several netem users use TBF for rate control. But every time the TBF
parameters are changed, it destroys the child qdisc, requiring
reconfiguration. Better to keep the child qdisc and just notify it of
the changed limit.

Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
5b9a9ccfad8553dbf7a9b17ba78bad70215ed0e2 04-Sep-2009 Patrick McHardy <kaber@trash.net> net_sched: remove some unnecessary checks in classful schedulers

The class arguments to ->graft(), ->leaf(), ->dump() and
->dump_stats() all originate from either ->get() or ->walk() and are
always valid.

Remove unnecessary checks.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
de6d5cdf881353f83006d5f3e28ac4fffd42145e 04-Sep-2009 Patrick McHardy <kaber@trash.net> net_sched: make cls_ops->change and cls_ops->delete optional

Some schedulers don't support creating, changing or deleting classes.
Make the respective callbacks optional and consistently return
-EOPNOTSUPP for unsupported operations, instead of the current mix of
-EOPNOTSUPP, -ENOSYS or no error.

In case of sch_prio and sch_multiq, the removed operations additionally
checked for an invalid class. This is not necessary since the class
argument can only originate from ->get() or, in case of ->change, is 0
for creation of new classes, in which case ->change() incorrectly
returned -ENOENT.

As a side-effect, this patch fixes a possible (root-only) NULL pointer
function call in sch_ingress, which didn't implement a so far mandatory
->delete() operation.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
71ebe5e91947392bc276af713827eab12b6db8e4 04-Sep-2009 Patrick McHardy <kaber@trash.net> net_sched: make cls_ops->tcf_chain() optional

Some qdiscs don't support attaching filters. Handle this centrally in
cls_api and return a proper errno code (EOPNOTSUPP) instead of EINVAL.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
a0bffffc148cd8e75a48a89ad2ddb74e4081a20a 21-Mar-2009 Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> net/*: use linux/kernel.h swap()

tcp_sack_swap seems unnecessary, so I pushed swap to the caller.
Also removed a comment that seemed pointless afterwards, and added the
include where not already present. Compile tested.

Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
b94c8afcba3ae6584653b98e315446ea83be6ea5 20-Nov-2008 Patrick McHardy <kaber@trash.net> pkt_sched: remove unnecessary xchg() in packet schedulers

The use of xchg() hasn't been necessary since 2.2.something when proper
locking was added to packet schedulers.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
f30ab418a1d3c5a8b83493e7d70d6876a74aa0ce 14-Nov-2008 Jarek Poplawski <jarkao2@gmail.com> pkt_sched: Remove qdisc->ops->requeue() etc.

After implementing qdisc->ops->peek() and changing sch_netem into a
classless qdisc there are no more qdisc->ops->requeue() users. This
patch removes this method with its wrappers (qdisc_requeue()), and
also the unused qdisc->requeue structure. There are a few minor fixes
of warnings (htb_enqueue()) and comments along the way.

The idea to kill ->requeue() and a similar patch were first developed
by David S. Miller.

Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
77be155cba4e163e8bba9fd27222a8b6189ec4f7 31-Oct-2008 Jarek Poplawski <jarkao2@gmail.com> pkt_sched: Add peek emulation for non-work-conserving qdiscs.

This patch adds a qdisc_peek_dequeued() wrapper that emulates the peek
method with qdisc->dequeue(), storing the "peeked" skb in
qdisc->gso_skb until dequeuing. This is mainly for compatibility
reasons, to not break some strange configs, because peeking is
expected by non-work-conserving parent qdiscs to query
work-conserving child qdiscs.

This implementation requires using the qdisc_dequeue_peeked() wrapper
instead of directly calling qdisc->dequeue() for all qdiscs ever
queried with qdisc->ops->peek() or qdisc_peek_dequeued().
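
A sketch of the wrapper as described (qlen bookkeeping assumed):

    static inline struct sk_buff *qdisc_peek_dequeued(struct Qdisc *sch)
    {
            /* dequeue once and park the skb in gso_skb until it is
             * really dequeued via qdisc_dequeue_peeked() */
            if (!sch->gso_skb) {
                    sch->gso_skb = sch->dequeue(sch);
                    if (sch->gso_skb)
                            sch->q.qlen++;   /* still counted as queued */
            }
            return sch->gso_skb;
    }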

Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
03c05f0d4bb0c267edf12d614025a40e33c5a6f9 31-Oct-2008 Jarek Poplawski <jarkao2@gmail.com> pkt_sched: Use qdisc->ops->peek() instead of ->dequeue() & ->requeue()

Use qdisc->ops->peek() instead of ->dequeue() & ->requeue() pair.
After this patch the only remaining user of qdisc->ops->requeue() is
netem_enqueue(). Based on ideas of Herbert Xu, Patrick McHardy and
David S. Miller.

Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
69747650c814a8a79fef412c7416adf823293a3e 18-Aug-2008 David S. Miller <davem@davemloft.net> pkt_sched: Fix return value corruption in HTB and TBF.

Based upon a bug report by Josip Rodin.

Packet schedulers should return NET_XMIT_DROP only if the packet
really was dropped. If the packet does reach the device after we
return NET_XMIT_DROP, TCP can crash, because it depends on the enqueue
path return values being accurate.

Signed-off-by: David S. Miller <davem@davemloft.net>
378a2f090f7a478704a372a4869b8a9ac206234e 05-Aug-2008 Jarek Poplawski <jarkao2@gmail.com> net_sched: Add qdisc __NET_XMIT_STOLEN flag

Patrick McHardy <kaber@trash.net> noticed:
"The other problem that affects all qdiscs supporting actions is
TC_ACT_QUEUED/TC_ACT_STOLEN getting mapped to NET_XMIT_SUCCESS
even though the packet is not queued, corrupting upper qdiscs'
qlen counters."

and later explained:
"The reason why it translates it at all seems to be to not increase
the drops counter. Within a single qdisc this could be avoided by
other means easily, upper qdiscs would still increase the counter
when we return anything besides NET_XMIT_SUCCESS though.

This means we need a new NET_XMIT return value to indicate this to
the upper qdiscs. So I'd suggest to introduce NET_XMIT_STOLEN,
return that to upper qdiscs and translate it to NET_XMIT_SUCCESS
in dev_queue_xmit, similar to NET_XMIT_BYPASS."

David Miller <davem@davemloft.net> noticed:
"Maybe these NET_XMIT_* values being passed around should be a set of
bits. They could be composed of base meanings, combined with specific
attributes.

So you could say "NET_XMIT_DROP | __NET_XMIT_NO_DROP_COUNT"

The attributes get masked out by the top-level ->enqueue() caller,
such that the base meanings are the only thing that make their
way up into the stack. If it's only about communication within the
qdisc tree, let's simply code it that way."

This patch is trying to realize these ideas.
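
A sketch of the resulting bit layout (the exact values are assumed for
illustration):

    #define __NET_XMIT_STOLEN 0x00010000   /* attribute bit, qdisc-internal */
    #define __NET_XMIT_BYPASS 0x00020000

    /* upper qdiscs see the attribute; dev_queue_xmit() masks it out so
     * only the base NET_XMIT_* value reaches the stack */
    #define net_xmit_drop_count(e) ((e) & __NET_XMIT_STOLEN ? 0 : 1)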

Signed-off-by: Jarek Poplawski <jarkao2@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
0abf77e55a2459aa9905be4b226e4729d5b4f0cb 20-Jul-2008 Jussi Kivilinna <jussi.kivilinna@mbnet.fi> net_sched: Add accessor function for packet length for qdiscs

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
5f86173bdf15981ca49d0434f638b68f70a35644 20-Jul-2008 Jussi Kivilinna <jussi.kivilinna@mbnet.fi> net_sched: Add qdisc_enqueue wrapper

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
fb0305ce1b03f6ff17f84f2c63daccecb45f2805 06-Jul-2008 Patrick McHardy <kaber@trash.net> net-sched: consolidate default fifo qdisc setup

Signed-off-by: Patrick McHardy <kaber@trash.net>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
27a3421e4821734bc19496faa77b380605dc3b23 24-Jan-2008 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Use nla_policy for attribute validation in packet schedulers

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
4b3550ef530cfc153fa91f0b37cbda448bad11c6 24-Jan-2008 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Use nla_nest_start/nla_nest_end

Use nla_nest_start/nla_nest_end for dumping nested attributes.
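
The resulting dump pattern (a typical sketch):

    struct nlattr *nest;

    nest = nla_nest_start(skb, TCA_OPTIONS);
    if (nest == NULL)
            goto nla_put_failure;
    /* ... emit nested attributes ... */
    nla_nest_end(skb, nest);
    return skb->len;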

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
cee63723b358e594225e812d6e14a2a0abfd5c88 24-Jan-2008 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Propagate nla_parse return value

nla_parse() returns more detailed errno codes, propagate them back on
error.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
1e90474c377e92db7262a8968a45c1dd980ca9e5 23-Jan-2008 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Convert packet schedulers from rtnetlink to new netlink API

Convert packet schedulers to use the netlink API. Unfortunately a gradual
conversion is not possible without breaking compilation in the middle or
adding lots of casts, so this patch converts them all in one step. The
patch has been mostly generated automatically with some minor edits to
at least allow separate conversion of classifiers and actions.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
20fea08b5fb639c4c175b5c74a2bb346c5c5bc2e 14-Nov-2007 Eric Dumazet <dada1@cosmosbay.com> [NET]: Move Qdisc_class_ops and Qdisc_ops in appropriate sections.

Qdisc_class_ops are const, and Qdisc_ops are mostly read.

Using "const" and "__read_mostly" qualifiers helps to reduce false
sharing.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
e9bef55d3d062ee7a78fde2913ec87ca9305a1e0 12-Sep-2007 Jesper Dangaard Brouer <hawk@comx.dk> [NET_SCHED]: Cleanup L2T macros and handle oversized packets

Change L2T (length to time) macros, in all rate based schedulers, to
call a common function qdisc_l2t() that does the rate table lookup.
This function handles the case where the packet size lookup index is
larger than the rate table, which often occurs with TSO enabled.
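
A sketch of such a common lookup, extrapolating past the 256-entry
table for oversized packets (simplified; alignment and overhead
handling omitted):

    static inline u32 qdisc_l2t(struct qdisc_rate_table *rtab,
                                unsigned int pktlen)
    {
            int slot = pktlen >> rtab->rate.cell_log;

            if (slot > 255)
                    /* oversized: combine the top entry with the overflow */
                    return rtab->data[255] * (slot >> 8) +
                           rtab->data[slot & 0xFF];
            return rtab->data[slot];
    }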

Signed-off-by: Jesper Dangaard Brouer <hawk@comx.dk>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
c3bc7cff8fddb6ff9715be8bfc3d911378c4d69d 15-Jul-2007 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Kill CONFIG_NET_CLS_POLICE

The NET_CLS_ACT option is now a full replacement for NET_CLS_POLICE,
so remove the old code. The config option will be kept around for a
short time to select the equivalent NET_CLS_ACT options, to allow
easier upgrades.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
73ca4918fbb98311421259d82ef4ab44feeace43 15-Jul-2007 Patrick McHardy <kaber@trash.net> [NET_SCHED]: act_api: qdisc internal reclassify support

The behaviour of NET_CLS_POLICE for TC_POLICE_RECLASSIFY was to return
it to the qdisc, which could handle it internally or ignore it. With
NET_CLS_ACT however, tc_classify starts over at the first classifier
and never returns it to the qdisc. This makes it impossible to support
qdisc-internal reclassification, which in turn makes it impossible to
remove the old NET_CLS_POLICE code without breaking compatibility since
we have two qdiscs (CBQ and ATM) that support this.

This patch adds a tc_classify_compat function that handles
reclassification the old way and changes CBQ and ATM to use it.

This again is of course not fully backwards compatible with the previous
NET_CLS_ACT behaviour. Unfortunately there is no way to fully maintain
compatibility *and* support qdisc internal reclassification with
NET_CLS_ACT, but this seems like the better choice over keeping the two
incompatible options around forever.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
0ba48053831d5b89ee2afaefaae1c06eae80cb05 03-Jul-2007 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Remove unnecessary includes

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
3bebcda28077375470dd60545b71bba2f83335fd 23-Mar-2007 Patrick McHardy <kaber@trash.net> [NET_SCHED]: turn PSCHED_GET_TIME into inline function

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
03cc45c0a5b9b7f74768feb43b9a2525d203bbdb 23-Mar-2007 Patrick McHardy <kaber@trash.net> [NET_SCHED]: turn PSCHED_TDIFF_SAFE into inline function

Also rename to psched_tdiff_bounded.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
dc5fc579b90ed0a9a4e55b0218cdbaf0a8cf2e67 26-Mar-2007 Arnaldo Carvalho de Melo <acme@redhat.com> [NETLINK]: Use nlmsg_trim() where appropriate

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
27a884dc3cb63b93c2b3b643f5b31eed5f8a4d26 20-Apr-2007 Arnaldo Carvalho de Melo <acme@redhat.com> [SK_BUFF]: Convert skb->tail to sk_buff_data_t

So that it is also an offset from skb->head. This reduces its size
from 8 to 4 bytes on 64bit architectures, allowing us to combine it
with the 4-byte hole left by the layer headers conversion, reducing
struct sk_buff size to 256 bytes, i.e. 4 64-byte cachelines; and since
the sk_buff slab cache is SLAB_HWCACHE_ALIGN... :-)

Many calculations that previously required that skb->{transport,network,
mac}_header be first converted to a pointer now can be done directly, being
meaningful as offsets or pointers.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
f7f593e383145931cb2a65df62c31ce1bcc0cffc 16-Mar-2007 Patrick McHardy <kaber@trash.net> [NET_SCHED]: sch_tbf: use hrtimer based watchdog

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
10297b99315e5e08fe623ba56da35db1fee69ba9 09-Feb-2007 YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org> [NET] SCHED: Fix whitespace errors.

Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
e488eafcc50be296f0d1e1fd67c6b5d865183011 30-Nov-2006 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Fix endless loops (part 5): netem/tbf/hfsc ->requeue failures

When peeking at the next packet in a child qdisc by calling dequeue/requeue,
the upper qdisc qlen counter may get out of sync in case the requeue fails.
The qdisc and the child qdisc both have their counter decremented, but since
no packet is given to the upper qdisc it won't decrement its counter itself.

requeue should not fail, so this is mostly for "correctness".

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
5e50da01d0ce7ef0ba3ed6cfabd62f327da0aca6 30-Nov-2006 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Fix endless loops (part 2): "simple" qdiscs

Convert the "simple" qdiscs to use qdisc_tree_decrease_qlen() where
necessary:

- all graft operations
- destruction of old child qdiscs in prio, red and tbf change operation
- purging of queue in sfq change operation

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
9f9afec48221fe4a19f84a9341f5b304bf7d7783 30-Nov-2006 Patrick McHardy <kaber@trash.net> [NET_SCHED]: Set parent classid in default qdiscs

Set parent classids in default qdiscs to allow walking up the tree
from outside the qdiscs. This is needed by the next patch.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
6ab3d5624e172c553004ecc862bfeac16d9d68b7 30-Jun-2006 Jörn Engel <joern@wohnheim.fh-wedel.de> Remove obsolete #include <linux/config.h>

Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
053cfed75d9e01bda274c5b0126f5937181dcb62 21-Mar-2006 Patrick McHardy <kaber@trash.net> [PKT_SCHED]: Restore TBF change semantic

When TBF was converted to a classful qdisc, the semantics of the limit
parameter were broken. On initialization an inner bfifo qdisc is
created for backwards compatibility; when changing parameters,
however, the new limit is ignored and the current child qdisc remains
in place.

Always replace the child qdisc by the default bfifo when limit is
above zero, otherwise don't touch the inner qdisc. The current tc
version enforces a limit above zero; other users can avoid creating
the inner qdisc by using zero.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
6d037a26f08711a222ed0d3d12b09e93eed7d3e8 21-Mar-2006 Patrick McHardy <kaber@trash.net> [PKT_SCHED]: Qdisc drop operation is optional

The drop operation is optional and qdiscs must check whether their
children support it.

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
1da177e4c3f41524e886b7f1b8a0c1fc7321cac2 17-Apr-2005 Linus Torvalds <torvalds@ppc970.osdl.org> Linux-2.6.12-rc2

Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!