History log of /net/netfilter/xt_connlimit.c
Revision Date Author Comments
e00b437b3d6d4d26ecd95108b575ee1bcfcb478f 20-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: move lock array out of struct connlimit_data

Eric points out that the locks can be global.
Moreover, both Jesper and Eric note that using only 32 locks increases
false sharing as only two cache lines are used.

This increases the lock count to 256 (16 cache lines, assuming a 64-byte
cache line and 4 bytes per spinlock).
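
An illustrative sketch of the resulting layout (array and macro names assumed,
not taken from the patch): a file-scope array of 256 spinlocks; at 4 bytes per
lock that is 1 KiB, i.e. 16 cache lines of 64 bytes, so concurrently used slots
rarely share a cache line.

#include <linux/spinlock.h>

#define CONNLIMIT_LOCK_SLOTS 256	/* 256 locks * 4 bytes = 16 x 64-byte lines */

static spinlock_t connlimit_locks[CONNLIMIT_LOCK_SLOTS] __cacheline_aligned_in_smp;

static void connlimit_locks_init(void)
{
	int i;

	for (i = 0; i < CONNLIMIT_LOCK_SLOTS; i++)
		spin_lock_init(&connlimit_locks[i]);
}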

Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
e5ac6eafba887821044c65b6fe59d9eb8b7c7f61 17-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: fix UP build

We cannot use ARRAY_SIZE() if spinlock_t is an empty struct (as it is on some
UP configurations): sizeof(spinlock_t) is then zero and the sizeof-based
division in ARRAY_SIZE() breaks the build.

Fixes: 1442e7507dd597 ("netfilter: connlimit: use keyed locks")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
7d08487777c8b30dea34790734d708470faaf1e5 12-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: use rbtree for per-host conntrack obj storage

With the current match design, every invocation of the connlimit_match
function means we have to perform roughly (number_of_conntracks / 256) lookups
in the conntrack table [ to perform GC/delete stale entries ].
This is also the reason why ____nf_conntrack_find() shows up in perf top with
> 20% cpu time per core.

This patch changes the storage to an rbtree, which cuts down the number of
ct objects that need testing.

When looking up a new tuple, we only test the connections of the host
objects we visit while searching for the wanted host/network (or
the leaf we need to insert at).

The slot count is reduced to 32. Increasing the slot count doesn't
speed things up much because of the rbtree's nature.
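
A rough sketch of the data layout this describes (type and field names are
illustrative, not the patch's own): each hash slot holds an rbtree keyed by
source host/network, and each tree node carries only the connections of that
host, so a lookup touches one root-to-leaf path instead of a whole bucket.

#include <linux/rbtree.h>
#include <linux/list.h>
#include <linux/netfilter.h>			/* union nf_inet_addr */
#include <net/netfilter/nf_conntrack_tuple.h>

struct connlimit_conn {				/* one tracked connection */
	struct hlist_node		node;
	struct nf_conntrack_tuple	tuple;
};

struct connlimit_host {				/* one host/network = one tree node */
	struct rb_node			node;
	struct hlist_head		connections;	/* conns of this host only */
	union nf_inet_addr		addr;		/* rbtree search key */
};

struct connlimit_data {
	struct rb_root			root[32];	/* one tree per hash slot */
};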

before patch (50kpps rx, 10kpps tx):
+ 20.95% ksoftirqd/0 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.50% ksoftirqd/1 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.27% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 5.76% ksoftirqd/1 [nf_conntrack] [k] hash_conntrack_raw
+ 5.39% ksoftirqd/2 [nf_conntrack] [k] hash_conntrack_raw
+ 5.35% ksoftirqd/0 [nf_conntrack] [k] hash_conntrack_raw

after (90kpps rx, 51kpps tx):
+ 17.24% swapper [nf_conntrack] [k] ____nf_conntrack_find
+ 6.60% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 2.73% swapper [nf_conntrack] [k] hash_conntrack_raw
+ 2.36% swapper [xt_connlimit] [k] count_tree

Obvious disadvantages compared to the previous version are the increase in
code complexity and the increased memory cost.

Partially based on Eric Dumazet's fq scheduler.

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
50e0e9b12914dd82d1ece22d57bf8c146a1d1b52 12-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: make same_source_net signed

It currently returns 1 if they're the same. Make it work like memcmp/strcmp
so it can be used as an rbtree search function.
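
A sketch of the memcmp-like contract and the rbtree descent it enables, reusing
the illustrative types from the rbtree sketch above (the comparison shown here
simply memcmp()s the addresses; the real helper may differ):

#include <linux/string.h>	/* memcmp() */

/* Returns <0, 0 or >0, like memcmp(), so it can steer an rbtree walk. */
static int cmp_masked_addr(const union nf_inet_addr *a,
			   const union nf_inet_addr *b)
{
	return memcmp(a->ip6, b->ip6, sizeof(a->ip6));
}

static struct connlimit_host *find_host(struct rb_root *root,
					const union nf_inet_addr *key)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct connlimit_host *host =
			rb_entry(n, struct connlimit_host, node);
		int diff = cmp_masked_addr(key, &host->addr);

		if (diff < 0)
			n = n->rb_left;
		else if (diff > 0)
			n = n->rb_right;
		else
			return host;	/* same source net */
	}
	return NULL;
}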

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
1442e7507dd597cc701b224d3cc9bf1f165e928b 12-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: use keyed locks

connlimit currently suffers from spinlock contention; example for a
4-core system with RPS enabled:

+ 20.84% ksoftirqd/2 [kernel.kallsyms] [k] _raw_spin_lock_bh
+ 20.76% ksoftirqd/1 [kernel.kallsyms] [k] _raw_spin_lock_bh
+ 20.42% ksoftirqd/0 [kernel.kallsyms] [k] _raw_spin_lock_bh
+ 6.07% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 6.07% ksoftirqd/1 [nf_conntrack] [k] ____nf_conntrack_find
+ 5.97% ksoftirqd/0 [nf_conntrack] [k] ____nf_conntrack_find
+ 2.47% ksoftirqd/2 [nf_conntrack] [k] hash_conntrack_raw
+ 2.45% ksoftirqd/0 [nf_conntrack] [k] hash_conntrack_raw
+ 2.44% ksoftirqd/1 [nf_conntrack] [k] hash_conntrack_raw

This may allow parallel lookup/insert/delete if entries are hashed to
different slots. With the patch:

+ 20.95% ksoftirqd/0 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.50% ksoftirqd/1 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.27% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 5.76% ksoftirqd/1 [nf_conntrack] [k] hash_conntrack_raw
+ 5.39% ksoftirqd/2 [nf_conntrack] [k] hash_conntrack_raw
+ 5.35% ksoftirqd/0 [nf_conntrack] [k] hash_conntrack_raw
+ 2.00% ksoftirqd/1 [kernel.kallsyms] [k] __rcu_read_unlock

Improved rx processing rate from ~35kpps to ~50kpps.
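
A sketch of the keyed-lock pattern behind these numbers (field names assumed):
the bucket index derived from the hash selects both the hlist and the lock
guarding it, so flows that hash to different slots no longer serialize on a
single spinlock.

unsigned int slot = hash % ARRAY_SIZE(data->hlist);
spinlock_t *lock = &data->locks[slot % ARRAY_SIZE(data->locks)];

spin_lock_bh(lock);
/* lookup/insert/delete confined to data->hlist[slot] */
spin_unlock_bh(lock);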

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
14e1a977767e95ca48504975efff2bdf1b198ca0 07-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: use kmem_cache for conn objects

We might allocate thousands of these (one object per connection).
Use a distinct kmem cache to permit simple tracking of how many
objects are currently in use by the connlimit match via sysfs.
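
A sketch of the dedicated cache (cache name assumed; struct as in the sketches
above). A named kmem_cache makes the per-connection objects visible in the slab
statistics (/proc/slabinfo and the slab entries under sysfs):

#include <linux/slab.h>

static struct kmem_cache *connlimit_conn_cachep __read_mostly;

/* module init */
connlimit_conn_cachep = kmem_cache_create("connlimit_conn",
					  sizeof(struct connlimit_conn),
					  0, 0, NULL);

/* match hot path: one object per new connection */
conn = kmem_cache_alloc(connlimit_conn_cachep, GFP_ATOMIC);

/* when the connection is gone */
kmem_cache_free(connlimit_conn_cachep, conn);

/* module exit */
kmem_cache_destroy(connlimit_conn_cachep);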

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
3bcc5fdf1b1a00be162159c420ea04e0adf709ec 07-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: move insertion of new element out of count function

Allows easier code-reuse in followup patches.

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
d9ec4f1ee280e5f8732e3c40ca672419b2532600 07-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: improve packet-to-closed-connection logic

Instead of freeing the entry from our list and then adding
it back again in the 'packet to closing connection' case, just keep the
matching entry around. Also drop the found_ct != NULL test, as
nf_ct_tuplehash_to_ctrack() is just container_of().
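
Why the NULL test is redundant, sketched (the real helper lives in
nf_conntrack.h; the body below is only an approximation): container_of() is
plain pointer arithmetic, so a non-NULL tuplehash always yields a non-NULL
conntrack.

static inline struct nf_conn *
tuplehash_to_ctrack(const struct nf_conntrack_tuple_hash *hash)
{
	return container_of(hash, struct nf_conn,
			    tuplehash[hash->tuple.dst.dir]);
}

/* hence: found_ct = nf_ct_tuplehash_to_ctrack(found) is never NULL here */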

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
15cfd52895751e8f36b48b8ad33f1d68b59611e2 07-Mar-2014 Florian Westphal <fw@strlen.de> netfilter: connlimit: factor hlist search into new function

Simplifies the follow-up patch that introduces separate locks for each of
the hash slots.

Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2a50d805e59ed18265fca44825719f35927af8af 06-Jan-2014 Pablo Neira Ayuso <pablo@netfilter.org> Revert "netfilter: avoid get_random_bytes calls"

This reverts commit a42b99a6e329654d376b330de057eff87686d890.

Hannes Frederic Sowa reported some problems with this patch, more specifically
that prandom_u32() may not be ready at boot time, see:

http://marc.info/?l=linux-netdev&m=138896532403533&w=2

Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
a42b99a6e329654d376b330de057eff87686d890 19-Dec-2013 Florian Westphal <fw@strlen.de> netfilter: avoid get_random_bytes calls

All these users need an initial seed value for jhash; prandom is
perfectly fine for that. This avoids draining the entropy pool where
it's not strictly required.

nfnetlink_log did not use the random value at all.
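
A sketch of the change for the connlimit case (variable name assumed): the
value is only ever used as a jhash seed when picking a hash bucket, so a
per-boot pseudo-random number is sufficient and the entropy pool stays
untouched.

#include <linux/jhash.h>
#include <linux/random.h>

static u32 connlimit_rnd __read_mostly;

/* old: get_random_bytes(&connlimit_rnd, sizeof(connlimit_rnd)); */
connlimit_rnd = prandom_u32();

/* the seed only feeds the bucket hash */
hash = jhash2((const u32 *)addr->ip6, ARRAY_SIZE(addr->ip6), connlimit_rnd);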

Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
b67bfe0d42cac56c512dd5da4b1b347a23f4b70a 28-Feb-2013 Sasha Levin <sasha.levin@oracle.com> hlist: drop the node parameter from iterators

I'm not sure why, but the hlist for each entry iterators were conceived
differently from the list ones. While the list ones are nice and elegant:

list_for_each_entry(pos, head, member)

The hlist ones were greedy and wanted an extra parameter:

hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only
do they not really need it, it also prevents the iterator from looking
exactly like the list iterator, which is unfortunate.
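
For this file the change boils down to dropping the extra cursor; a sketch with
illustrative variable names:

/* before: a separate hlist_node cursor had to be declared */
struct hlist_node *pos;
struct connlimit_conn *conn;

hlist_for_each_entry(conn, pos, hash_head, node)
	count++;

/* after: same shape as list_for_each_entry() */
struct connlimit_conn *conn;

hlist_for_each_entry(conn, hash_head, node)
	count++;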

Besides the semantic patch, there was some manual work required:

- Fix up the actual hlist iterators in linux/list.h
- Fix up the declaration of other iterators based on the hlist ones.
- A very small number of places were using the 'node' parameter; these
were modified to use 'obj->member' instead.
- Coccinelle didn't handle the hlist_for_each_entry_safe iterator
properly, so those had to be fixed up manually.

The semantic patch which is mostly the work of Peter Senna Tschudin is here:

@@
iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;

type T;
expression a,c,d,e;
identifier b;
statement S;
@@

-T b;
<+... when != b
(
hlist_for_each_entry(a,
- b,
c, d) S
|
hlist_for_each_entry_continue(a,
- b,
c) S
|
hlist_for_each_entry_from(a,
- b,
c) S
|
hlist_for_each_entry_rcu(a,
- b,
c, d) S
|
hlist_for_each_entry_rcu_bh(a,
- b,
c, d) S
|
hlist_for_each_entry_continue_rcu_bh(a,
- b,
c) S
|
for_each_busy_worker(a, c,
- b,
d) S
|
ax25_uid_for_each(a,
- b,
c) S
|
ax25_for_each(a,
- b,
c) S
|
inet_bind_bucket_for_each(a,
- b,
c) S
|
sctp_for_each_hentry(a,
- b,
c) S
|
sk_for_each(a,
- b,
c) S
|
sk_for_each_rcu(a,
- b,
c) S
|
sk_for_each_from
-(a, b)
+(a)
S
+ sk_for_each_from(a) S
|
sk_for_each_safe(a,
- b,
c, d) S
|
sk_for_each_bound(a,
- b,
c) S
|
hlist_for_each_entry_safe(a,
- b,
c, d, e) S
|
hlist_for_each_entry_continue_rcu(a,
- b,
c) S
|
nr_neigh_for_each(a,
- b,
c) S
|
nr_neigh_for_each_safe(a,
- b,
c, d) S
|
nr_node_for_each(a,
- b,
c) S
|
nr_node_for_each_safe(a,
- b,
c, d) S
|
- for_each_gfn_sp(a, c, d, b) S
+ for_each_gfn_sp(a, c, d) S
|
- for_each_gfn_indirect_valid_sp(a, c, d, b) S
+ for_each_gfn_indirect_valid_sp(a, c, d) S
|
for_each_host(a,
- b,
c) S
|
for_each_host_safe(a,
- b,
c, d) S
|
for_each_mesh_entry(a,
- b,
c, d) S
)
...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foundation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
68c07cb6d8aa05daf38ab47d5bb674d81a2066fb 19-May-2012 Cong Wang <xiyou.wangcong@gmail.com> netfilter: xt_connlimit: remove revision 0

It was scheduled to be removed.

Cc: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
4656c4d61adb8dc3ee04c08f57a5cc7598814420 15-Mar-2011 Changli Gao <xiaosuo@gmail.com> netfilter: xt_connlimit: remove connlimit_rnd_inited

A potential race condition when generating connlimit_rnd is also fixed.

Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
3e0d5149e6dcbe7111a63773a07c5b33f7ca7236 15-Mar-2011 Changli Gao <xiaosuo@gmail.com> netfilter: xt_connlimit: use hlist instead

The head structure of an hlist is smaller than that of a list.
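
The saving comes from the head type; roughly as defined in <linux/types.h>, a
hash table of N empty buckets needs one pointer per bucket instead of two:

struct list_head {			/* doubly linked list head: 2 pointers */
	struct list_head *next, *prev;
};

struct hlist_head {			/* hash-list head: 1 pointer */
	struct hlist_node *first;
};

struct hlist_node {
	struct hlist_node *next, **pprev;
};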

Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
0e23ca14f8e76091b402c01e2b169aba3d187b98 15-Mar-2011 Changli Gao <xiaosuo@gmail.com> netfilter: xt_connlimit: use kmalloc() instead of kzalloc()

All the members are explicitly initialized after allocation, so the zeroing
done by kzalloc() is unnecessary.

Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
8183e3a88aced228ab9770762692be6cc3786e80 15-Mar-2011 Changli Gao <xiaosuo@gmail.com> netfilter: xt_connlimit: fix daddr connlimit in SNAT scenario

We use the reply tuples when limiting the connections by the destination
addresses; however, in an SNAT scenario, the final reply tuples won't be
ready until SNAT is done in the POSTROUTING or INPUT chain, so the following
nf_conntrack_find_get() in count_them() will find nothing and connlimit
can't work as expected.

In this patch, the original tuples are always used, and an additional
member, addr, is appended to save the address of either end.
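
A sketch of the per-connection record after this change (struct name assumed):
the original-direction tuple is kept for the conntrack lookup, while addr
caches the address of whichever end is being limited on.

struct connlimit_conn {
	struct list_head		list;
	struct nf_conntrack_tuple	tuple;	/* original direction */
	union nf_inet_addr		addr;	/* saved source or destination address */
};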

Signed-off-by: Changli Gao <xiaosuo@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
20b7975e5aefc7fd08b7f582f3901b1669725cd0 14-Feb-2011 Stefan Berger <stefanb@linux.vnet.ibm.com> Revert "netfilter: xt_connlimit: connlimit-above early loop termination"

This reverts commit 44bd4de9c2270b22c3c898310102bc6be9ed2978.

I have to revert the early loop termination in connlimit since it generates
problems when an iptables statement does not use -m state --state NEW before
the connlimit match extension.

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
44bd4de9c2270b22c3c898310102bc6be9ed2978 11-Feb-2011 Stefan Berger <stefanb@linux.vnet.ibm.com> netfilter: xt_connlimit: connlimit-above early loop termination

The patch below introduces an early termination of the loop that is
counting matches. It terminates once the counter has exceeded the
threshold provided by the user. There's no point in continuing the loop
afterwards and looking at other entries.

It plays together with the following code further below:

return (connections > info->limit) ^ info->inverse;

where connections is the result of the counting, which in turn
is the matches variable in the loop. So once

-> matches = info->limit + 1
alias -> matches > info->limit
alias -> matches > threshold

we can terminate the loop.
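
Inside the counting loop the termination looks roughly like this (sketch):

/* Once matches exceeds the limit, the truth value of
 * (connections > info->limit) is settled, so further counting cannot
 * change the verdict regardless of info->inverse. */
if (++matches > info->limit)
	break;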

Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
ad86e1f27a9a97a9e50810b10bca678407b1d6fd 26-Jan-2011 Jan Engelhardt <jengelh@medozas.de> netfilter: xt_connlimit: pick right dstaddr in NAT scenario

xt_connlimit normally records the "original" tuples in a hashlist
(such as "1.2.3.4 -> 5.6.7.8"), and looks in this list for iph->daddr
when counting.

However, when the user uses DNAT in PREROUTING, looking for
iph->daddr -- which is now 192.168.9.10 -- will not match. Thus in
daddr mode, we need to record the reverse direction tuple
("192.168.9.10 -> 1.2.3.4") instead. In the reverse tuple, the dst
addr is on the src side, which is convenient, as count_them still uses
&conn->tuple.src.u3.
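
A sketch of the resulting tuple selection (the flag name is assumed): in daddr
mode the reply-direction tuple is recorded, which puts the pre-DNAT destination
on the tuple's source side, so the existing src.u3-based counting keeps
working.

const struct nf_conntrack_tuple *tuple;

if (info->flags & XT_CONNLIMIT_DADDR)		/* limit per destination address */
	tuple = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
else						/* limit per source address */
	tuple = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;

/* count_them() keeps comparing against &conn->tuple.src.u3 */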

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
cc4fc022571376412986e27e08b0765e9cb2aafb 18-Jan-2011 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: connlimit revision 1

This adds destination address-based selection. The old "inverse"
member is overloaded (memory-wise) with a new "flags" variable,
similar to how J.Park did it with xt_string rev 1. Since revision 0
userspace only sets flag 0x1, no great changes are made to explicitly
test for different revisions.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
1cc34c30be0e27d4ba8c1ce04a8a4f46c927d121 18-Jan-2011 Richard Weinberger <richard@nod.at> netfilter: xt_connlimit: use hotdrop jump mark

Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
b4ba26119b06052888696491f614201817491a0d 07-Jul-2009 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: change hotdrop pointer to direct modification

Since xt_action_param is writable, let's use it. The pointer to
'bool hotdrop' was always worrying (8 bytes on 64-bit just to write 1 byte!).
Surprisingly, this results in a reduction in size:

text data bss filename
5457066 692730 357892 vmlinux.o-prev
5456554 692730 357892 vmlinux.o
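
The change in the extensions themselves is mechanical; a sketch:

/* before: xt_action_param carried a pointer to the verdict flag */
*par->hotdrop = true;

/* after: the structure is writable, so the flag is stored directly */
par->hotdrop = true;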

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
62fc8051083a334578c3f4b3488808f210b4565f 07-Jul-2009 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: deconstify struct xt_action_param for matches

In future, layer-3 matches will be an xt module of their own, and
need to set the fragoff and thoff fields. Adding more pointers would
needlessly increase memory requirements (especially on 64-bit, where
pointers are wider).

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
4b560b447df83368df44bd3712c0c39b1d79ba04 05-Jul-2009 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: substitute temporary defines by final name

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
5a0e3ad6af8660be21ca98a971cd00f331318c05 24-Mar-2010 Tejun Heo <tj@kernel.org> include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h

percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script is
used as the basis of the conversion.

http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.

Given the fact that I had only a couple of failures from tests on step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
4a5a5c73b7cfee46a0b1411903cfa0dea532deec 19-Mar-2010 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: slightly better error reporting

When extended status codes are available, such as ENOMEM on failed
allocations, or subsequent functions (e.g. nf_ct_get_l3proto), passing
them up to userspace seems like a good idea compared to just always
EINVAL.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
bd414ee605ff3ac5fcd79f57269a897879ee4cde 23-Mar-2010 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: change matches to return error code

The following semantic patch does part of the transformation:
// <smpl>
@ rule1 @
struct xt_match ops;
identifier check;
@@
ops.checkentry = check;

@@
identifier rule1.check;
@@
check(...) { <...
-return true;
+return 0;
...> }

@@
identifier rule1.check;
@@
check(...) { <...
-return false;
+return -EINVAL;
...> }
// </smpl>

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
b0f38452ff73da7e9e0ddc68cd5c6b93c897ca0d 19-Mar-2010 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: change xt_match.checkentry return type

Restore function signatures from bool to int so that we can report
memory allocation failures or similar using -ENOMEM rather than
always having to pass -EINVAL back.

This semantic patch may not be too precise (checking for functions
that use xt_mtchk_param rather than functions referenced by
xt_match.checkentry), but reviewed, it produced the intended result.

// <smpl>
@@
type bool;
identifier check, par;
@@
-bool check
+int check
(struct xt_mtchk_param *par) { ... }
// </smpl>

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
8bee4bad03c5b601bd6cea123c31025680587ccc 17-Mar-2010 Jan Engelhardt <jengelh@medozas.de> netfilter: xt extensions: use pr_<level>

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
408ffaa4a11ddd6f730be520479fd5cd890c57d3 28-Feb-2010 Jan Engelhardt <jengelh@medozas.de> netfilter: update my email address

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
5d0aa2ccd4699a01cfdf14886191c249d7b45a01 15-Feb-2010 Patrick McHardy <kaber@trash.net> netfilter: nf_conntrack: add support for "conntrack zones"

Normally, each connection needs a unique identity. Conntrack zones allow
specifying a numerical zone using the CT target; connections in different
zones can use the same identity.

Example:

iptables -t raw -A PREROUTING -i veth0 -j CT --zone 1
iptables -t raw -A OUTPUT -o veth1 -j CT --zone 1

Signed-off-by: Patrick McHardy <kaber@trash.net>
83fc81024bd8572f31db784f8c0079e999a4fa44 18-Jan-2010 Alexey Dobriyan <adobriyan@gmail.com> netfilter: xt_connlimit: netns support

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
294188ae32f984a072c64c959354b2f6f52f80a7 04-Jan-2010 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: obtain random bytes earlier, in checkentry

We can initialize the random hash bytes on checkentry. This is
preferable since it is outside the hot path.

Reference: http://bugzilla.netfilter.org/show_bug.cgi?id=621
Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
539054a8fa5141c9a4e9ac6a86d249e3f2bdef45 07-Nov-2009 Jan Engelhardt <jengelh@medozas.de> netfilter: xt_connlimit: fix regression caused by zero family value

Commit v2.6.28-rc1~717^2~109^2~2 was slightly incomplete; not all
instances of par->match->family were changed to par->family.

References: http://bugzilla.netfilter.org/show_bug.cgi?id=610
Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
ea781f197d6a835cbb93a0bf88ee1696296ed8aa 25-Mar-2009 Eric Dumazet <dada1@cosmosbay.com> netfilter: nf_conntrack: use SLAB_DESTROY_BY_RCU and get rid of call_rcu()

Use "hlist_nulls" infrastructure we added in 2.6.29 for RCUification of UDP & TCP.

This permits an easy conversion from call_rcu() based hash lists to a
SLAB_DESTROY_BY_RCU one.

Avoiding the call_rcu() delay at nf_conn freeing time has numerous gains.

First, it doesn't fill RCU queues (up to 10000 elements per CPU).
This reduces the OOM possibility if queued elements are not taken into account,
and it reduces latency problems when the RCU queue size hits its high limit and
triggers emergency mode.

- It allows fast reuse of just freed elements, permitting better use of
CPU cache.

- We delete rcu_head from "struct nf_conn", shrinking the size of this structure
by 8 or 16 bytes.

This patch only takes care of "struct nf_conn".
call_rcu() is still used for less critical conntrack parts, that may
be converted later if necessary.
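
SLAB_DESTROY_BY_RCU only guarantees that the memory keeps its type while a
reader is inside an RCU section; a freed nf_conn can be reused immediately for
another conntrack. Lockless lookups therefore have to take a reference and
re-validate the key. A generic sketch of that reader contract (the helper names
here are hypothetical, not the conntrack API):

rcu_read_lock();
again:
obj = lockless_hash_lookup(key);		/* hypothetical lockless lookup */
if (obj) {
	if (!atomic_inc_not_zero(&obj->refcnt))	/* refcount hit zero: being freed */
		goto again;
	if (!keys_match(obj, key)) {		/* slab recycled the object: retry */
		put_object(obj);
		goto again;
	}
	/* obj is now referenced and verified; safe to use */
}
rcu_read_unlock();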

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
92f3b2b1bc968caaabee8cd78bee75ab7c4af74e 08-Oct-2008 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: cut down on static data for family-independent extensions

Using ->family in struct xt_*_param, multiple struct xt_{match,target}
can be squashed together.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
6be3d8598e883fb632edf059ba2f8d1b9f4da138 08-Oct-2008 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: move extension arguments into compound structure (3/6)

This patch does this for match extensions' destroy functions.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
9b4fce7a3508a9776534188b6065b206a9608ccf 08-Oct-2008 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: move extension arguments into compound structure (2/6)

This patch does this for match extensions' checkentry functions.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
f7108a20dee44e5bb037f9e48f6a207b42e6ae1c 08-Oct-2008 Jan Engelhardt <jengelh@medozas.de> netfilter: xtables: move extension arguments into compound structure (1/6)

The function signatures for Xtables extensions have grown over time.
This involves a lot of typing/replication, and also costs a bit of stack space
even if the arguments are not used.
structs. The skb remains outside of the struct so gcc can continue to
apply its optimizations.

This patch does this for match extensions' match functions.

A few ambiguities have also been addressed. The "offset" parameter for
example has been renamed to "fragoff" (there are so many different
offsets already) and "protoff" to "thoff" (there is more than just one
protocol here, so clarify).
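
The packed argument structure and the resulting match prototype look roughly
along these lines (the exact field list here is illustrative and may differ
from the patch):

struct xt_match_param {
	const struct net_device *in, *out;	/* ingress/egress devices */
	const struct xt_match *match;
	const void *matchinfo;			/* per-rule match data */
	int fragoff;				/* was: "offset" */
	unsigned int thoff;			/* was: "protoff" */
	bool *hotdrop;
	u_int8_t family;
};

/* match functions shrink to: */
bool (*match)(const struct sk_buff *skb, const struct xt_match_param *par);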

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
400dad39d1c33fe797e47326d87a3f54d0ac5181 08-Oct-2008 Alexey Dobriyan <adobriyan@gmail.com> netfilter: netns nf_conntrack: per-netns conntrack hash

* make per-netns conntrack hash

The other solution is to add a ->ct_net pointer to tuplehashes and still have one
hash; I tried that, and it's ugly and requires more code deep down in protocol
modules et al.

* propagate netns pointer to where needed, e. g. to conntrack iterators.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Patrick McHardy <kaber@trash.net>
ee999d8b9573df1b547aacdc6d79f86eb79c25cd 08-Oct-2008 Jan Engelhardt <jengelh@medozas.de> netfilter: x_tables: use NFPROTO_* in extensions

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
76108cea065cda58366d16a7eb6ca90d717a1396 08-Oct-2008 Jan Engelhardt <jengelh@medozas.de> netfilter: Use unsigned types for hooknum and pf vars

and (try to) consistently use u_int8_t for the L3 family.

Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
d2ee3f2c4b1db1320c1efb4dcaceeaf6c7e6c2d3 04-Jun-2008 Dong Wei <dwei.zh@gmail.com> netfilter: xt_connlimit: fix accounting when receiving RST packet in ESTABLISHED state

In the xt_connlimit match module, the counter for an IP is decreased when
a TCP packet goes through the chain with ip_conntrack state TW.
Well, it's very natural that the server and client close the socket
with a FIN packet. But when the client/server closes the socket with an RST
packet (using SO_LINGER), the counter for this connection still exists.
The following patch, which is based on linux-2.6.25.4, fixes this.

Signed-off-by: Dong Wei <dwei.zh@gmail.com>
Acked-by: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
5e8fbe2ac8a3f1e34e7004c5750ef59bf9304f82 14-Apr-2008 Patrick McHardy <kaber@trash.net> [NETFILTER]: nf_conntrack: add tuplehash l3num/protonum accessors

Add accessors for l3num and protonum and get rid of some overly long
expressions.

Signed-off-by: Patrick McHardy <kaber@trash.net>
3cf93c96af7adf78542d45f8a27f0e5f8704409d 14-Apr-2008 Jan Engelhardt <jengelh@computergmbh.de> [NETFILTER]: annotate xtables targets with const and remove casts

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
ba419aff2cda91680e5d4d3eeff95df49bd2edec 31-Jan-2008 Patrick McHardy <kaber@trash.net> [NETFILTER]: nf_conntrack: optimize __nf_conntrack_find()

Ignoring specific entries in __nf_conntrack_find() is only needed by NAT
for nf_conntrack_tuple_taken(). Remove it from __nf_conntrack_find()
and make nf_conntrack_tuple_taken() search the hash itself.

Saves 54 bytes of text in the hotpath on x86_64:

__nf_conntrack_find | -54 # 321 -> 267, # inlines: 3 -> 2, size inlines: 181 -> 127
nf_conntrack_tuple_taken | +305 # 15 -> 320, lexblocks: 0 -> 3, # inlines: 0 -> 3, size inlines: 0 -> 181
nf_conntrack_find_get | -2 # 90 -> 88
3 functions changed, 305 bytes added, 56 bytes removed, diff: +249

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
76507f69c44ed199a1a68086145398459e55835d 31-Jan-2008 Patrick McHardy <kaber@trash.net> [NETFILTER]: nf_conntrack: use RCU for conntrack hash

Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2ae15b64e6a1608c840c60df38e8e5eef7b2b8c3 15-Jan-2008 Jan Engelhardt <jengelh@computergmbh.de> [NETFILTER]: Update modules' descriptions

Updates the MODULE_DESCRIPTION() tags for all Netfilter modules,
actually describing what the module does and not just
"netfilter XYZ target".

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
22c2d8bca212a655c120fd6617328ffa3480afad 18-Dec-2007 Jan Engelhardt <jengelh@computergmbh.de> [NETFILTER]: xt_connlimit: use the new union nf_inet_addr

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
643a2c15a407faf08101a20e1a3461160711899d 18-Dec-2007 Jan Engelhardt <jengelh@computergmbh.de> [NETFILTER]: Introduce nf_inet_address

A few netfilter modules provide their own union of IPv4 and IPv6
address storage. Will unify that in this patch series.

(1/4): Rename union nf_conntrack_address to union nf_inet_addr and
move it to x_tables.h.
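
The unified union looks roughly like this (as found in the netfilter headers):

union nf_inet_addr {
	__u32		all[4];
	__be32		ip;
	__be32		ip6[4];
	struct in_addr	in;
	struct in6_addr	in6;
};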

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
d3c5ee6d545b5372fd525ebe16988a5b6efeceb0 05-Dec-2007 Jan Engelhardt <jengelh@computergmbh.de> [NETFILTER]: x_tables: consistent and unique symbol names

Give all Netfilter modules consistent and unique symbol names.

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
ba5dc2756cc305c055dbb253b8fcdc459f0f8e73 06-Nov-2007 Jan Engelhardt <jengelh@computergmbh.de> [NETFILTER]: Copyright/Email update

Transfer all my copyright over to our company.

Signed-off-by: Jan Engelhardt <jengelh@computergmbh.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
a34c45896a723ee7b13128ac8bf564ea42fcd1eb 26-Jul-2007 Al Viro <viro@ftp.linux.org.uk> netfilter endian regressions

no real bugs, just misannotations cropping up

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
370786f9cfd430cb424f00ce4110e75bb1b95a19 15-Jul-2007 Jan Engelhardt <jengelh@gmx.de> [NETFILTER]: x_tables: add connlimit match

ipt_connlimit has been sitting in POM-NG for a long time.
Here is a new shiny xt_connlimit with:

* xtables'ified
* will request the layer3 module
(previously it hotdropped every packet when it was not loaded)
* fixed: there was a deadlock in case of an OOM condition
* support for any layer4 protocol (e.g. UDP/SCTP)
* using jhash, as suggested by Eric Dumazet
* ipv6 support

Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>