History log of /external/squashfs-tools/squashfs-tools/mksquashfs.h
Revision Date Author Comments
e19ce45ed7ae61bf318bb967a7076c8193e771d7 27-Jul-2014 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: move symlink reading from create_inode() to dir_scan1()

Up till now Mksquashfs did not read the contents of symlinks at
directory scan time, but left this task to the final filesystem
creation phase, when the symlink inode in the output squashfs
filesystem was created.

Now that we're adding action test operations that operate on symlinks,
this creates a problem. We need the value of the symlink when
evaluating it for existence, recursive evaluation via
readlink() etc. Up till now, when implementing these tests, I have
chosen to read the symlink from the source filesystem on demand and
then discard the value. The overarching reason for this is that
symlinks can reference other symlinks, and because these tests were
previously designed to be evaluated at exclude action time, when the
directory structure has not been fully scanned, we need to deal with
symlinks that have not yet been scanned. In other words, moving
symlink reading to directory scan time is no help when evaluating
symlinks at exclude time, because there is no guarantee that any
referenced symlinks have been read. There is no option but to fall back
to reading symlinks from the source filesystem at symlink test
evaluation time.

But evaluating symlinks by reading from the source filesystem is fraught
with difficulties, not least the fact that the existence of the
symlink in the source filesystem is no guarantee that the symlink exists
in the output filesystem. Additionally, there is never any guarantee
that the source filesystem hasn't changed whilst evaluating it,
rendering any checks meaningless.

The fix has been to accept that evaluating symlinks at exclude time,
when the directory structure has not been fully scanned, causes
insurmountable problems. The solution has been to introduce
a new prune action, which is evaluated on the fully scanned
directory structure. This alleviates all the aforementioned
problems.

So, now that the prune action has been implemented, this checkin
moves reading of symlinks to the dir scanning phase, so that the
snapshotted values are available for the symlink test operations.

An additional minor improvement is that failure to read the symlink
for some reason is now discovered at dir scanning time, allowing
Mksquashfs to ignore the symlink. Previously, because reading the
symlink was performed at filesystem creation time, a read failure meant
a dummy empty symlink had to be created. This is because at
filesystem creation time all the metadata for the filesystem has been
computed and partially written, and this includes the count
of the number of inodes.

This is called a "minor improvement" because in practice this
situation never occurs because due to the nature of symlinks if Mksquashfs
could stat it at dir scanning time, then it is guaranteed to
be able to read it at filesystem creation time, unless the symlink
has been deleted in the meantime.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
c3af83a34d8ccf72278d4ee824f3d57f31b79ce1 21-Apr-2014 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: rewrite default queue size code so it is based on physical memory

For many years Mksquashfs has used fixed size default queues.
This has made choosing a *usable* default set of queue sizes impossible
because, obviously, I have no idea what size physical memory
people have in their machines, or what amount of memory they
want Mksquashfs to use by default.

Choose too large a set of queues and Mksquashfs won't
be usable on machines with less RAM, and/or it
will go into swap and suffer extremely poor performance.

Choose too small a set of queues and we don't maximise Mksquashfs
performance by using the available RAM on machines with lots
of memory.

My original decision was to choose an extremely conservative set of
queues: only 64Mbytes for the Read-Queue and Fragment-Queue, and 512Mbytes
for the Write-Queue, which, because the writer thread is never the
bottleneck, is not expected to grow beyond ~20% of its maximum. So
a total queue size of less than 256Mbytes, which was conservative
when I made my estimate of the "usable RAM in a typical machine" in
2006, when I anticipated the typical machine might have 2Gbytes of RAM.
It is now ridiculously small.

My expectation was most people would see that the defaults were
(deliberately) made small, and use the -XXX-queue options to increase
the queue sizes to something more usable. This AFAIK has not
happened. People just run Mksquashfs, and as if by magic, they expect
Mksquashfs to just use the right amount of memory.

One reason for this, I suspect, is that I made a major mistake in
presenting 3 raw queue-size options to the user. Most people have
no idea what the queues are, and no idea what the right ratio is
between the 3 queue sizes... and obviously think changing them
"randomly" will break Mksquashfs, and so just leave well alone.

The moral of the story is that if you expect/hope people will increase
the memory used by Mksquashfs, then make sure you give them options
they will understand... and if they don't change the options, make
sure your default memory size is at least sensible for their
machine. I failed twice last time.

This commit is step 2. Base the default queue sizes on the amount of
physical memory. I have chosen to use 1/4 of physical memory.
Any more than this and we can't guarantee we won't hit memory pressure
on a loaded machine; any less than this, and we get back to the
silly-amount-of-memory-usage issue.
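
As an illustration of the principle only (a minimal sketch, not the
actual Mksquashfs code), deriving a memory budget of 1/4 of physical
memory might look like the following C fragment. The function name and
the way the budget is split between the three queues are assumptions
made for the example; only the 1/4-of-physical-memory budget comes
from this commit:

#include <unistd.h>

/* Sketch: compute a total queue budget of 1/4 of physical memory and
 * split it between the read, fragment and write queues.  The split
 * ratios below are illustrative assumptions, not Mksquashfs's values. */
static void default_queue_sizes(int *readq_mb, int *fragq_mb, int *writeq_mb)
{
        long long phys_bytes = (long long) sysconf(_SC_PHYS_PAGES) *
                                                sysconf(_SC_PAGESIZE);
        int total_mb = (int) (phys_bytes >> 20); /* physical memory in Mbytes */
        int budget_mb = total_mb / 4;            /* use 1/4 of physical memory */

        *readq_mb = budget_mb / 4;
        *fragq_mb = budget_mb / 4;
        *writeq_mb = budget_mb / 2;
}

This assumes a platform where sysconf() reports _SC_PHYS_PAGES (e.g.
Linux/glibc); other platforms would need a different probe.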

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
943acada7a60a048b4be53a4df1e94e8b10e08a6 17-Apr-2014 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: fix a potential non-default option deadlock

Fix a potential deadlock in Mksquashfs that may be triggerable using
non-default options.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
e3e69fc4edf5760932422f284edb7460221f0611 10-Apr-2014 Phillip Lougher <phillip@squashfs.org.uk> caches-queues-lists: dump reader thread -> process fragment threads queue

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
8bb17b0275fa35318ad35c8fd477023004f940aa 31-Mar-2014 Phillip Lougher <phillip@squashfs.org.uk> Mksquashfs: significantly optimise fragment duplicate checking

Remove the last remaining parallelisation bottleneck in
Mksquashfs: fragment duplicate checking, which was previously done
on the single main thread.

Back in 2006 when I first parallelised Mksquashfs, doing this on the
main thread was initially not considered to be too much
of an issue. If you don't have (m)any duplicates then
you avoid the issue full stop. But even when you do have fragments
which need to be checked, the necessary work of memcmp (memory compare)
is not too arduous and is much faster than the upstream file reader thread,
and much much faster than the downstream fragment compressor thread(s),
and so if Mksquashfs is running slowly then this is "not the bottleneck
you're looking for"; that's going to be either fragment compression or
file reading. This is on the basis that most duplicates are local and
the fragment referenced can be found in the fragment cache.

But often when I ran Mksquashfs and had a lot of duplicates, the
performance of Mksquashfs would be disappointing. Normally, without
duplicates, I expected to get full processor utilisation, but with
duplicates you might get roughly 200% or even 100% (i.e. one processor
core), at least while it was hitting a run of duplicates in
the source filesystem. Increasing the size of the fragment
cache would reduce the performance hit, which gave a substantial
hint that the problem was fragment cache misses, causing fragment
blocks to be read back off disk and decompressed on the single
main thread. But it was evident that wasn't the whole story.

The culprit has always self-evidently been the single-threaded
duplicate checking on the main thread; this has been apparent almost
since the initial parallelisation of Mksquashfs in 2006. But although
I've had my suspicions as to why (the hint above), with the
demands/prioritisation of extra functionality this has remained on my
TODO list until now.

Analysis has now shown the problem to be a triple whammy:

1. With duplicates (and even without), there are substantial fragment
cache misses, which make the single main thread spend a lot of its
duplicate-checking time reading fragment blocks off disk,
decompressing them, and then memcmp'ing them for a match. This is
because with a large filesystem, many fragments match at the
checksum level even though they're not actually a match at the byte
level - the checksums eliminate most files, but if you've got a large
filesystem that still leaves multiple files which match, and this
match is random and does not follow locality of reference. So
invariably these fragment blocks are no longer in the fragment
cache (if you're compressing 1Gbyte of files and have a 64Mbyte
(default) fragment cache, most checksum matches will invariably
not be in the cache, because they do not follow the "locality of
reference" rules; the checksum matches can literally be anywhere
in the part of the filesystem already compressed and written to disk).
The checksum matches could in theory be reduced by improving the
discriminating power of the checksums, but this is a zero-sum
game: the extra processing overhead of computing a more sophisticated
checksum for *all* blocks would easily outweigh the benefit of
fewer checksum matches.

2. Even with the knowledge that the main thread spends a lot of its
time reading in and decompressing fragment blocks, we're left with
the fact that the main thread has enough "bandwidth" to do this without
becoming a bottleneck, so there's more to the story.

The "more to the story" is that the main thread spends most of its
time asleep! As fragment compression is the bottleneck in any
Mksquashfs run, we run out of "empty fragment blocks" because all
of the fragment blocks become filled and get queued on the
fragment compression threads waiting to be compressed. So
the main thread sleeps waiting for an "empty fragment block" even
though it has a queue of files which it could be duplicate checking.

3. When the main thread does wake up having got an "empty fragment block"
and starts duplicate checking, if that duplicate checking takes
a long time (because it has to read in fragment blocks and decompress
them), then it 1. stops passing fragments to the fragment compressor
threads, and 2. stops taking fragments from the reader thread... So
both the fragment compressor threads and the reader thread starve.

Now, because both the reader thread and the fragment compressor threads
have deep queues, this doesn't happen instantaneously, but only if
the main thread hits a run of files which need multiple fragment
blocks to be read off disk, and decompressed. Unfortunately, that
*does* happen.

So, we end up with the situation where the main thread doesn't duplicate
check files ahead of time because it is blocked on the fragment
compressor threads. When it does wake up and do duplicate checking
(because it didn't do it ahead of time), it ends up starving the
fragment compressor threads and the reader thread for that duration -
hence we get a CPU utilisation of 100% *or less*, because only
the main thread is running.

The solution is to move duplicate checking to multiple
front-end processing threads, one per core, ahead of the main thread
(interposed between the reader thread and the main thread). The
front-end threads do duplicate checking on behalf of the
main thread. This eliminates the main thread bottleneck at a stroke,
because the front-end threads can duplicate check ahead of time,
even though the main thread is blocked on the fragment
compressors.

In theory simple, in practice extremely difficult. Two issues have
to be dealt with:

1. It introduces a level of fragment cache synchronisation hitherto
avoided due to clever design in Mksquashfs. Mksquashfs parallelisation
is coded on the producer-consumer principle: the producer thread
creates buffers in the cache, fills them in, and then passes them
to the consumer thread via a queue. The consumer thread only "sees"
the buffers when they're read from the queue, at which time the
consumer and producer have inherently synchronised, because the
consumer only gets them once the producer thread has explicitly
finished with the buffer. This technique AFAIK was introduced in CSP
(communicating sequential processes) and was adopted in its largely
forgotten descendant OCCAM. It eliminates explicit buffer locking
(a minimal sketch of the handoff pattern follows at the end of this
entry).

The front-end threads break this model because we get multiple
threads opportunistically looking up fragments in the fragment
cache, and then creating them if they're not available. So we
get the problem that threads can look up buffers and get them whilst
they're still being filled in, and races where two
threads can simultaneously create the same buffers. This can,
obviously, be dealt with by introducing the concept of "locked"
buffers etc., but it means adding an additional set of cache APIs
only for the fragment processing threads.

2. The front-end threads have to synchronise with the main thread
to do duplicate checking. At the time a front-end thread
does duplicate checking, there may appear to be no duplicates, but the
duplicate may itself still be being duplicate checked. Think of the
case where we have two files alphabetically one after another, say
"a" and "b": "a" goes to front-end thread 1, and "b" goes to
front-end thread 2. At this time neither file "exists" because it is
being duplicate checked, so thread 2 cannot determine that file "b" is
a duplicate of file "a", because "a" doesn't "exist" at this time.

This has to be done without introducing an inherent synchronisation
point on the main thread, which will only reintroduce the main thread
bottleneck "by the back door".

But it is actually more complex than that. There are two additional
points where synchronisation with the main thread "by the back door"
has to be avoided to get optimum performance. But, you'll have to look
at the code because this commit entry is too long as it is.

But the upshot of this improvement is that Mksquashfs speeds up by 10% -
60%, depending on the ratio of duplicate files to non-duplicate
files in the source filesystem, which is a significant improvement.
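
As a footnote to point 1 above, a minimal sketch of the CSP-style
producer-consumer handoff (illustrative names, not the Mksquashfs
cache/queue code itself) is shown below; the buffer itself needs no
lock because ownership passes through the queue:

#include <pthread.h>
#include <stdlib.h>

/* Minimal blocking queue: a producer fills a buffer and only then
 * queue_put()s it; the consumer only sees it via queue_get(), so the
 * buffer contents themselves never need locking. */
struct queue {
        void **data;
        int size, head, tail, count;
        pthread_mutex_t mutex;
        pthread_cond_t not_empty, not_full;
};

struct queue *queue_init(int size)
{
        struct queue *q = malloc(sizeof(struct queue));

        q->data = malloc(size * sizeof(void *));
        q->size = size;
        q->head = q->tail = q->count = 0;
        pthread_mutex_init(&q->mutex, NULL);
        pthread_cond_init(&q->not_empty, NULL);
        pthread_cond_init(&q->not_full, NULL);
        return q;
}

void queue_put(struct queue *q, void *buffer)
{
        pthread_mutex_lock(&q->mutex);
        while (q->count == q->size)
                pthread_cond_wait(&q->not_full, &q->mutex);
        q->data[q->tail] = buffer;
        q->tail = (q->tail + 1) % q->size;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->mutex);
}

void *queue_get(struct queue *q)
{
        void *buffer;

        pthread_mutex_lock(&q->mutex);
        while (q->count == 0)
                pthread_cond_wait(&q->not_empty, &q->mutex);
        buffer = q->data[q->head];
        q->head = (q->head + 1) % q->size;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->mutex);
        return buffer;
}

The front-end change has to add buffer locking on top of this pattern
precisely because several threads may now look up or create the same
cache buffer concurrently.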

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
b4fc3bf1abd7cf8d65cc68ad742a254d76f5da09 06-Feb-2014 Phillip Lougher <phillip@squashfs.org.uk> Mksquashfs: optimise duplicate checking when appending

When Mksquashfs is appending, the checksums for the fragments
in the original filesystem are lazily computed on demand: when
we get a possible duplicate whose fragment is stored in a fragment
block in the original filesystem, that block is read off disk,
decompressed, and the fragment checksum computed.

This allows fast startup as the fragment checksums are not computed
upfront, and of course if we never get a possible duplicate stored
in a particular original fragment block, it is never read off disk.

But, when we read the original fragment block off disk, we compute
the checksum for the fragment of interest only, and then store
the fragment block in the fragment cache.

If we get a reference to another fragment stored in that
fragment block, and the fragment block is still in the cache, then
we re-use it. If, on the other hand, the fragment block has been
aged out of the cache, we have to re-read and re-decompress
the same fragment block. This adds unnecessary overhead, and in
certain filesystems, where there are lots of duplicate matches
but distributed in time, we may end up re-reading the same
fragment blocks many times.

This commit adds an optimisation: when we read a fragment block off
disk, we compute the fragment checksums for *all* the fragments
stored in the fragment block. This avoids having to re-read the
fragment block off disk if it disappears from the fragment cache
before we get another duplicate check reference to it.
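
A hedged sketch of the idea (the structure, field names and stand-in
checksum below are assumptions for illustration, not the actual
Mksquashfs code): after decompressing a fragment block, checksum every
fragment known to live in it, not just the one that triggered the read.

/* Stand-in additive checksum, purely for illustration. */
static unsigned short get_checksum(char *buff, int bytes)
{
        unsigned short chksum = 0;

        while (bytes--)
                chksum += (unsigned char) *buff++;
        return chksum;
}

struct fragment_entry {
        long long start_block;  /* fragment block holding this fragment */
        int offset;             /* offset of the fragment in the block */
        int size;               /* uncompressed size of the fragment */
        unsigned short checksum;
        int checksum_valid;
};

/* Once a fragment block has been read and decompressed, compute and
 * cache the checksum of every fragment stored in it. */
static void checksum_whole_block(char *block_data, long long start_block,
        struct fragment_entry *frags, int nr_frags)
{
        int i;

        for (i = 0; i < nr_frags; i++) {
                if (frags[i].start_block != start_block ||
                                frags[i].checksum_valid)
                        continue;
                frags[i].checksum = get_checksum(block_data +
                        frags[i].offset, frags[i].size);
                frags[i].checksum_valid = 1;
        }
}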

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
ce2fef5e9d6e31648e93bda430fee37f0929c583 05-Feb-2014 Phillip Lougher <phillip@squashfs.org.uk> Mksquashfs: move a couple of things into mksquashfs.h

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
cf478e9fddfa332b91922ac1c626c5f0d4e65c74 29-May-2013 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: rename from_reader queue to to_deflate

The from_reader queue used to collect all the data queued from
the reader thread (hence its name). Data from the reader thread
is now queued to the to_main queue (uncompressed fragments etc.) and
to the from_reader queue, which is now solely used to queue
blocks to the deflator thread(s). So rename it to to_deflate, which
more accurately reflects how it is used.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
fb4a676b627f70e68a315c2c10be7c0fe53428a6 25-May-2013 Phillip Lougher <phillip@squashfs.org.uk> info: add locked fragment queue to dump_state()

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
0e1656d12ed8573d94f4c2735134aeb7c1a8f5b2 20-May-2013 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: change queue name from "from_deflate" to "to_main"

The queue now queues buffers sent from both the reader thread and
the deflate thread to the main thread, so the name "from_deflate"
is wrong, change to "to_main" which correctly reflects the queue
usage.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
1765698880f0a44f4ea1c6187fb3ffbd7e79a08d 11-May-2013 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: replace generic queue with specialised "sequential queue"

Replace the use of a generic queue and associated code in mksquashfs.c
to re-order out of order buffers (see previous commit) received from
the deflate threads (and reader thread) with a specialised
"sequential queue" that guarantees buffers are delivered in the
order specified in the "sequence" field, and which minimises
unnecessary wake-ups.

It will also ensure that pending queued buffers are held in the
queue rather than being "popped" off and held invisibly in a
structure private to mksquashfs.c. This also ensures a more
accurate display of queue status in the queue and cache
dump (generated if control-\ is hit twice within one second).

Currently queue status dumping of the seq_queue isn't implemented,
so comment that out for the time being.
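
A minimal sketch of what such a sequential queue can look like (the
names and layout are assumptions, not the actual seq_queue
implementation): buffers carry a sequence number, out-of-order
arrivals are held in the queue, and the consumer is only woken when
the buffer it is waiting for arrives.

#include <pthread.h>
#include <stddef.h>

struct seq_buffer {
        long long sequence;
        struct seq_buffer *next;
        /* payload would follow here */
};

struct seq_queue {
        long long expected;             /* next sequence number to deliver */
        struct seq_buffer *pending;     /* buffers held back, in any order */
        pthread_mutex_t mutex;
        pthread_cond_t wait;
};

void seq_queue_put(struct seq_queue *q, struct seq_buffer *b)
{
        pthread_mutex_lock(&q->mutex);
        b->next = q->pending;
        q->pending = b;
        /* only wake the consumer if this is the buffer it is waiting
         * for, minimising unnecessary wake-ups */
        if (b->sequence == q->expected)
                pthread_cond_signal(&q->wait);
        pthread_mutex_unlock(&q->mutex);
}

struct seq_buffer *seq_queue_get(struct seq_queue *q)
{
        struct seq_buffer *b, **prev;

        pthread_mutex_lock(&q->mutex);
        while (1) {
                for (prev = &q->pending; (b = *prev) != NULL; prev = &b->next)
                        if (b->sequence == q->expected) {
                                *prev = b->next;        /* unlink */
                                q->expected++;
                                pthread_mutex_unlock(&q->mutex);
                                return b;
                        }
                pthread_cond_wait(&q->wait, &q->mutex);
        }
}

A queue starts with expected = 0, pending = NULL and statically
initialised mutex/condition variables. Pending out-of-order buffers
stay visible on the pending list, which is what allows them to be
reported by a queue and cache state dump.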

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
7538d74f6fbefc11f20fe33ab75d5f197b2dee5c 22-Apr-2013 Phillip Lougher <phillip@squashfs.org.uk> info: add initial code to dump queue state when sent SIGHUP

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
ef15e43b707f22d64c13e9b2ca0eae8b165a1690 19-Apr-2013 Phillip Lougher <phillip@squashfs.org.uk> Move SQUASHFS_LEXX and SQUASHFS_SWAP_{SHORTS:INTS:LONGS} into squashfs_swap.h

Also move the definition of SQUASHFS_MEMCPY into squashfs_swap.h.

Now that the separate implementations in mksquashfs.h and read_fs.h
are identical, we can have one implementation and put it with the
other swap macros.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
028d9c5b159d1f5bfc0ce539fa0d806f24df09f3 18-Apr-2013 Phillip Lougher <phillip@squashfs.org.uk> Fix email address

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
609ef147b3688e8da073de1f4827125ee36061d3 25-Oct-2012 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs.h: #endif in wrong place

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
8db6d288d01afc5464965b0bde8106cd7360c543 16-Oct-2012 Phillip Lougher <phillip@squashfs.org.uk> Update copyright dates

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
23d8362e9e35a564182c713640fa04f31e9c08fb 14-Oct-2012 Phillip Lougher <phillip@squashfs.org.uk> actions: add move() action

Add move action allowing files and directories to be moved
and/or renamed.

Syntax -action "move(DEST)@<test_ops>"

Semantics are the same as the Unix mv command.

Rename SOURCE to DEST (if DEST doesn't exist) or
move SOURCE to DEST (if DEST exists and is a directory)

SOURCE is any file matching the test_ops expression.

For instance

-action "move(/source)@name(*.[ch])"

moves all *.[ch] files to the directory "/source"

More complex combinations of test_ops are supported by default.

For instance

Move all files smaller than 1024 bytes and not owned by "monitor" to
"/smallfiles", but ignore any files matching "*.[ch]"

-action "move(/smallfiles)@ !uid(monitor) && size(-1024) && !name(*.[ch])"

Multiple moves are evaluated "simultaneously", so the results of one
move do not affect other moves.

This is to prevent unanticipated side-effects.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
bf33836421997fde91f7a0c775de11f55c41cb01 22-Aug-2012 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: Use a linked list to store directory entries rather than an array

When scanning and storing the source directories, use a
linked list to store the directory entries rather than an array.

This change is to more efficiently handle the empty action and the
soon to be implemented move action. The empty action removes
directory entries and the move action may move directory entries
from one directory to another. These actions are rather messy to
implement with arrays, requiring any holes resulting from
deleting/moving entries to be filled by shuffling array entries down.
Linked lists allow entries to be deleted/moved much more easily,
which is not only a performance improvement but also results in more
aesthetically pleasing code.

Arrays were previously used only because the qsort function requires
arrays. The move to linked lists has improved the code, which is an
unexpected bonus on top of the previously mentioned reasons to move
to linked lists.

Qsort is still used in this commit, so code has been added to
convert the linked list into an array before calling qsort, and to
convert the sorted array back into a linked list. This is messy, but
only temporary, to ensure this commit doesn't break mksquashfs. The
next commit will remove qsort and instead add a merge sort which can
efficiently sort linked lists (O(n log n), which is the same
complexity as qsort).
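
For reference, a merge sort over a singly-linked list is short; the
following is a generic sketch (illustrative entry structure and
comparison, not the sort added in the next commit):

#include <string.h>
#include <stddef.h>

struct entry {
        char *name;
        struct entry *next;
};

/* Merge two already-sorted lists into one sorted list. */
static struct entry *merge(struct entry *a, struct entry *b)
{
        struct entry head, *tail = &head;

        while (a != NULL && b != NULL) {
                if (strcmp(a->name, b->name) <= 0) {
                        tail->next = a;
                        a = a->next;
                } else {
                        tail->next = b;
                        b = b->next;
                }
                tail = tail->next;
        }
        tail->next = a != NULL ? a : b;
        return head.next;
}

/* O(n log n) merge sort: split the list in half with the slow/fast
 * pointer technique, sort each half, then merge. */
static struct entry *merge_sort(struct entry *list)
{
        struct entry *slow, *fast, *second;

        if (list == NULL || list->next == NULL)
                return list;

        slow = list;
        fast = list->next;
        while (fast != NULL && fast->next != NULL) {
                slow = slow->next;
                fast = fast->next->next;
        }
        second = slow->next;
        slow->next = NULL;

        return merge(merge_sort(list), merge_sort(second));
}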

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
b38c172137d7eed57a0f8d1af69eb65aaecd8916 10-Feb-2012 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: add subpath to dir_info structure

Rather than the pathname, which includes the path specified on the
command line, add a subpath which is rooted at the root of the
squashfs filesystem being created.

i.e. given mksquashfs a/b file.sqsh

with file "c/d/e" inside directory "b"

the pathname of "d" will be "a/b/c/d"

this adds a subpath which will be "c/d"

This is necessary for the new pathname test for actions.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
494479fd097eaab956dda035c45efadc151cadc0 03-Feb-2012 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: optimise dir_ent structure by removing pathname member

All dir_ent structures stored both the name and pathname
of each file scanned from the source directories. This was
rather wasteful of memory, as the majority of pathnames could
be computed from the parent directory's pathname and the
dir_ent's name.

Exceptions to this are names which have had a _1, _2 etc. appended due
to name clashes, pseudo files, and cases where multiple directories
and files have been specified on the mksquashfs command line, in which
case the root directory will likely consist of entries each with a
different pathname. These cases have to be handled specially, either
with the new source_name member holding the original name or, in the
cases where there's a completely different pathname, by the use of the
nonstandard_pathname member.
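
A hedged sketch of how the full pathname can be rebuilt on demand by
walking the parent chain instead of being stored in every dir_ent (the
structures and member names here are simplified assumptions, not the
real dir_ent/dir_info layout):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dir_info;

struct dir_entry {
        char *name;
        struct dir_info *our_dir;       /* directory containing this entry */
};

struct dir_info {
        struct dir_entry *dir_ent;      /* this directory's entry in its parent */
};

/* Rebuild "parent-pathname/name" recursively; the caller frees the result. */
static char *entry_pathname(struct dir_entry *ent)
{
        char *parent, *path;

        if (ent->our_dir == NULL || ent->our_dir->dir_ent == NULL)
                return strdup(ent->name);       /* top-level entry */

        parent = entry_pathname(ent->our_dir->dir_ent);
        path = malloc(strlen(parent) + strlen(ent->name) + 2);
        sprintf(path, "%s/%s", parent, ent->name);
        free(parent);
        return path;
}

Name-clash suffixes and entries with a completely different pathname
would then be handled by checking source_name/nonstandard_pathname
first, as described above.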

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
ad0f92110583675fb115b0f0a5a3155e91aa903c 31-Dec-2011 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: Count the number of excluded files in each directory

This allows you to distinguish between directories that were originally
empty and directories that are empty due to files being excluded.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
0b5a1248da1478368e3ba669fea05c1ea5e927fd 25-Dec-2011 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: track directory depth

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
63f531f11ecd5ed3c3a1a9b354a8555d0e180bb3 10-Sep-2011 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: Make noD and noF inode specific

This allows them to be changed by actions on
an inode specific basis.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
9b6e3416e7e61df3cf1114a0d374198aab53b5c7 05-Sep-2011 Phillip Lougher <phillip@squashfs.org.uk> mksquashfs: Make no_fragments and always_use_fragments inode specific

This allows them to be changed by actions on
an inode specific basis.

Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
/external/squashfs-tools/squashfs-tools/mksquashfs.h
860c1f3d8aa4ba40d587382a91821bea03b023c5 11-Aug-2010 plougher <plougher> Make XATTR support conditionally compilable. This is to support platforms
and C libraries that lack xattr support.
/external/squashfs-tools/squashfs-tools/mksquashfs.h
8d4404d1f63a558f4903eb8c939bd4306a805d0f 21-Jul-2010 plougher <plougher> Update copyright
/external/squashfs-tools/squashfs-tools/mksquashfs.h
570f436c85a99435180a3ec9aeb1c94135ab0e77 17-Jun-2010 plougher <plougher> Add support for reading xattrs in append
/external/squashfs-tools/squashfs-tools/mksquashfs.h
e6e0e1bdf98ad6faa63527e5bbdd3bd5e7e97a9e 12-May-2010 plougher <plougher> Add support for xattrs. File system can store up to 2^48 compressed
bytes of xattr data, and the number of xattrs per inode is unlimited.
Each xattr value can be up to 4 Gbytes. Xattrs are supported for
files, directories, device nodes and symbolic links.
/external/squashfs-tools/squashfs-tools/mksquashfs.h
b3977eb8ba88ea2f0b3b0d37be293240b07584d2 02-May-2010 plougher <plougher> Hide how pseudo files are marked as such in inode structure.
/external/squashfs-tools/squashfs-tools/mksquashfs.h
b85e9ad343d3ff4e2c9aaafcd73931aca321de7c 02-May-2010 plougher <plougher> Add more information to pseudo_file flag, allowing both pseudo process files
and other pseudo files to be identified.
/external/squashfs-tools/squashfs-tools/mksquashfs.h
f5456bd7c43f1da21d6206d0137a551f79627687 01-May-2010 plougher <plougher> Move some definitions from sort.h to mksquashfs.h
/external/squashfs-tools/squashfs-tools/mksquashfs.h
5ec47ee1a582f32b039653ac6fec904afafd465a 18-Mar-2010 plougher <plougher> Fix swapping code following alignment fixes
/external/squashfs-tools/squashfs-tools/mksquashfs.h
20c76e1f4cc4dd9c992499c1dcbfad59714dad7e 21-Feb-2009 plougher <plougher> Add inswap macros and make macros conditionally compiled
/external/squashfs-tools/squashfs-tools/mksquashfs.h
9572718925d0bf4cd5a28c77af9bb639fc28cc96 26-Jan-2009 plougher <plougher> New macro
/external/squashfs-tools/squashfs-tools/mksquashfs.h
70a9f3f2260f37a9cf98d435fcd35e443dd13c09 26-Jan-2009 plougher <plougher> Remove old packed structure swapping routines. Replace with new stuff
to swap 4.0 filesystems on big-endian architectures
/external/squashfs-tools/squashfs-tools/mksquashfs.h
02bc3bcabf2b219f63961f07293b83629948f026 25-Feb-2007 plougher <plougher> updated mksquashfs to 3.2-r2
/external/squashfs-tools/squashfs-tools/mksquashfs.h
769592935b09063a5493ebeb530c599753ece5d2 24-Jan-2006 plougher <plougher> Updated release date and copyright information
/external/squashfs-tools/squashfs-tools/mksquashfs.h
c2759614989e149db2bad2e8845b4d9f9123cf89 23-Nov-2005 plougher <plougher> Updated email address and copyright dates.
/external/squashfs-tools/squashfs-tools/mksquashfs.h
1f413c84d736495fd61ff05ebe52c3a01a4d95c2 18-Nov-2005 plougher <plougher> Initial revision
/external/squashfs-tools/squashfs-tools/mksquashfs.h