<html><head><title>The design of toybox</title></head>
<!--#include file="header.html" -->

<b><h2>Design goals</h2></b>

<p>Toybox should be simple, small, fast, and full featured.  Often these
things need to be balanced against each other.  In general, keeping the
code simple is the most important (and hardest) goal, and small is slightly
more important than fast. Features are the reason we write code in the first
place, but this has all been implemented before, so if we can't do a better
job why bother?  It should be possible to get 80% of the way to each goal
before they really start to fight.</p>

<p>Here they are in reverse order of importance:</p>

<b><h3>Features</h3></b>

<p>The <a href=roadmap.html>roadmap</a> has the list of features we're
trying to implement, and the reasons for them. After the 1.0 release
some of that material may get moved here.</p>

<p>Some things are simply outside the scope of the project: even though
posix defines commands for compiling and linking, we're not going to include
a compiler or linker (and support for a potentially infinite number of hardware
targets). And until somebody comes up with a ~30k ssh implementation, we're
going to point you at dropbear or polarssl.</p>

<p>Environmental dependencies are a type of complexity, so needing other
packages to build or run is a big downside.  For example, we don't use curses
when we can simply output ansi escape sequences and trust all terminal
programs written in the past 30 years to be able to support them. (A common
use case is to download a statically linked toybox binary to an arbitrary
Linux system, and use it in an otherwise unknown environment; being
self-contained helps support this.)</p>

<b><h3>Speed</h3></b>

<p>It's easy to say lots about optimizing for speed (which is why this section
is so long), but at the same time it's the optimization we care the least about.
The essence of speed is being as efficient as possible, which means doing as
little work as possible.  A design that's small and simple gets you 90% of the
way there, and most of the rest is either fine-tuning or more trouble than
it's worth (and often actually counterproductive).  Still, here's some
advice:</p>

<p>First, understand the darn problem you're trying to solve.  You'd think
I wouldn't have to say this, but I do.  Trying to find a faster sorting
algorithm is no substitute for figuring out a way to skip the sorting step
entirely.  The fastest way to do anything is not to have to do it at all,
and _all_ optimization boils down to avoiding unnecessary work.</p>

<p>Speed is easy to measure; there are dozens of profiling tools for Linux
(although personally I find the "time" command a good starting place).
Don't waste too much time trying to optimize something you can't measure,
and there's not much point speeding up things you don't spend much time doing
anyway.</p>

<p>Understand the difference between throughput and latency.  Faster
processors improve throughput, but don't always do much for latency.
After 30 years of Moore's Law, most of the remaining problems are latency,
not throughput.  (There are of course a few exceptions, like data compression
code, encryption, rsync...)  Worry about throughput inside long-running
loops, and worry about latency everywhere else.  (And don't worry too much
about avoiding system calls or function calls or anything else in the name
of speed unless you are in the middle of a tight loop you've already
proven isn't running fast enough.)</p>

<p>"Locality of reference" is generally nice, in all sorts of contexts.
It's obvious that waiting for disk access is 1000x slower than doing stuff in
RAM (and making the disk seek is 10x slower than sequential reads/writes),
but it's just as true that a loop which stays in L1 cache is many times faster
than a loop that has to wait for a DRAM fetch on each iteration.  Don't worry
about whether "&" is faster than "%" until your executable loop stays in L1
cache and the data access is fetching cache lines intelligently.  (To
understand DRAM, L1, and L2 cache, read Hannibal's marvelous ram guide at Ars
Technica:
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-2.html>part one</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part2-1.html>part two</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part3-1.html>part three</a>,
plus this
<a href=http://arstechnica.com/articles/paedia/cpu/caching.ars/1>article on
caching</a>, and this one on
<a href=http://arstechnica.com/articles/paedia/cpu/bandwidth-latency.ars>bandwidth
and latency</a>.
And there's <a href=http://arstechnica.com/paedia/index.html>more where that came from</a>.)
Code running entirely out of L1 cache can execute one instruction per clock
cycle, going to L2 cache costs a dozen or so clock cycles, and waiting for a
worst case DRAM fetch (round trip latency with a bank switch) can cost
thousands of clock cycles.  (Historically, this disparity has gotten worse
with time, just like the speed hit for swapping to disk.  These days, a _big_
L1 cache is 128k and a big L2 cache is a couple of megabytes.  A cheap
low-power embedded processor may have 8k of L1 cache and no L2.)</p>

<p>Learn how virtual memory and memory management units work.  Don't touch
memory you don't have to.  Even just reading memory evicts stuff from L1 and L2
cache, which may have to be read back in later.  Writing memory can force the
operating system to break copy-on-write, which allocates more memory.  (The
memory returned by malloc() is only a virtual allocation, filled with lots of
copy-on-write mappings of the zero page.  Actual physical pages get allocated
when the copy-on-write gets broken by writing to the virtual page.  This
is why checking the return value of malloc() isn't very useful anymore: it
only detects running out of virtual memory, not physical memory.  Unless
you're using a NOMMU system, where all bets are off.)</p>

<p>Don't think that just because you don't have a swap file the system can't
start swap thrashing: any file backed page (a la mmap) can be evicted, and
there's a reason all running programs require an executable file (they're
mmaped, and can be flushed back to disk when memory is short).  And long
before that, disk cache gets reclaimed and has to be read back in.  When the
operating system really can't free up any more pages it triggers the out of
memory killer to free up pages by killing processes (the alternative is the
entire OS freezing solid).  Modern operating systems seldom run out of
memory gracefully.</p>

<p>Also, it's better to be simple than clever.  Many people think that mmap()
is faster than read() because it avoids a copy, but twiddling with the memory
management is itself slow, and can cause unnecessary CPU cache flushes.  And
if a read faults in dozens of pages sequentially, but your mmap iterates
backwards through a file (causing lots of seeks, each of which your program
blocks waiting for), the read can be many times faster.  On the other hand, the
mmap can sometimes use less memory, since the memory provided by mmap
comes from the page cache (allocated anyway), and it can be faster if you're
doing a lot of different updates to the same area.  The moral?  Measure, then
try to speed things up, and measure again to confirm it actually _did_ speed
things up rather than made them worse.  (And understanding what's really going
on underneath is a big help to making it happen faster.)</p>

<p>In general, being simple is better than being clever.  Optimization
strategies change with time.  For example, decades ago precalculating a table
of results (for things like isdigit() or cosine(int degrees)) was clearly
faster because processors were so slow.  Then processors got faster and grew
math coprocessors, and calculating the value each time became faster than
the table lookup (because the calculation fit in L1 cache but the lookup
had to go out to DRAM).  Then cache sizes got bigger (the Pentium M has
2 megabytes of L2 cache) and the table fit in cache, so the table became
fast again...  Predicting how changes in hardware will affect your algorithm
is difficult, and using ten year old optimization advice can produce
laughably bad results.  But being simple and efficient is always going to
give at least a reasonable result.</p>

<p>The famous quote from Ken Thompson, "When in doubt, use brute force",
applies to toybox.  Do the simple thing first, do as little of it as possible,
and make sure it's right.  You can always speed it up later.</p>

<b><h3>Size</h3></b>
<p>Again, simple gives you most of this.  An algorithm that does less work
is generally smaller.  Understand the problem, treat size as a cost, and
get a good bang for the byte.</p>

<p>Understand the difference between binary size, heap size, and stack size.
Your binary is the executable file on disk, your heap is where malloc() memory
lives, and your stack is where local variables (and function call return
addresses) live.  Optimizing for binary size is generally good: executing
fewer instructions makes your program run faster (and fits more of it in
cache).  On embedded systems, binary size is especially precious because
flash is expensive (and its successor, MRAM, even more so).  Small stack size
is important for nommu systems because they have to preallocate their stack
and can't make it bigger via page fault.  And everybody likes a small heap.</p>

<p>Measure the right things.  Especially with modern optimizers, expecting
something to be smaller is no guarantee it will be after the compiler's done
with it.  Binary size isn't the most accurate indicator of the impact of a
given change, because lots of things get combined and rounded during
compilation and linking.  Matt Mackall's bloat-o-meter is a python script
which compares two versions of a program, and shows size changes in each
symbol (using the "nm" command behind the scenes).  To use this, run
"make baseline" to build a baseline version to compare against, and
then "make bloatometer" to compare that baseline version against the current
code.</p>

<p>Avoid special cases.  Whenever you see similar chunks of code in more than
one place, it might be possible to combine them and have the users call shared
code. (This is the most commonly cited trick, which doesn't make it easy. If
seeing two lines of code do the same thing makes you slightly uncomfortable,
you've got the right mindset.)</p>

<p>Some specific advice: Using a char in place of an int when doing math
produces significantly larger code on some platforms (notably arm),
because the compiler has to emit code each time to convert it to int, do the
math, and convert it back.  Bitfields have this problem on most platforms.
Because of this, using char to index a for() loop is probably not a net win,
although using char (or a bitfield) to store a value in a structure that's
repeated hundreds of times can be a good tradeoff of binary size for heap
space.</p>

<b><h3>Simple</h3></b>

<p>Complexity is a cost, just like code size or runtime speed. Treat it as
a cost, and spend your complexity budget wisely. (Sometimes this means you
can't afford a feature because it complicates the code too much to be
worth it.)</p>

<p>Simplicity has lots of benefits.  Simple code is easy to maintain, easy to
port to new processors, easy to audit for security holes, and easy to
understand.</p>

<p>Simplicity itself can have subtle non-obvious aspects requiring a tradeoff
between one kind of simplicity and another: simple for the computer to
execute and simple for a human reader to understand aren't always the
same thing. A compact and clever algorithm that does very little work may
not be as easy to explain or understand as a larger more explicit version
requiring more code, memory, and CPU time. When balancing these, err on the
side of doing less work, but add comments describing how you
could be more explicit.</p>

<p>In general, comments are not a substitute for good code (or well chosen
variable or function names). Commenting "x += y;" with "/* add y to x */"
can actually detract from the program's readability. If you need to describe
what the code is doing (rather than _why_ it's doing it), that means the
code itself isn't very clear.</p>

<p>Prioritizing simplicity tends to serve our other goals: simplifying code
generally reduces its size (both in terms of binary size and runtime memory
usage), and avoiding unnecessary work makes code run faster. Smaller code
also tends to run faster on modern hardware due to CPU caching: fitting your
code into L1 cache is great, and staying in L2 cache is still pretty good.</p>

<p><a href=http://www.joelonsoftware.com/articles/fog0000000069.html>Joel
Spolsky argues against throwing code out and starting over</a>, and he has
good points: an existing debugged codebase contains a huge amount of baked
in knowledge about strange real-world use cases that the designers didn't
know about until users hit the bugs, and most of this knowledge is never
explicitly stated anywhere except in the source code.</p>

<p>That said, the Mythical Man-Month's "build one to throw away" advice points
out that until you've solved the problem you don't properly understand it, and
about the time you finish your first version is when you've finally figured
out what you _should_ have done.  (The corollary is that if you build one
expecting to throw it away, you'll actually wind up throwing away two.  You
don't understand the problem until you _have_ solved it.)</p>

<p>Joel is talking about what closed source software can afford to do: Code
that works and has been paid for is a corporate asset not lightly abandoned.
Open source software can afford to re-implement code that works, over and
over from scratch, for incremental gains.  Before toybox, the unix command line
had already been reimplemented from scratch several times in a row (the
original AT&amp;T Unix command line in assembly and then in C, the BSD
versions, the GNU tools, BusyBox...) but maybe toybox can do a better job. :)</p>

<p>P.S.  How could I resist linking to an article about
<a href=http://blog.outer-court.com/archive/2005-08-24-n14.html>why
programmers should strive to be lazy and dumb</a>?</p>

<b><h2>Portability issues</h2></b>

<b><h3>Platforms</h3></b>
<p>Toybox should run on Android (all commands with musl-libc, as large a subset
as practical with bionic), and every other hardware platform Linux runs on.
Other posix/susv4 environments (perhaps MacOS X or newlib+libgloss) are vaguely
interesting but only if they're easy to support; I'm not going to spend much
effort on them.</p>

<p>I don't do windows.</p>

<b><h3>32/64 bit</h3></b>
<p>Toybox should work on both 32 bit and 64 bit systems.  By the end of 2008
64 bit hardware will be the new desktop standard, but 32 bit hardware will
continue to be important in embedded devices for years to come.</p>

<p>Toybox relies on the fact that on any Unix-like platform, pointer and long
are always the same size (on both 32 and 64 bit).  Pointer and int are _not_
the same size on 64 bit systems, but pointer and long are.</p>

<p>This is guaranteed by the LP64 memory model, a Unix standard (which Linux
and MacOS X both implement, and which modern 64 bit processors such as
x86-64 were <a href=http://www.pagetable.com/?p=6>designed for</a>).  See
<a href=http://www.unix.org/whitepapers/64bit.html>the LP64 standard</a> and
<a href=http://www.unix.org/version2/whatsnew/lp64_wp.html>the LP64
rationale</a> for details.</p>

<p>Note that Windows doesn't work like this, and I don't care.
<a href=http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx>The
insane legacy reasons why this is broken on Windows are explained here.</a></p>

<b><h3>Signedness of char</h3></b>
<p>On platforms like x86, variables of type char default to signed.  On
platforms like arm, char defaults to unsigned.  This difference can lead to
subtle portability bugs, and to avoid them we specify which one we want by
feeding the compiler -funsigned-char.</p>

<p>The reason to pick "unsigned" is that this way we're 8-bit clean by
default.</p>

<b><h3>Error messages and internationalization</h3></b>
<p>Error messages are extremely terse not just to save bytes, but because we
don't use any sort of _("string") translation infrastructure.</p>

<p>Thus "bad -A '%c'" is
preferable to "Unrecognized address base '%c'", because a non-english speaker
can see that -A was the problem, and with a ~20 word english vocabulary is
more likely to know (or guess) "bad" than the longer message.</p>

<p>The help text might someday have translated versions, and strerror()
messages produced by perror_exit() and friends can be expected to be
localized by libc. Our error functions also prepend the command name,
which non-english speakers can presumably recognize already.</p>

<p>An eventual goal is <a href=http://yarchive.net/comp/linux/utf8.html>UTF-8</a>
support, although it isn't a priority for the
first pass of each command. (All commands should at least be 8-bit clean.)</p>

<p>Locale support isn't currently a goal; that's a presentation layer issue,
X11 or Dalvik's problem.</p>

<a name="codestyle" />
<h2>Coding style</h2>

<p>The real coding style holy wars are over things that don't matter
(whitespace, indentation, curly bracket placement...) and thus have no
obviously correct answer. As in academia, "the fighting is so vicious because
the stakes are so small". That said, being consistent makes the code readable,
so here's how to make toybox code look like other toybox code.</p>

<p>Toybox source uses two spaces per indentation level, and wraps at 80
columns. (Indentation of continuation lines is awkward no matter what
you do; sometimes two spaces looks better, sometimes indenting to the
contents of the opening parenthesis looks better.)</p>

<p>There's a space after C flow control statements that look like functions, so
"if (blah)" instead of "if(blah)". (Note that sizeof is actually an
operator, so we don't give it a space for the same reason ++ doesn't get
one. Yeah, it doesn't need the parentheses either, but it gets them.
These rules are mostly to make the code look consistent, and thus easier
to read.) We also put a space around assignment operators (on both sides),
so "int x = 0;".</p>

<p>Blank lines (vertical whitespace) go between thoughts: "We were doing that,
now we're doing this." (Not a hard and fast rule about _where_ they go,
but there should be some.)</p>

<p>Variable declarations go at the start of blocks, with a blank line between
them and other code. Yes, c99 allows you to put them anywhere, but they're
harder to find if you do that. If there's a large enough distance between
the declaration and the code using it to make you uncomfortable, maybe the
function's too big, or maybe there's an if statement or something you can
use as an excuse to start a new closer block.</p>

<p>If statements with a single line body go on the same line if the result
fits in 80 columns, on a second line if it doesn't. We usually only use
curly brackets if we need to, either because the body is multiple lines or
because we need to distinguish which if an else binds to. Curly brackets go
on the same line as the test/loop statement. The exception to both cases is
if the test part of an if statement is long enough to split into multiple
lines, then we put the curly bracket on its own line afterwards (so it doesn't
get lost in the multiple line variably indented mess), and we put it there
even if it's only grouping one line (because the indentation level is not
providing clear information in that case).</p>

<p>I.E.</p>

<blockquote>
<pre>
if (thingy) thingy;
else thingy;

if (thingy) {
  thingy;
  thingy;
} else thingy;

if (blah blah blah...
    && blah blah blah)
{
  thingy;
}
</pre></blockquote>

<p>Gotos are allowed for error handling, and for breaking out of
nested loops. In general, a goto should only jump forward (not back), and
should either jump to the end of an outer loop, or to error handling code
at the end of the function. Goto labels are never indented: they override the
block structure of the file. Putting them at the left edge makes them easy
to spot as overrides to the normal flow of control, which they are.</p>

<p>When there's a shorter way to say something, we tend to do that for
consistency. For example, we tend to say "*blah" instead of "blah[0]" unless
we're referring to more than one element of blah. Similarly, NULL is
really just 0 (and C will automatically typecast 0 to anything, except in
varargs), "if (function() != NULL)" is the same as "if (function())",
"x = (blah == NULL);" is "x = !blah;", and so on.</p>

<p>The goal is to be
concise, not cryptic: if you're worried about the code being hard to
understand, splitting it into multiple steps on multiple lines is
better than a NOP operation like "!= NULL". A common sign of trying too
hard is nesting ? : three levels deep; sometimes if/else and a temporary
variable is just plain easier to read. If you think you need a comment,
you may be right.</p>

<p>Comments are nice, but don't overdo it. Comments should explain _why_,
not how. If the code doesn't make the how part obvious, that's a problem with
the code. Sometimes choosing a better variable name is more revealing than a
comment. Comments on their own line are better than comments on the end of
lines, and they usually have a blank line before them. Most of toybox's
comments are c99 style // single line comments, even when there's more than
one of them. The /* multiline */ style is used at the start for the metadata,
but not so much in the code itself. They don't nest cleanly, are easy to leave
accidentally unterminated, need extra nonfunctional * to look right, and if
you need _that_ much explanation maybe what you really need is a URL citation
linking to a standards document? Long comments can fall out of sync with what
the code is doing. Comments do not get regression tested. There's no such
thing as self-documenting code (if nothing else, code with _no_ comments
is a bit unfriendly to new readers), but "chocolate sauce isn't the answer
to bad cooking" either. Don't use comments as a crutch to explain unclear
code if the code can be fixed.</p>

<!--#include file="footer.html" -->