-
Looking at compile options for various things. Stuff like "sanity check - detect buffer overflows. Used for debugging. Slight overhead. Disable for production". IMO it's that "disable for production" idiocy, clung to despite the minuscule perf. penalty, that's responsible for a lot of breakage.
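Concretely, here's a minimal sketch of the kind of check I mean, using glibc's _FORTIFY_SOURCE (a real compile option, though not necessarily the exact one from the docs I was reading):

/* sketch.c - a compile-time-selected sanity check. glibc's
 * _FORTIFY_SOURCE adds runtime bounds checks to string/memory
 * functions where the destination buffer size is known.
 *
 *   with checks:    gcc -O2 -D_FORTIFY_SOURCE=2 sketch.c
 *   without checks: gcc -O2 -U_FORTIFY_SOURCE sketch.c
 */
#include <string.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[8];

    if (argc > 1)
        /* The fortified strcpy knows sizeof(buf) and aborts with
         * "*** buffer overflow detected ***" if argv[1] doesn't fit;
         * the unfortified build silently smashes the stack. */
        strcpy(buf, argv[1]);
    printf("%s\n", argc > 1 ? buf : "(no argument)");
    return 0;
}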
Brings to mind again C.A.R. Hoare's remark about the sailing enthusiast who wears his lifejacket while training on dry land but takes it off as soon as he goes to sea. Real-world systems can receive data that doesn't conform to the programmer's assumptions and break things, as fuzzing demonstrates. Leave the sanity checks in FFS. A little humility on the part of some programmers also wouldn't go amiss.
-
Sanity checks used specifically for debugging really do have costs that cannot be tolerated in practice. For example, the kinds of memory guards used to debug buffer overruns in sanitizers routinely add around a 2X cost in memory use and access time - hardly "minuscule".
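For a concrete picture, here's a sketch with AddressSanitizer (a real compiler feature); the poisoned redzones and shadow-memory bookkeeping it adds are exactly where that roughly 2X cost comes from:

/* asan.c - a heavyweight debugging guard. AddressSanitizer puts
 * poisoned redzones around every allocation and checks each access
 * through shadow memory.
 *
 *   gcc -g -fsanitize=address asan.c && ./a.out
 */
#include <stdlib.h>

int main(void)
{
    int *a = malloc(8 * sizeof *a);

    a[8] = 42;   /* one element past the end: the sanitized build
                  * reports heap-buffer-overflow and aborts; an
                  * unsanitized build will most likely run to
                  * completion, silently corrupting the heap. */
    free(a);
    return 0;
}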
You can scoff at programmers being silly for worrying about a 2X performance cost when it means the difference between 10 ns and 20 ns - in other words, completely unnoticeable to the user compared to the time they spend reading/inputting stuff. But you'd be pretty pissed off if your phone's battery only lasted half as long because of it.
-
I was referring to sanity checks selected at compile time that by the program's own documentation add only a small runtime overhead. In other words, negligible. It depends on the sanity check, of course - I'm not saying leave _all_ debugging code in.
(Anecdote - I've recently started using pycontracts and leaving the checks enabled. In one case I found that checks in a heavily executed inner loop were causing a significant slowdown, so once I'd debugged that bit, I disabled checking on the inner loop and checked the final result instead.)
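The same pattern, sketched here in C with plain assert() rather than pycontracts (sum_nonneg() and its contract are made up for illustration):

/* contracts.c - hoist the sanity check out of the hot loop. */
#include <assert.h>
#include <stddef.h>

/* Sum values that the caller promises are all non-negative. */
long sum_nonneg(const int *v, size_t n)
{
    long total = 0;

    for (size_t i = 0; i < n; i++) {
        /* Per-element check: costs something on every iteration of
         * the hot path, so once this bit is debugged it gets compiled
         * out by building with -DNDEBUG. */
        assert(v[i] >= 0);
        total += v[i];
    }
    /* Check the final result instead: one branch per call, cheap
     * enough to leave enabled in production builds. */
    if (total < 0)
        return -1;  /* report the violated sanity check to the caller */
    return total;
}

int main(void)
{
    int data[] = { 1, 2, 3 };
    return sum_nonneg(data, 3) == 6 ? 0 : 1;
}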
Software is developed to a given quality level. The higher the level of quality or assurance demanded, the greater the effort and cost. Commercial development doesn't typically seek to remove all software defects - one sets a target number of bugs depending on the quality level required and the size of the program. Given that it's unrealistic to remove all defects without high cost, and that a real environment may subject a program to input it never received in test (especially when dealing with ratty data and poorly specified interfaces with third-party components), it doesn't seem sensible to me to remove "sanity checks" from production code, particularly when the runtime overhead is acceptable.