Offside!


A friend from college found my blog, and to my delight made some suggestions. I had to promise, though, to include a diatribe against “offside-rule” languages, scripting, and automatic memory allocation. I may never again get a job writing Python or Go applications, but here I go…

Offside-rule languages, such as Python and F#, use whitespace indentation to delimit blocks of statements. It’s a nice, clean syntax and a maintenance nightmare. I would have suffered less in my life without the hours spent deciphering the changes in logic caused by cutting and pasting code between different indentation levels. It’s especially bad when you’re trying to find the change in logic that someone else introduced with an indentation error.

Taking it to an extreme, the humorous Edwin Brady and Chris Morris at the University of Durham created the language Whitespace (https://en.wikipedia.org/wiki/Whitespace_(programming_language)) (the Wikipedia page is prettier than the official page, which seems to be available only via the Wayback Machine (http://archive.org/web/)).

For full disclosure, I do use Python when I’m playing around with Project Euler (https://projecteuler.net/). It is the ideal language for quick number theory problems. In a professional context, though, Python has proven to be a nightmare, starting with the interpreter crashing with segmentation faults on what I thought were simple constructs, and continuing with the lack of asynchronous and multi-threaded features (try implementing an interactive read with a timeout, or fetching both the standard and error output from a child process). Complete the nightmare with the lack of compatibility between Python releases.
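
For the curious, this is the kind of timed interactive read I mean: a minimal sketch using POSIX select(), assuming a Unix-like system, with error handling trimmed.

#include <sys/select.h>
#include <unistd.h>

// Read from stdin, giving up after the given number of seconds.
// Returns bytes read, 0 on timeout, -1 on error.
ssize_t readWithTimeout(char *buf, size_t len, int seconds)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;

    int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
    if (ready < 0)  return -1;    // error in select()
    if (ready == 0) return 0;     // timed out, nothing typed
    return read(STDIN_FILENO, buf, len);
}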

How To Get a Legacy Project Under Test

You’re smart, so I’ll just give the outline and let you fill in the blanks:

0.  Given: you have a project of 300K to millions of lines of code, largely without tests.

1.  Look at your source control and find the areas undergoing the most change. Use StatSVN’s heatmap with Subversion. With Perforce, just look at the revision numbers of the files to see which ones change the most. With git, use gource or StatGit. The areas undergoing the most change are the areas you want to refactor first.

2.  In your chosen area of code, look at the dependencies. Go to the leaves of the dependency tree of just that section of code. Create mock function replacements for the system functions and other external APIs, such as databases and file I/O, that the leaf routines use. (Minimal sketches of this and the next two steps follow the list.)

3.  Even at this level, you’ll find circular dependencies and compilation units dependent on dozens of header files and libraries. Create dummy replacements for the headers that aren’t essential to your test. Use macro definitions to replace functions — use every trick in the book to get just what you want under test. Notice that so far you haven’t actually changed any of the code you’re supposed to fix. You may spend a week or weeks getting to this point, depending on the spaghetti factor of the code. Compromise a little — for instance, don’t worry about how to simulate an out-of-memory condition at first. Hopefully you’ll reach a critical mass where it gets easier and easier to write tests against your code base.

4.  Now you get to refactor. Follow the Law of Demeter. Avoid “train wrecks”: expressions that use more than one dot or arrow to get at something. Don’t pass a whole object when the routine needs only one member. This step will change the interfaces of your leaf routines, so you’ll need to go up one level in the dependency tree and refactor that — rinse and repeat from step 3.

5.  At each step in the process, keep adding to your testing infrastructure. Use coverage analysis to work toward 100% s-path coverage (not just lines or functions). Accept that you’re not going to get everything at first.
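
Here are minimal sketches of steps 2 through 4. Every name in them is hypothetical; they illustrate the tricks rather than prescribe them.

For step 2, one way to mock an external API is a link seam: the leaf routine calls the API through its ordinary declaration, and the test build links a fake implementation in place of the real library.

// db_api.h -- the external API the leaf routine depends on (hypothetical).
int db_fetch_count(const char *table);

// leaf.cpp -- the leaf routine under test calls it normally.
#include "db_api.h"
bool tableIsEmpty(const char *table)
{
    return db_fetch_count(table) == 0;
}

// mock_db_api.cpp -- linked into the test build instead of the real library.
#include "db_api.h"
static int s_fakeCount = 0;
void mockSetFetchCount(int n) { s_fakeCount = n; }
int db_fetch_count(const char *) { return s_fakeCount; }

For step 3, the macro trick can look like this: a header that only the test build force-includes (for example, g++ -include test_seams.h) reroutes a system function to a stub without touching the code under test.

// test_seams.h -- force-included only in the test build.
#pragma once
#include <cstdio>                 // pull in the real declarations first
extern "C" FILE *test_fopen(const char *path, const char *mode);
#define fopen test_fopen          // later calls to fopen() now hit the stub

// test_stubs.cpp -- compiled without the seam header, so fopen here is real.
#include <cstdio>
static bool s_failNextOpen = false;
void stubFailNextOpen() { s_failNextOpen = true; }
extern "C" FILE *test_fopen(const char *path, const char *mode)
{
    if (s_failNextOpen) { s_failNextOpen = false; return NULL; }
    return fopen(path, mode);     // fall through to the real function
}

For step 4, narrowing an interface looks like this: the routine stops reaching through the whole object and takes just the member it needs.

#include <iostream>
#include <string>

struct Address  { std::string zipCode; };
struct Customer { Address address; };
struct Order    { Customer customer; };

// Before: bound to all of Order; a test must build the whole object graph.
void printShippingLabel(const Order &order)
{
    std::cout << order.customer.address.zipCode << std::endl;
}

// After: bound only to Address; a test just builds an Address.
void printShippingLabel(const Address &address)
{
    std::cout << address.zipCode << std::endl;
}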

What does this buy you? You can now add features and modify the code with impunity because you have tests for that code. You’ll find that the rate of change due to bug fixes drops away, replaced by changes for new salable features.

On the couple of projects where I applied this methodology, the customer escalation rate due to bugs went from thousands a month to zero. I have never seen a bug submitted against code covered with unit tests.

The Encryption Wars Aren’t Over Yet

Remember the Clipper Chip? It was the Al Gore-approved encryption chip that the government wanted to insert into every digital communications device, allowing the government, with a court order, to eavesdrop on the conversations of criminals and everyone else. The Clipper Chip finally faded away for lack of public adoption and because of the rise of other types of encryption not under government control. We never did resolve the debate over whether the government should even be trying to do that sort of eavesdropping.

[Image: Sink Clipper campaign]

Now the government is back at it again. The Burr-Feinstein Bill (https://assets.documentcloud.org/documents/2797124/Burr-Feinstein-Encryption-Bill-Discussion-Draft.pdf) proposes to criminalize people like me who refuse to aid the government in hacking into a phone.  Australia, the United Kingdom, Canada, and other countries already have similar laws.  The UK has already sentenced several people to prison for not revealing encryption keys.

Fortunately, at the moment, the information locked inside my own head is not accessible to the government or to organized criminals. Once I write some notes down on my tablet, though, even though my tablet is encrypted, the government can force someone else to hack it. If my own government can do that, then presumably organized crime and foreign governments can too. In the aforementioned countries they don’t even need to hack: they will send me to jail if I fail to reveal my encryption keys.

Now, as I am neither a dissident nor a cybercriminal, I don’t really have much to fear from the government — but I do buy things online, and I do some banking online. I also sometimes negotiate contracts with the government. In other words, I have lots of legitimate information I want to keep private, even from the government — and that’s on a good day. Imagine the problems I would have if I were a dissident (such as a Republican GM car dealer).

If the government actually acted responsibly all of the time, perhaps we wouldn’t have much to worry about. We live in a harsher world than that, though. A small minority of officials are corrupt, and cybercriminals, terrorist organizations, and foreign agencies will attempt to exploit the same loopholes our government has coerced into existence.

The U.S. position will have consequences. Nations that value privacy and the rights of their citizens will refuse to do cyber business with U.S. companies, and the beacon of democracy will shine from some other shore. Our economy will begin to revert to pre-internet days as people lose trust in the net. If the government can break into your phone, then a well-heeled terrorist organization can break into a power plant operator’s phone, steal his keys, and gain control of the power plant. That’s just one example.

Compromise is not possible. The problem is too big. If you make a phone with a backdoor, then all phones of the same model and version are equally vulnerable. If you can break into one, you can break into them all. No one will buy a U.S.-designed phone.

Given that anyone with a little sense of operational security is not going to put anything more sensitive than a grocery list on a phone, any claim that a phone might have value in an investigation is just a fishing expedition. Even if the phone belongs to a terrorist or a child pornographer, we must treat it as a brick. Breaking into a phone renders at least that version of the phone vulnerable for everybody with the same type of phone.

Everyone should e-mail Senators Feinstein and Burr and tell them that the new encryption laws compromise our freedoms. This is so serious that the law places us on the edge of a new Dark Age. I mourn that the United States is the agent of this dimming of the light of liberty.

Everyone needs their own encryption key. Don’t depend on the one in your phone or tablet. Comodo.com offers free e-mail certificates. Of course, Comodo generates the private key, so if the government coerces them into saving the key, it’s actually worse than having no key — but it is a start. Just get started on your own encryption and signing. If everyone digitally signed their e-mail, it would be easy to filter spam.

Graduate to the next level: generate your own PGP key and upload it to one of the public key servers. You’ll need an e-mail client that understands PGP keys, but you’ll have end-to-end security. I use Mynigma on a Mac; get it from the Apple App Store. Get started and learn about PGP keys before your government makes it illegal.

I wanted this to be a coding blog, but this encryption issue is one of the most important technical issues of our entire civilization.  As a coder, you can do your utmost to

  • Write secure code. Know the CERT coding guidelines. You can’t add security after the fact; firewalls, WAFs, and the like are just security theater.
  • Always use a secure protocol on external interfaces.
  • Sign your code.
  • Sign your email.
  • Encrypt your storage.

Everyone tests. Test everything. Use unit tests.

Over the past 40 years I’ve noted that every project with a large QA staff was a project in trouble. Developers wrote code and tossed it over the fence for QA to test. QA would find thousands of defects and the developers would fix hundreds. We shipped with hundreds of known defects. After a few years the bug database would have tens of thousands of open bugs — which no one had time to go over to determine if they were still relevant. The bug database was a graveyard.

Fortunately I’ve had the joy and privilege of working on a few projects where everyone tests. I think those projects saved my sanity. At least I think I’m sane. In those test-oriented projects we still had a small QA department, but largely they checked that we did the tests, and sometimes they built the infrastructure for the rest of us to use in writing our own tests. Probably even more importantly, the QA people were treated as first-class engineers, reinforced by every engineer periodically taking a turn in QA. In those test-oriented projects we detected even more bugs than on the big-QA-department projects, but shipped with only a handful of really minor bugs. By minor, I mean the type where someone objected to a blue-colored button, but we didn’t want to spend the effort to make the button color configurable. Because the developers detected the bugs as they wrote the code, they fixed the bugs as they occurred. Instead of tens of thousands of open bugs, we had a half dozen open bugs.

Testing as close as possible to the writing of the code, using the tests to help you write the code, is much more effective than the classic throw-it-over-the-fence-to-QA style. On projects with hundreds of thousands of lines of code, the large QA departments generally run a backlog of tens of thousands of defects, while test-driven projects with the same size code base run a backlog of a couple of bugs.

This observation deserves its own rule of thumb:

A project with a large QA department is a project in trouble.

Almost everyone has heard of test-driven development, but few actually understand unit tests. A unit test isn’t just a test of a small section of code — you use a unit test while you write the code. As such, it won’t have access to the files, network, or databases of the production or test systems. Your unit tests probably won’t even have access to many of the libraries that other developers are writing concurrently with your module. A classic unit test runs just after you compile and link, with just what you have on your development machine.

This means that if your module refers to a file or database or anything else that isn’t in your development environment, you’ll need to provide a substitute.
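
For example (the names here are hypothetical), hide the external resource behind a small interface and hand the code under test an in-memory fake:

#include <map>
#include <string>

// The module under test talks to storage only through this interface.
class KeyValueStore {
public:
    virtual ~KeyValueStore() {}
    virtual std::string get(const std::string &key) const = 0;
};

// The unit test substitutes an in-memory fake; no real database required.
class FakeKeyValueStore : public KeyValueStore {
public:
    void put(const std::string &key, const std::string &value) { data_[key] = value; }

    std::string get(const std::string &key) const
    {
        std::map<std::string, std::string>::const_iterator it = data_.find(key);
        return it == data_.end() ? std::string() : it->second;
    }

private:
    std::map<std::string, std::string> data_;
};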

If you’re writing code from scratch, getting everything under test is easy. Just obey the Law of Demeter (http://www.ccs.neu.edu/home/lieber/LoD.html). The Law of Demeter, aka the single-dot rule, aka the Principle of Least Knowledge, helps ensure that the module you’re writing behaves well in changing contexts. You can pull it out of its current context and use it elsewhere. Just as important, it doesn’t matter what the rest of the application is doing (unless the application stomps on your module’s memory); your module will still behave correctly.

The Law of Demeter says that a method or function of a class may refer only to variables and functions defined within the function, defined in its class or superclass, or passed in via its argument list. This gives you a built-in advantage in managing your dependencies. Everything your function needs can be replaced, so writing unit tests becomes easy.

Take a look at these example classes:

#include <iostream>

class Aardvark;   // defined elsewhere
class Animals;    // defined elsewhere; owns an Aardvark

class ExampleParent {
protected:
    void methodFromParentClass(const char *arg);
};


class ExampleClass : public ExampleParent {
public:
    void method(const char *arg, const Animals &animal);

    std::ostream& method(std::ostream& outy, const char *arg, unsigned int legs);
};

Now take a look at this code that violates the Law of Demeter:

void ExampleClass::method(const char *arg, const Animals &animal)
{
    unsigned int locallyOwned = 2;

    std::cout << arg << std::endl;          // bad: std::cout was never handed to us

    if (animal.anAardvark().legs() != 4)    // bad: two dots, a train wreck
        methodFromParentClass(arg);         // okay

    // Another attempt to do the same thing,
    // but the violation of data isolation is still present
    const Aardvark &aardvark = animal.anAardvark();
    if (aardvark.legs() != 4)               // still bad
        methodFromParentClass(arg);         // okay

    locallyOwned += 42;                     // okay

    // ...
}

The primary problem is that if Animals is an object that refers to external resources, your mock object to replace it in a unit test must also replicate the Aardvark class. More importantly, in program-maintenance terms, you’ve created a dependency binding on Animals when all you need is Aardvark. If Animals changes, you may need to modify this routine, even though Aardvark is unchanged. There is a reason why a reference with more than one dot or arrow is called a train wreck.

Of course, for every rule there are exceptions. Robert “Uncle Bob” C. Martin, in Clean Code (http://www.goodreads.com/book/show/3735293-clean-code), differentiates between plain old structs and objects. Structs may contain other structs, so it seems an unnecessary complication to try to avoid more than one dot. I can see the point, but when I’m reading code, unless I have the header handy, I don’t necessarily know whether I’m looking at a reference to a struct or a class. So I compromise. If a struct is always going to be used in a C-like primitive fashion, I declare it as a struct. If I add a function or constructor, I change the declaration to a class and add the appropriate public, private, and protected access attributes.

It’s been too long since my last post. In lieu of a coding joke, I’m including a link to my own C++ Unit Extra Lite Testing framework: https://github.com/gsdayton98/CppUnitXLite.git.

To get it, do a

      git clone https://github.com/gsdayton98/CppUnitXLite.git

For a simple test program, just include CppUnitXLite/CppUnitLite.cpp (that’s right, include the C++ source file, because it contains the main program test driver). Read the comments in the header file for suggestions on its use. Notice there is no library, no Google “pump” to generate source code, and no Python or Perl needed. Have fun, and please leave me some comments and suggestions. If you don’t like the framework, tell me; I might learn something from you. Besides, I’m a big boy, I can take a little criticism.
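
A test program has roughly this shape. I’m guessing at the TEST/CHECK macro names from the framework’s CppUnitLite ancestry, so treat this as a sketch; the comments in the header file are the authority.

#include <CppUnitXLite/CppUnitLite.cpp>  // the .cpp on purpose: it supplies the test driver

TEST(Arithmetic, additionStillWorks)
{
  CHECK(2 + 2 == 4);
}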

Woodpecker Apocalypse

Weinberg’s woodpecker is here, as in the woodpecker in “If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization” (Gerald M. Weinberg, The Psychology of Computer Programming, 1971).

We’ve put our finances, health information, and private thoughts on-line, entrusting them to software written in ignorance.  Hackers exploit the flaws in that software to get your bank accounts, credit cards, and other personal information.  We protected it all behind passwords with arbitrary strength rules that we humans must remember.  Humans write the software that accepts your passwords and other input.  Now comes the woodpecker part.

Being trusting souls, we’ve written our applications not to check their inputs, depending on the user not to enter too much. Being human, we habitually write programs with buffer overruns, accept tainted input, and divide by zero. We write crappy software. Heartbleed and Shellshock and a myriad of other exploits use defects in software to work their evil.

Security “experts”, who make their money by making you feel insecure, tell you it’s impossible to write perfect software. Balderdash. You can write small units and exercise every pathway in those units. You have a computer, after all: use it to elaborate the code pathways, and then use it to generate test cases. It is possible to exercise every path over small units. Making the small units robust makes it easier to isolate what’s going wrong in the larger systems. If you have two units that are completely tested, so you know they behave reasonably no matter what garbage is thrown at them, then testing the combination is sometimes redundant. Testing software doesn’t need to be combinatorially explosive. If you test every path in module A and every path in module B, you don’t need to test the combination — except when the modules share resources (the evil of promiscuous sharing is another topic). Besides, even if we can’t write perfect software, that doesn’t mean we shouldn’t try.
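
To make “every pathway” concrete, here is a hypothetical unit with two independent branches, which gives four paths in all, and a test that walks each one:

#include <cassert>

// A small unit with two independent branches: four paths in all.
int clampAndScale(int value, bool doubleIt)
{
    if (value < 0)     // branch 1: clamp negatives to zero
        value = 0;
    if (doubleIt)      // branch 2: optionally double
        value *= 2;
    return value;
}

int main()
{
    assert(clampAndScale(-5, false) == 0);  // path: clamp, no doubling
    assert(clampAndScale(-5, true)  == 0);  // path: clamp, doubling
    assert(clampAndScale( 3, false) == 3);  // path: no clamp, no doubling
    assert(clampAndScale( 3, true)  == 6);  // path: no clamp, doubling
    return 0;
}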

Barriers to quality are a matter of imagination rather than fact. How many times have you heard a manager say that spending the time or buying the tool was too much, even though we’ve known since the 1970s that bugs caught at the developer’s desk cost ten times less than bugs caught later? The interest on the technical debt is usury. This suggests we can spend a lot more money up front on quality processes, avoid the technical debt, and come out money ahead in the long run. Bern and Schieber did their study in the 1970s. I found this related NIST report from 2000:

NIST Report

The Prescription, The Program, The Seven Steps

Programmers cherish their step zeroes. In this case, step zero is just making the decision to do something about quality. You’re reading this, so I hope you’ve already made the decision. Just in case, though, let’s list the benefits of a quality process:

  • Avoid the rework of bugs. A bug means you need to diagnose, test, reverse-engineer, and go over old code. A bug is a manifestation of technical debt. If you don’t invest in writing and performing the tests up front, you are incurring technical debt at 1000% interest.
  • Provide guarantees of security to your customers.  Maybe you can’t stop all security threats, but at least you can tell your customers what you did to prevent the known ones.
  • Writing code with tests is faster than writing code without. Beware of studies that largely use college-student programmers, but studies show that programmers using test-driven development are 15% more productive. That doesn’t even count the time the organization isn’t spending on bugs.
  • Avoid organizational death. I use a rule of thumb about the amount of bug fixing an organization does. I call it the “Rule of the Graveyard Spiral”: in my experience, any organization spending more than half of its time fixing bugs has less than two years to live, which is about the time it takes for the customers or sponsoring management to lose patience and cut off the organization.

So let’s assume you have made the decision to get with the program and do something about quality. It’s not complicated. A relatively simple series of steps instills quality and forestalls installing technical debt in your program. Here’s a simple list:

  1. Capture requirements with tests. Write a little documentation.
  2. Everyone tests.  Test everything.  Use unit tests.
  3. Use coverage analysis to ensure the tests cover enough.
  4. Have someone else review your code. Have a coding standard.
  5. Check your code into a branch with an equivalent level of testing.
  6. When merging branches, run the tests.  Branch merges are test events.
  7. Don’t cherish bugs.  Every bug has a right to a speedy trial.  Commit to fixing them or close them.

Bear in mind that implementing this process on your own is different from persuading an organization to apply it. Generally, if a process makes a person’s job easier, they will follow it. The learning curve on a test-driven process can be steeper than you expect, because you must design a module, class, or function to be testable. More on that later.

On top of that, you need to persuade the organization that writing twice as much code (the tests plus the functional code) is actually faster than writing just the code and testing later. In most organizations, though, nothing succeeds like success. In my personal experience, the developers who learned to write testable code and wrote unit tests never go back to the old way of doing things. On multiple occasions, putting legacy code that was causing customer escalations under unit test eliminated all customer escalations. Zero is a great number for a bug count.

Details

  1. Capture requirements with tests.

Good requirements are quantifiable and testable. You know you have a good requirement when you can build an automated test for it. Capture your requirements in tests. For tests of GUI behavior, use a tool like Sikuli (http://www.sikuli.org/). If you’re testing boot-time behavior, use a KVM switch and a second machine to capture the boot screens. Be very reluctant to accept a manual test; be very sure that the test can’t be automated. Remember, the next developer who deals with your code may not be as diligent as you, so manual tests become less likely to be re-run when the code is modified.
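
As a small illustration (the requirement and the names are hypothetical), a requirement such as “a session token expires after 900 seconds” stops being a sentence in a spec and becomes an executable check:

#include <cassert>

// Hypothetical requirement: a session token expires after 900 seconds.
const unsigned int TOKEN_LIFETIME_SECONDS = 900;

bool tokenExpired(unsigned int ageSeconds)
{
    return ageSeconds >= TOKEN_LIFETIME_SECONDS;
}

int main()
{
    assert(!tokenExpired(899));  // still valid just under the limit
    assert(tokenExpired(900));   // expires exactly at the limit
    return 0;
}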


Closely related to capturing your requirements in tests is documenting your code. Documentation is tough. Whenever you write two related things in two different places, the two will drift out of sync, and one will become obsolete relative to the other.

It might as well be a law of configuration management: any collection residing in two or more places will diverge.

So put the documentation and the code in the same place. Use doxygen (http://www.stack.nl/~dimitri/doxygen/). Make your code self-documenting. Pay attention to the block of documentation at the top of the file, where you can describe how the pieces work together. On complicated systems, bite the bullet and provide an external file that describes how it all works together. The documentation in the code tends to deal with only that code and not its related neighbors, so spend some time describing how it works together. Relations are important.
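
Here is a small sketch of the style (the class and function are hypothetical). The markup lives right next to the code it describes, and doxygen generates the reference pages from it:

/**
 * @file ratelimiter.h
 * @brief Token-bucket rate limiter (hypothetical example).
 *
 * The file-level block is the place to describe how the pieces work
 * together: construct one RateLimiter per client, then call allow()
 * before serving each request.
 */

/**
 * @brief Decide whether one more request fits under the limit.
 * @param tokensAvailable Tokens currently in the bucket.
 * @param cost            Tokens this request would consume.
 * @return true if the request should proceed.
 */
bool allow(unsigned int tokensAvailable, unsigned int cost);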

You need just enough external documentation to tell the next developer where to start. I like to use a wiki for my projects. As each new developer comes onto the project, I point them to the wiki, and I ask them to update it wherever they had trouble due to incompleteness or obsolescence. I’m rather partial to MediaWiki (https://www.mediawiki.org/wiki/MediaWiki). For some reason other people like Confluence (http://www.atlassian.com/Confluence). Pick your own wiki at http://www.wikimatrix.org/.

Don’t go overboard on documentation. Too much means nobody will read it or maintain it, so it will quickly diverge until it has little relation to the original code. Documentation is part of the code. Change the code or the documentation, change the other.

Steps 2 through 7 deserve their own posts.

I’m past due on introducing myself. I’m Glen Dayton. I wrote my first program, in FORTRAN, in 1972. Thank you, Mr. McAfee. Since then I’ve largely worked in aerospace, but then I moved to Silicon Valley to marry my wife and take my turn on the start-up merry-go-round. Somewhere in the intervening time Saint Wayne V. introduced me to test-driven development. After family and friends, the most important thing I ever worked on was PGP.


Today’s coding joke is the Double-Checked Locking Pattern. After all these years I still find people writing it. Read about it and its evils at

C++ and the Perils of Double-Checked Locking

When you see the following code, software engineers will forgive you if you scream or laugh:

static Widget *ptr = NULL;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

// ...
if (ptr == NULL)
{
    pthread_mutex_lock(&lock);
    if (ptr == NULL)
        ptr = new Widget;
    pthread_mutex_unlock(&lock);
}
return ptr;

One way to fix the code is to just use the lock. (The unguarded first check is the problem: the compiler and CPU are free to reorder the store to ptr ahead of the completion of Widget’s constructor, so another thread can see a non-NULL ptr that points at a half-constructed object.) Most modern operating systems implement a mutex with a spin lock, so you don’t need to be shy about using them:

using boost::mutex;
using boost::lock_guard;

static Widget *ptr = NULL;
static mutex mtx;

//...

{
    lock_guard<mutex> lock(mtx);
    if (ptr == NULL)
       ptr = new Widget;
}
return ptr;

Another way, if you’re still shy about locks, is to use memory-ordering primitives. C++11 offers atomic variables and memory-ordering primitives; the Boost equivalents used below work with older compilers too.

#include <boost/atomic/atomic.hpp>
#include <boost/memory_order.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

class Widget
{
public:
  Widget();

  static Widget* instance();
private:
};
Widget*
Widget::instance()
{
  static boost::atomic<Widget *> s_pWidget(NULL);
  static boost::mutex s_mutex;

  Widget* tmp = s_pWidget.load(boost::memory_order_acquire);
  if (tmp == NULL)
  {
    boost::lock_guard<boost::mutex> lock(s_mutex);
    tmp = s_pWidget.load(boost::memory_order_relaxed);
    if (tmp == NULL) {
      tmp = new Widget();
      s_pWidget.store(tmp, boost::memory_order_release);
    }
  }
  return tmp;
}

If the check occurs in a high-traffic area, though, you may not want to pay the cache-synchronization cost of an atomic load on every call, so use a thread-local variable for the check:

using boost::mutex;
using boost::lock_guard;

Widget*
Widget::instance()
{
    static __thread Widget *tlv_instance = NULL;
    static Widget *s_instance = NULL;
    static mutex s_mutex;

    if (tlv_instance == NULL)
    {
        lock_guard<mutex> lock(s_mutex);
        if (s_instance == NULL)
            s_instance = new Widget();
        tlv_instance = s_instance;
    }

    return tlv_instance;
}

Of course, everything is a trade-off. A thread-local variable is sometimes implemented as an index into an array of values allocated for the thread, so it can be expensive. Your mileage may vary.

Software Sermon

I’ve been accused of preaching when it comes to software process and quality, so I decided to own it — thus the name of my blog.

Our world is at a crossroads, with ubiquitous surveillance and criminals exploiting the flaws in our software. The two issues go hand in hand. Insecure software allows governments and criminal organizations to break into your computer and use it to spy on you and others. A lot of people think they don’t need to care because they’re too innocuous for government notice, and they don’t have enough for a criminal to bother stealing.

The problem is that everyone with an online presence, and everyone with an opinion, has something to protect. Thieves want to garner enough of your personal information to steal your credit. Many people bank online, access their health records online, and display their social lives online. Every government, including our own, has at one time or another suppressed what it thought was dissident speech.

So let’s talk about encrypting everything, and making the encryption convenient and powerful.  Before we get there, though, we have to talk about not writing crappy software.  All the security in the world does no good if you have a broken window.

My favorite language happens to be C++, so I’ll mostly show examples from that language. Just to show that the problems translate into other languages, I’ll occasionally toss in an example in Java. I promise I will devote an entire future post to why I hate Java, and provide the code to bring a Java server to its knees in less than 30 seconds. With every post I’ll try to include a little code.


Today’s little code snippet is about the use of booleans. It actually has nothing to do with security, and everything to do with me learning how to blog. I hate it when I encounter coding jokes like

if (boolVariable == true || anotherBool == false) ...

It’s obvious that the author of that line didn’t understand the evaluation of booleans. When I asked about the line, the author claimed, “It’s more readable that way.” Do me and other rational people a favor: when creating a coding guideline or standard, never, ever use “it’s more readable that way”. Beauty is in the eye of the beholder. Many programmers actually expect idiomatic use of the language; know the language before claiming something is less readable than something else. In this particular case, the offending line defies logic. What is the difference between

boolVariable == true

and

boolVariable == true == true == true ...

Cut to the chase and just write the expression as

if (boolVariable || ! anotherBool) ...

Believe it or not (try it yourself by compiling with assembly output), the different styles make a difference in the generated code. In debug mode, the Clang and GNU compilers generate an actual test of a word against zero for the == true comparison. Thankfully, the optimizers yield the same code for both styles. It is helpful, though, to have the debug code close to the optimized code.


The above coding joke is related to using a conditional statement to set a boolean, for example:

if (aardvark > 5) boolVariable = true;

The basic problem here is that you don’t know whether the programmer actually meant boolVariable = aardvark > 5; or meant

 boolVariable = boolVariable || aardvark > 5;

Write what you mean.