
Comparing Pythagorean triples in C++, D, and Rust

EDIT: As pointed out in the Rust reddit thread, the Rust version can be modified to run faster due to a subtle difference between ..=z and ..(z+1). I’ve updated the measurements with rustc0 being the former and rustc1 being the latter. I’ve also had to change some of the conclusions.

You may have recently encountered and/or read this blog post criticising a possible C++20 implementation of Pythagorean triples using ranges. In it the author benchmarks different implementations of the problem, comparing readability, compile times, run times and binary sizes. My main language these days is D, and given that D also has ranges (and right now, as opposed to a future version of the language), I almost immediately reached for my keyboard. By that time there were already some D and Rust versions floating about as a result of the reddit thread, so fortunately for lazy me “all” I had to do next was to benchmark the lot of them. All the code for what I’m about to talk about can be found on github.

As fancy/readable/extensible as ranges/coroutines/etc. may be, let’s get a baseline by comparing the simplest possible implementation using for loops and printf. I copied the “simplest.cpp” example from the blog post then translated it to D and Rust. To make sure I didn’t make any mistakes in the manual translation, I compared the output to the canonical C++ implementation (output.txt in the github repo). It’s easy to run fast if the output is wrong, after all. For those not familiar with D, dmd is the reference compiler (compiles fast, generates sub-optimal code compared to modern backends) while ldc is the LLVM based D compiler (same frontend, uses LLVM for the backend). Using ldc also makes for a more fair comparison with clang and rustc due to all three using the same backend (albeit possibly different versions of it).
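For reference, the algorithm itself is just three nested loops. In D it looks something like this (a sketch of the shape of simple.d, not necessarily line-for-line what’s in the repo):

import core.stdc.stdio : printf;

// Print the first n Pythagorean triples, brute-force style.
void printTriples(int n) {
    int i = 0;
    for(int z = 1; ; ++z)
        for(int x = 1; x <= z; ++x)
            for(int y = x; y <= z; ++y)
                if(x*x + y*y == z*z) {
                    printf("%d, %d, %d\n", x, y, z);
                    if(++i == n) return;
                }
}

void main() {
    printTriples(1000);
}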

All times reported will be in milliseconds as reported by running commands with time on my Arch Linux Dell XPS15. Compile times were measured by excluding the linker, the commands being clang++ -c src.cpp, dmd -c src.d, ldc -c src.d, rustc --emit=obj src.rs for each compiler (obviously for unoptimised builds). The original number of iterations was 100, but the runtimes were so short as to be practically useless for comparisons, so I increased that to 1000. The times presented are a result of running each command 10 times and taking the minimum, except for the clang unoptimised build of C++ ranges because, as a wise lady on the internet once said, ain’t nobody got time for that (you’ll see why soon). Optimised builds were done with -O2 for clang and ldc, -O -inline for dmd and -C opt-level=2 for rustc. The compiler versions were clang 7.0.1, dmd 2.083.1, ldc 1.13.0 and rustc 1.31.1. The results for the simple, readable, non-extensible version of the problem (simple.cpp, simple.d, simple.rs in the repo):

Simple          CT (ms)  RT (ms)

clang debug      59       599
clang release    69       154
dmd debug        11       369
dmd release      11       153
ldc debug        31       599
ldc release      38       153
rustc0 debug    100      8445
rustc0 release  103       349
rustc1 debug             6650
rustc1 release            217

C++ run times are the same as D’s when optimised, with compile times being a lot shorter for D. Rust stands out here: it’s extremely slow without optimisations, it compiles even slower than C++, and it generates code that takes around 50% longer to run even with optimisations turned on! I didn’t expect that at all.

The simple implementation couples the generation of the triples with what’s meant to be done with them. One option to decouple them, not discussed in the original blog post, is to pass a lambda or other callable into the code instead of hardcoding the printf. To me this is the simplest solution, but according to one redditor there are composability issues that may arise, which might or might not matter depending on the application. One can also compose functions in a pipeline and pass that in, so I’m not sure what the problem is. In any case, I wrote 3 implementations and timed them (lambda.cpp, lambda.d, lambda.rs):

Lambda          CT (ms)  RT (ms)

clang debug      188       597
clang release    203       154
dmd debug         33       368
dmd release       37       154
ldc debug         59       580
ldc release       79       154
rustc0 debug     111      9252
rustc0 release   134       352
rustc1 debug              6811
rustc1 release             154

The first thing to notice is that, except for Rust (which was slow to compile anyway), compile times have risen by about a factor of 3: there’s a cost to being generic. Run times seem unaffected, except that the unoptimised Rust build got slightly slower. I’m glad my intuition seems to have been right on how to extend the original example: keep the for loops, pass a lambda, profit. Sure, the compile times went up, but in the context of a larger project this probably wouldn’t matter much. Even for C++, 200ms is… ok. Performance-wise, it’s now a 3-way tie between the languages, with no runtime penalty compared to the non-generic version. Nice!
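To make the approach concrete, here’s roughly what the D lambda version looks like (a sketch of the shape, not lambda.d verbatim): the loops stay, and what to do with each triple is passed in as a compile-time alias parameter, so the callback can be inlined just like the hardcoded printf was.

// What to do with each triple is now the caller's business.
void triples(alias callback)(int n) {
    int i = 0;
    for(int z = 1; ; ++z)
        for(int x = 1; x <= z; ++x)
            for(int y = x; y <= z; ++y)
                if(x*x + y*y == z*z) {
                    callback(x, y, z);
                    if(++i == n) return;
                }
}

void main() {
    import core.stdc.stdio : printf;
    triples!((x, y, z) => printf("%d, %d, %d\n", x, y, z))(1000);
}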

Next up, the code that motivated all of this: ranges. I didn’t write any of it: the D version was from this forum post, the C++ code is in the blog (using the “available now” ranges v3 library), and the Rust code was on reddit. Results:

Range           CT (ms)   RT (ms)

clang debug     4198     126230
clang release   4436        294
dmd debug         90      12734
dmd release      106       7755
ldc debug        158      15579
ldc release      324       1045
rustc0 debug     128      11018
rustc0 release   180        422
rustc1 debug               8469
rustc1 release              168

I have to hand it to rustc – whatever you throw at it the compile times seem to be flat. After modifying the code as mentioned in the edit at the beginning, it’s now the fastest out of the 3!

dmd compile times are still very fast, but the generated code isn’t. ldc does better at optimising, but the runtimes are pretty bad for both of them. I wonder what changes would have to be made to the frontend to generate the same code as for loops.

C++? I only ran the debug version once. I’m just not going to wait that long, and besides, whatever systematic errors are in the measurement process don’t really matter when it takes over 2 minutes. Compile times are abysmal, the only solace being that optimising the code only takes 5% longer than not bothering at all.

In my opinion, none of the versions are readable. Rust at least manages to be nearly as fast as the previous two versions, with C++ taking twice as long as it used to for the same task. D didn’t fare well at all where performance is concerned. It’s possible to get the ldc optimised version down to ~700ms by eliminating bounds checking, but even then it wouldn’t seem to be worth the bother.
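For a flavour of what the range code looks like, here’s a rough D sketch of the same shape (my own approximation, not the forum post’s exact code):

import std.algorithm : filter, joiner, map;
import std.range : iota, take;
import std.stdio : writefln;
import std.typecons : tuple;

// A lazy, flattened range of all Pythagorean triples.
auto triples() {
    return iota(1, int.max)                  // z = 1, 2, 3, ...
        .map!(z => iota(1, z + 1)            // x = 1 .. z
            .map!(x => iota(x, z + 1)        // y = x .. z
                .filter!(y => x*x + y*y == z*z)
                .map!(y => tuple(x, y, z)))
            .joiner)
        .joiner;
}

void main() {
    foreach(t; triples().take(1000))
        writefln("%s, %s, %s", t[0], t[1], t[2]);
}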

My conclusion? Don’t bother with the range versions. Just… don’t. Nigh unreadable code that compiles and runs slowly.

Finally, I tried the D generator version from reddit. There’s a Rust version on Hacker News as well, but it involves using Rust nightly and enabling features, and I can’t be bothered figuring out how to do any of that. If any Rustacean wants to submit something that works with a build script, please open a PR on the repo.
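For reference, the generator approach looks roughly like this (a sketch along the lines of the reddit version, not its exact code) – the triple loop stays readable and yield suspends the fibre at each result:

import std.concurrency : Generator, yield;
import std.range : take;
import std.stdio : writefln;

// std.concurrency.Generator runs the delegate in a fibre and turns
// the yielded values into an input range.
auto triples() {
    return new Generator!(int[3])({
        for(int z = 1; ; ++z)
            foreach(x; 1 .. z + 1)
                foreach(y; x .. z + 1)
                    if(x*x + y*y == z*z) {
                        int[3] t = [x, y, z];
                        yield(t);
                    }
    });
}

void main() {
    foreach(t; triples().take(1000))
        writefln("%s, %s, %s", t[0], t[1], t[2]);
}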

Generator      CT (ms)  RT (ms)

dmd debug      208      381
dmd release    222      220
ldc debug      261      603
ldc release    335      224

Compile times aren’t great (the worst so far for D), but the runtimes are quite acceptable. I had a sneaking suspicion that the runtimes here were slower due to startup code for std.concurrency, so I increased N to 5000 and compared this version to the simplest possible one at the top of this post:

N=5k                    RT (ms)
dmd release simple      8819
dmd release generator   8875

As expected, the difference vanished.

Final conclusions: for me, passing a lambda is the way to go, or generators if you have them (C++ doesn’t have coroutines yet and the Rust version mentioned above needs features that are apparently only available in the nightly builds). I like ranges in both their C++ and D incarnations, but sometimes they just aren’t worth it. Lambdas/generators FTW!


Improvements I’d like to see in D

D, like any language that stands on the shoulders of giants, was conceived not to repeat the errors of the past, and I think it’s done an admirable job at that. However, and perfectly predictably, it made a few mistakes of its own – sometimes similar to the ones it avoided! In my opinion, some of them are:

No UFCS chains for templates.

UFCS is a great feature that was under discussion to be added to C++, but as of the C++17 standard it hasn’t been yet. It’s syntactic sugar for treating a free function as a member function when the first parameter is of that type, i.e. it allows one to write obj.func(3) instead of func(obj, 3). Why is that important? Algorithm chains. Consider:

range.map!fun.filter!gun.join

Instead of:

join(filter!gun(map!fun(range)));

It’s much more readable and flows more naturally. This is clearly a win, and yet I’m talking about it in a blog post about D’s mistakes. Why? Because templates have no such thing. There are compile-time “type-level” versions of both map and filter in D’s standard library called staticMap and Filter (the names could be more consistent). But they don’t chain:

alias memberNames = AliasSeq!(__traits(allMembers, T));
alias Member(string name) = Alias!(__traits(getMember, T, name));
alias members = staticMap!(Member, memberNames);
alias memberFunctions = Filter!(isSomeFunction, members);

One has to name all of these intermediate steps even though none of them deserves a name, just to avoid writing an unreadable Russian doll of parentheses. Imagine if instead it were:

alias memberFunctions = __traits(allMembers, T)
    .staticMap!Member
    .Filter!(isSomeFunction);

One can dream. Which leads me to:

No template lambdas.

In the hypothetical template UFCS chain above, I wrote staticMap!Member, where the Member definition is as in the example before it. However: why do I need to name it either? In “regular” code we can use lambdas to avoid naming functions. Why can’t I do the same thing for templates?

alias memberFunctions = __traits(allMembers, T)
    .staticMap!(name => Alias!(__traits(getMember, T, name)))
    .Filter!isSomeFunction;

Eponymous templates

Bear with me: I think the idea behind eponymous templates is great, it’s just that the execution isn’t, and I’ll explain why by comparing it to something D got right: constructors and destructors. In C++ and Java, these special member functions take the name of the class, which makes refactoring quite annoying. D did away with that by naming them this and ~this, making the class name irrelevant. The way to do eponymous templates right is (obviously renaming the feature) to follow D’s own lead here and use either this or This for the template to refer to itself. I’ve lost count of how many times renaming a template has introduced a hard-to-find bug.
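A sketch of the problem (isPointer is a made-up example, and the This spelling is the hypothetical fix, not actual D):

// Today: the inner declaration must repeat the template's name exactly.
template isPointer(T) {
    enum isPointer = is(T == U*, U);
}

// Rename the template to isPtr but forget the inner enum, and isPtr!T
// no longer evaluates to a bool - every use site breaks, usually with
// confusing error messages.

// The hypothetical fix, following this/~this for constructors:
// template isPointer(T) {
//     enum This = is(T == U*, U);
// }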

@property getters

What are these for, given that parentheses are optional for functions with no parameters?
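For example, a plain member function already reads like a field at the call site:

struct Rectangle {
    int width, height;
    // No @property in sight.
    int area() const { return width * height; }
}

void main() {
    auto r = Rectangle(3, 4);
    auto a = r.area;  // optional parens: it reads like a getter anyway
    assert(a == 12);
}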

inout

Template this can accomplish the same thing and is more useful anyway.
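For a getter, the two features overlap almost entirely – a sketch:

struct Box {
    private int _value;

    // inout: one function serves mutable, const and immutable instances.
    ref inout(int) value() inout { return _value; }

    // Template this does the same job: This is inferred with the caller's
    // qualifiers, and the mechanism is useful for much more besides.
    auto ref valueT(this This)() { return _value; }
}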

Returning a reference

It’s practically pointless. Variables can’t be ref, so assigning the result of a function that returns ref to a variable copies the value (which is probably not what the user expected), so one might as well return a pointer. The exception would be UFCS chains, but until DIP1016 is accepted and implemented, that doesn’t work either.
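A sketch of the pitfall:

struct Container {
    int[] data;
    ref int first() { return data[0]; }
}

void main() {
    auto c = Container([1, 2, 3]);
    auto f = c.first;        // f is a copy: `auto` can't be ref
    f = 42;
    assert(c.data[0] == 1);  // the container is untouched
    c.first = 42;            // assigning through the ref return does work
    assert(c.data[0] == 42);
}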

Wrong defaults

I think there’s a general consensus that @safe, pure and immutable should be default. Which leads me to:

Attribute soup

I’ve lost count of how many times I’ve had to write @safe @nogc pure nothrow const scope return. Really.


Implementing Rust’s std::sync::Mutex in D

TL;DR? https://github.com/atilaneves/fearless

The first time I encountered a mutex in C++ I was puzzled. It made no sense to me at all that I was locking one to protect some data and the only way to indicate what data was protected by a certain mutex was a naming convention. It seemed to me like a recipe for disaster, and of course, it is.

I’ve hardly written any code in Rust, in fact only one project to learn the basics of the language. But the time I spent with it was enough to marvel at std::sync::Mutex – at last it made sense! Access to the variable has to go through the mutex’s API, and no convention is needed. And, of course, in the Rust tradition said access is safe.

That unsurprisingly made me slightly jealous. In D, shared is a keyword and it protects the programmer from inadvertently accessing shared state in an unsafe manner (mostly). Atomic operations typically take a pointer to a shared T, but larger objects (i.e. user-defined structs) are usually dealt with by locking a mutex, casting away shared and then using the object as thread-local. While this works, it’s tedious, error-prone, and certainly not safe. Since imitation is the highest form of flattery, I decided to shamelessly copy, as much as possible, the idea behind Rust’s Mutex.

Rust makes the API safe via the borrow checker, but D doesn’t have that. It does, however, have the sort-of-still-experimental DIP1000, which is similar in what it tries to achieve. I set out to use the new functionality to try and devise a safe way to avoid the current practice of BYOM (Bring Your Own Mutex).

I started off by reading the concurrency part of The Rust Book, which was very helpful. It even explains implementation details, such as the fact that locking returns a smart pointer instead of the wrapped type. This too I copied. I then started thinking of ways to use D’s scope to emulate Rust’s borrow checker. The idea wasn’t to have the same semantics but to enable safe usage and fail to compile unsafe code. Pragmatism was key.

I was initially confused about why std::sync::Mutex is nearly always used with std::sync::Arc – it took me writing a bug to realise that shared data is never allocated on the stack. Obvious in retrospect but I somehow failed to realise that. Since Rust doesn’t have a mark-and-sweep GC, the only real option is to use reference counting for the heap-allocated shared data. This realisation led to another: in D there’s a choice between reference counting and simply using GC-allocated memory, so the API reflects that. The RC version is even @nogc!

The API forces the initialisation of the user-defined type to happen by passing parameters to the constructor of that type. This is because passing an extant object isn’t safe – other references to it may exist in the program and data races could occur. Rust can guarantee at compile-time that other mutable references don’t exist, but D can’t, hence the restriction. For types without mutable indirections the restriction is lifted, made possible by D’s world class static reflection capabilities. The API also enforces that the type is shared – there’s no point in using this library if the type isn’t, and even less point in making the user type `shared T` all the time.

Although D has an actor model message passing library in std.concurrency, none of the functions are @safe. I also realised it would be trivial to write a deadlock by sending the shared data while a mutex is held to another thread. To fix both of these issues, the library I wrote has @safe versions of D’s concurrency primitives, and the send function checks to see if the mutex is locked before actually passing the compound (mutex, T) type (named Exclusive in the library) to another thread of execution.

DIP1000 itself was hard to understand. I ended up reading the proposal several times, and it didn’t help that the current implementation doesn’t match that document 100%. In the end, however, the result seems to work:

https://github.com/atilaneves/fearless
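A minimal usage sketch, going from memory of the README (gcExclusive allocates the wrapped data on the GC heap; rcExclusive is the reference-counted @nogc variant):

import fearless;

struct Counter {
    int value;
}

void main() @safe {
    // The Counter is constructed in place: no unprotected reference
    // to it can exist anywhere else in the program.
    auto counter = gcExclusive!Counter(0);
    {
        auto xcounter = counter.lock();  // locks the mutex
        xcounter.value++;                // scoped, safe access
    }   // the mutex is released here
}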

It’s possible that, due to bugs in DIP1000 or in fearless itself, a programmer can break safety, but to the best of my knowledge this brings @safe concurrent code to D.

I’d love it if any concurrency experts could try and poke holes in the library so I can fix them.


Keep D unittests separated from production code

D has built-in unit tests, and unittest is even a keyword. This has been fantastically successful for the language: there’s no need to use an external framework to write tests, it comes with the compiler. Just as importantly, a unittest after a function can be used as documentation, with the test(s) showing up as “examples”. This is the opposite of Python’s approach of running code in documentation as tests – D generates documentation from the tests instead.

As such, in D (similarly to Rust), it’s usual, idiomatic even, to have the tests written next to the code they’re testing. It’s easy to know where to see examples of the code in action: scroll down a bit and there are the unit tests.

I’m going to argue that this is an anti-pattern.

Let me start by saying that some tests should go along with the production code. Exactly the kind of “examply” tests that only exercise the happy path. Have them be executable documentation, but only have one of those per function and keep them short. The others? Hide them away as you would in C++. Here’s why I think that’s the case:
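That is, keep the documented unittest that doubles as an example:

/// Adds two numbers.
int add(int a, int b) @safe pure nothrow {
    return a + b;
}

///
unittest {
    // Shows up as an "example" in the generated documentation.
    assert(add(2, 3) == 5);
}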

They increase build times.

If you edit a test, and that test lives next to production code, then every module that imports that module has to be rebuilt, because there’s currently no good way to figure out whether any of the API/ABI of that module has changed. Essentially, every D module is like a C++ header, and you go and recompile the world. D compiles a lot faster than C++, but when you’re doing TDD (in my case, pretty much always), every millisecond in build times counts.

If the tests are in their own files, then editing a test means that usually only one file needs to be recompiled. Since test code is code, recompiling production code and its tests takes longer than just compiling production code alone.

I’m currently toying with the idea of trying to compile per package for production code but per module for test code – the test code shouldn’t have any dependencies other than the production code itself. I’ll have to time it to make sure it’s actually faster.

version(unittest) will cause you problems if you write libraries.

Let’s say that you’re writing a library. Let’s also say that to test that library you want to have a dependency on a testing library from http://code.dlang.org/, like unit-threaded. So you add this to your dub.sdl:

configuration "default" {
}
configuration "unittest" {
     dependency "unit-threaded" version="~>0.7.0"
}

Normal build? No dependency. Test build? Link to unit-threaded, but your clients never have the extra dependency. Great, right? So you want to use unit-threaded in your tests, which means an import:

module production_code;
version(unittest) import unit_threaded;

Now someone goes and adds your library as a dependency in their dub.sdl, but they’re not using unit-threaded because they don’t want to. And now they get a compiler error because when they compile their code with -unittest, the compiler will try and import a module/package that doesn’t exist.

So instead, the library has to do this in its dub.sdl:

configuration "unittest" {
    # ...
    versions "TestingMyLibrary"
}

And then:

version(TestingMyLibrary) import unit_threaded;

It might even be worse – your library might have code that should exist for version(unittest) but not version(TestingMyLibrary) – it’s happened to me. This happened even in the standard library.

Keep calm and keep your tests separated.

You’ll be happier that way. I am.


On ESR’s thoughts on C and C++

ESR wrote two blog posts about moving on from C recently. As someone who has been advocating for never writing new code in C again unless absolutely necessary, I have my own thoughts on this. I have issues with several things that were stated in the follow-up post.

“C++ as the language to replace C. Which ain’t gonna happen” – except it has. C++ hasn’t completely replaced C, but no language ever will. There’s just too much of it out there. People will be maintaining C code 50 years from now no matter how many better alternatives exist. If even gcc switched to C++…

It’s true that you’re (usually) not supposed to use raw pointers in C++, and also true that you can’t stop another developer in the same project from doing so. I’m not entirely sure how C is better in that regard, given that _all_ developers will be using raw pointers, with everything that entails. And shouldn’t code review prevent the raw pointers from crashing the party?

“if you can mentally model the hardware it’s running on, you can easily see all the way down” – this used to be true, but no longer is. On a typical server/laptop/desktop (i.e. x86-64), the CPU that executes the instructions is far too complicated to model, and doesn’t even execute the actual assembly in your binary (xor rax, rax doesn’t xor anything, it just tells the CPU a register is free). C also doesn’t have the concept of cache lines, which is essential for high-performance computing on any non-trivial CPU.

“One way we can tell that C++ is not sufficient is to imagine an alternate world in which it is. In that world, older C projects would routinely up-migrate to C++”. Like gcc?

“Major OS kernels would be written in C++”. I don’t know about “major”, but there’s BeOS/Haiku and IncludeOS.

“Not only has C++ failed to present enough of a value proposition to keep language designers uninterested in imagining languages like D, Go, and Rust, it has failed to displace its own ancestor.” – I think the problem with this argument is the (for me) implicit assumption that if a language is good enough, “better enough” than C, then programmers will logically switch. Unfortunately, that’s not how humans behave, and as much as some of us would like to pretend otherwise, programmers are still human.

My opinion is that C++ is strictly better than C. I’ve met and worked with many bright people who disagree. There’s nothing that C++ can do to bring them in – they just don’t value the trade-offs that C++ makes/made. Some of them might be tempted by Rust, but my anecdotal experience is that those who tend to favour C over C++ end up liking Go a lot more. I can’t stand Go myself, but the things about Go that I don’t like don’t bother its many fans.

My opinion is also that D is strictly better than C++, and yet I never expect the former to replace the latter. The reasons why are even fuzzier to me than why anybody chooses to write C in a 2017 greenfield project.

My advice to everyone is to use whatever tool you can be most productive in. Our brains are all different, we all value completely different trade-offs, so use the tool that agrees with you. Just don’t expect the rest of the world to agree with you.

 


API clarity with types

API design is hard. Really hard. It’s one of the reasons I like TDD – it forces you to use the API as a regular client and it usually comes out all the better for it. At a previous job we’d design APIs as C headers, review them without implementation and call it done. Not one of those didn’t have to change as soon as we tried implementing them.

The Win32 API is rife with examples of what not to do: functions with 12 parameters aren’t uncommon. Another API no-no is several parameters of the same type – which one means what? This is ok:

auto p = Point(2, 3);

It’s obvious that 2 is the x coordinate and 3 is y. But:

foo("foo", "bar", "baz", "quux", true);

Sure, the actual strings passed don’t help – but what does true mean in this context? Languages like Python get around this by naming arguments at the call site, but that’s not a feature of most curly brace/semicolon languages.

I semi-recently forked and extended the D wrapper for nanomsg. The original C API copies the Berkeley sockets API, for reasons I don’t quite understand. That means a socket must be created, then either bound to an address or connected to another socket. In an OOP-ish language we’d like a constructor to just deal with that for us. Unfortunately, there’s no way to disambiguate whether we want to connect to an address or bind to it – in both cases a string is passed. My first attempt was to follow in Java’s footsteps and use static methods for creation (simplified for the blog post):

struct NanoSocket {
    static NanoSocket createBound(string uri) { /* ... */ }
    static NanoSocket createConnected(string uri) { /* ... */ }
    private this(string uri) { /* ... */ } // structs can't have parameterless constructors in D
}

I never did feel comfortable with it: object creation shouldn’t look *weird*. But I think Haskell has forever changed my brain, so types to the rescue:

struct NanoSocket {
    this(ConnectTo connectTo) { /* ... */ }
    this(BindTo bindTo) { /* ... */ }
}

struct ConnectTo {
    string uri;
}

struct BindTo {
    string uri;
}
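Now the call site documents itself and can’t be gotten wrong (hypothetical addresses):

auto server = NanoSocket(BindTo("tcp://*:5555"));
auto client = NanoSocket(ConnectTo("tcp://localhost:5555"));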

I encountered something similar when I implemented a method on NanoSocket called trySend. It takes two durations: a total time to try for and an interval to wait between retries. Most people would write it like so:

void trySend(ubyte[] data, 
             Duration totalDuration, 
             Duration retryDuration);

At the call site clients might get confused about which order the durations are in. I think this is much better, since there’s no way to get it wrong:

void trySend(ubyte[] data, 
             TotalDuration totalDuration, 
             RetryDuration retryDuration);

struct TotalDuration {
    Duration duration;
}

struct RetryDuration {
    Duration duration;
}
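At the call site there’s no longer any question of which duration is which (a hypothetical call, using core.time’s duration helpers):

import core.time : msecs, seconds;

socket.trySend(data, TotalDuration(5.seconds), RetryDuration(100.msecs));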

What do you think?


C is not magically fast, part 2

I wrote a blog post before about how C is not magically fast, but the sentiment that C has properties lacking in other languages that make it so is still widespread. It was with no surprise at all then that a colleague mentioned something resembling that recently at lunch break, and I attempted to tell him why it wasn’t (at least always) true.

He asked for an example where C++ would be faster, and I resorted to the old sort classic: C++ sort is faster than C’s qsort because of templates and inlining. He then asked me if I’d ever measured it myself, and since I hadn’t, I did just that after lunch. I included D as well because, well, it’s my favourite language. Taking the minimum time after ten runs each to sort a random array of 10M simple structs on my laptop yielded the results below:

  • D: 1.147s
  • C++: 1.723s
  • C: 1.789s

I expected C++ to be faster than C, but I didn’t expect the difference to be so small. I expected D to be the same speed as C++, but for some reason it’s faster. I haven’t investigated why, for lack of interest, but maybe it’s because of how the two languages handle strings?

I used the same compiler backend for all 3 so that it wouldn’t be an influence: LLVM. I also seeded all of them with the same number (via srand) and used the same random number generator: the awful rand from C’s standard library. It’s terrible, but it’s the only easy way to do it in standard C, and the same function is available from the other two languages. I also only timed the sort, not counting the init code.

The code for all 3 implementations:

// sort.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

typedef struct {
    int i;
    char* s;
} Foo;

double get_time() {
    struct timeval t;
    struct timezone tzp;
    gettimeofday(&t, &tzp);
    return t.tv_sec + t.tv_usec*1e-6;
}

int comp(const void* lhs_, const void* rhs_) {
    const Foo *lhs = (const Foo*)lhs_;
    const Foo *rhs = (const Foo*)rhs_;
    if(lhs->i < rhs->i) return -1;
    if(lhs->i > rhs->i) return 1;
    return strcmp(lhs->s, rhs->s);
}

int main(int argc, char* argv[]) {
    if(argc < 2) {
        fprintf(stderr, "Must pass in number of elements\n");
        return 1;
    }

    srand(1337);
    const int size = atoi(argv[1]);
    Foo* foos = malloc(size * sizeof(Foo));
    for(int i = 0; i < size; ++i) {
        foos[i].i = rand() % size;
        foos[i].s = malloc(100);
        sprintf(foos[i].s, "foo%dfoo", foos[i].i);
    }

    const double start = get_time();
    qsort(foos, size, sizeof(Foo), comp);
    const double end = get_time();
    printf("Sort time: %lf\n", end - start);

    free(foos);
    return 0;
}


// sort.cpp
#include <iostream>
#include <algorithm>
#include <string>
#include <vector>
#include <chrono>
#include <cstring>
#include <cstdlib> // srand, rand

using namespace std;
using namespace chrono;

struct Foo {
    int i;
    string s;

    bool operator<(const Foo& other) const noexcept {
        if(i < other.i) return true;
        if(i > other.i) return false;
        return s < other.s;
    }

};


template<typename CLOCK, typename START>
static double getElapsedSeconds(CLOCK clock, const START start) {
    //cast to ms first to get fractional amount of seconds
    return duration_cast<milliseconds>(clock.now() - start).count() / 1000.0;
}

int main(int argc, char* argv[]) {
    if(argc < 2) {
        cerr << "Must pass in number of elements" << endl;
        return 1;
    }

    srand(1337);
    const int size = stoi(argv[1]);
    vector<Foo> foos(size);
    for(auto& foo: foos) {
        foo.i = rand() % size;
        foo.s = "foo"s + to_string(foo.i) + "foo"s;
    }

    high_resolution_clock clock;
    const auto start = clock.now();
    sort(foos.begin(), foos.end());
    cout << "Sort time: " << getElapsedSeconds(clock, start) << endl;
}


// sort.d
import std.stdio;
import std.exception;
import std.datetime;
import std.algorithm;
import std.conv;
import core.stdc.stdlib;


struct Foo {
    int i;
    string s;

    int opCmp(ref Foo other) const @safe pure nothrow {
        if(i < other.i) return -1;
        if(i > other.i) return 1;
        return s < other.s
            ? -1
            : (s > other.s ? 1 : 0);
    }
}

void main(string[] args) {
    enforce(args.length > 1, "Must pass in number of elements");
    srand(1337);
    immutable size = args[1].to!int;
    auto foos = new Foo[size];
    foreach(ref foo; foos) {
        foo.i = rand % size;
        foo.s = "foo" ~ foo.i.to!string ~ "foo";
    }

    auto sw = StopWatch();
    sw.start;
    sort(foos);
    sw.stop;
    writeln("Elapsed: ", cast(Duration)sw.peek);
}




Write custom assertions whenever possible

I’ve been very interested in readable tests with great error messages recently. Mostly because they kept failing and I wanted the most information possible in order to quickly identify the cause. This is another reason why I like TDD: you see the test failing first, so if the error message isn’t great you’ll know straight away instead of months later.

The good testing frameworks provide a way of writing your own custom assertions. I’d never really looked into them that much before, but now I realize the error of my ways. Recently I wrote a test that contained this line:

fileName.exists.shouldBeTrue;

Readable, right? The problem is when it fails:

foo.d:42 - Expected: true
foo.d:42 -      Got: false

And now you have to go read the test and figure out what went wrong. It’s a lot better to be told right away that a file was supposed to exist. So I wrote a custom assertion and was then able to write this:

fileName.shouldExist;

With the corresponding failure message:

foo.d:42 - Expected /tmp/foo.txt to exist but it didn't

Now it’s a lot easier to pinpoint where the problem is. For starters, you’d probably go and check the contents of the surrounding directory, having saved yourself the time of figuring out what exactly was false.
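For reference, such an assertion is only a few lines. A sketch (the real one would throw whatever exception type the testing framework uses to report failures; plain Exception stands in for it here):

import std.file : exists;

void shouldExist(in string fileName,
                 in string file = __FILE__,
                 in size_t line = __LINE__) @safe
{
    if(!fileName.exists)
        throw new Exception(
            "Expected " ~ fileName ~ " to exist but it didn't",
            file, line);
}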


main is just another function

Last week I talked about code that isn’t unit-testable, at least not by my definition of what a unit test is. In keeping with that, this blog post will talk about testing code that has side-effects.

Recently I’d come to accept a defeatist attitude in which I couldn’t think of any way to test that passing certain command-line options to a console binary had a certain effect. I mean, the whole point is to test that running the app differently will have different consequences. As a result I ended up only ever doing end-to-end testing. And… that’s simply not where I want to be.

Then it dawned on me: main is just another function. Granted, it has a special status that makes it so you can’t call it directly from a test, but nearly all my main functions lately have looked like this:

import std.stdio : stderr;

int main(string[] args) {
    try {
        doStuff(args);
        return 0;
    } catch(Exception ex) {
        stderr.writeln(ex.msg);
        return 1;
    }
}

It should be easy enough to translate this to the equivalent C++ in your head. With main so conveniently delegating to a function that does real work, I can now easily write integration tests. After all, is there really any difference between:

doStuff(["myapp", "--option", "arg1", "arg2"]);
// assert stuff happened

And (in, say, a shell script):

./myapp --option arg1 arg2
# assert stuff happened

I’d say no. This way I have one end-to-end test for sanity’s sake, and everything else being tested from the same binary by calling the “real” main function directly.

If your main doesn’t look like the one above, and you happen to be writing C or C++, there’s another technique: use the preprocessor to rename main to something else and call it from your integration/component test. And then, as they say, Bob’s your uncle.

Happy testing!


unit-threaded: now an executable library

It’s one of those ideas that seem obvious in retrospect, but somehow it only occurred to me last week. Let me explain.

I wrote a unit testing library in D called unit-threaded. It uses D’s compile-time reflection capabilities so that no test registration is required. You write your tests, they get found automatically and everything is good and nice. Except… you have to list the files you want to reflect on, explicitly. D’s compiler can’t go reading the filesystem for you while it compiles, so a pre-build step of generating the file list was needed. I wrote a program to do it, but for several reasons it wasn’t ideal.

Now, as someone who actually wants people to use my library (and who also wants to make things easier for myself), I had to find a way to make opting in to unit-threaded easy. This is especially important since D has built-in unit tests, so the barrier to entry is low (which is a good thing!). While working on a far crazier idea to make using unit-threaded a no-brainer, I stumbled across my current solution: run the library as an executable binary.

The secret sauce that makes this work is dub, D’s package manager. It can download dependencies, compile them, and even run them with “dub run”. That way, a user doesn’t even have to download it manually. The other dub feature that makes this feasible is support for “configurations”, in which a package is built differently. Using those, I can have a regular library configuration and an alternative executable one. Since dub run can take a configuration as an argument, unit-threaded can now be run as a program with “dub run unit-threaded -c gen_ut_main”. And when it is, it generates the file that’s needed to make it all work.

So now all a user needs to do is add a declaration to their project’s dub.json file (sketched below) and “dub test” works as intended, using unit-threaded underneath, with named unit tests and all of them running in threads by default. Happy days.
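From memory, the declaration looks roughly like this – the unit-threaded README has the authoritative version:

"configurations": [
    {
        "name": "unittest",
        "preBuildCommands": [
            "dub run unit-threaded -c gen_ut_main -- -f bin/ut.d"
        ],
        "mainSourceFile": "bin/ut.d",
        "dependencies": {
            "unit-threaded": "~>0.7.0"
        }
    }
]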
