Why I find developing on/for Windows exasperating

I ran DOS on my first PC. The natural progression unfolded, with me then running Windows 95, Windows 98, and Windows XP after that (Windows ME, like the Matrix sequels, was a collective bad dream that didn’t really happen). I used Borland’s IDE to write C code, then RHIDE with DJGPP, since I couldn’t even imagine using a compiler from the command line. I mention that because I wasn’t “brought up” using *nix at all, and my only exposure to it was at university. These days, however, I do nearly all of my development on Linux. Why? I find it to be a much, much better experience.

Somewhat unfortunately for me, my current job requires me to do Windows development. And every time I boot into Windows or have to fix Windows-specific problems, it makes me want to cry. Why? Let me name some of the reasons.

Speed, or the lack thereof. I haven’t done a thorough scientific analysis of this, because I don’t think it’d be worth my while to do so. It seems clear to me that NTFS is very, very slow. Doing anything on it, from running CMake to compiling to linking, seems to take forever, to the point that it makes me actively wonder how anyone manages to get anything done on Windows. I can rebuild the reference D compiler on my laptop in about 1.6s after modifying one file. On Windows the same build, on the same machine, takes about a minute. Given that I find 1.6s infuriatingly slow, you can imagine what sort of dark swear words I reserve for the whole minute in which what would have been considered a supercomputer a few years ago decides to get anything done.

Dependencies. Unlike *nix, there are no standard paths in which to look up libraries. Granted, even different Linux distros use different conventions and paths from each other, but libraries are usually installed with a package manager anyway, so mostly you don’t care. And if you did, your linker would find them anyway without the need for extra flags. Need to link to, say, nanomsg on Windows? Good luck with that. Ah, but there’s vcpkg, I hear you say. Apparently Visual Studio auto-magically finds the libraries that vcpkg “installs”. Job done if you’re clicking a button in an IDE, not so much if you’re using a real build system running in CI. It _could_ be just as easy as adding a flag to your linker, but, alas, the .lib files don’t all end up in the same directory. vcpkg allows me to download libraries without having to write PowerShell, but actually linking to them is, for lack of a better word, “fun”. On Linux? pacman -S nanomsg; ninja

Batch files and/or PowerShell. I personally find bash horrible to write code in, but then I do Windows work and remember there’s worse. So much worse. Sigh.

Bash. I’ll explain. Git Bash is amazing; I remember a time before it existed (back in the day I tried, unsuccessfully, to compile bash from source for Windows with at least 3 different implementations). So why am I complaining? First, because I use zsh and haven’t yet seen an easy way to do that on Windows. Secondly, because building on Windows from the command line often requires cmd.exe. Building C++ code? I’m not going to write my own bash version of vcvarsall.bat just to do that. Commands have a habit of spitting out error messages with backslashes (cos, duh, Windows), and good luck copying and pasting those into your bash shell.

Tooling. Want to create a zip? You’ll have to download and install a 3rd-party tool. Oh, but the binary doesn’t get added to the PATH, so you’ll have to write out the full path in your batch file and pray that none of your machines installs it to a different location.

Things are better on Windows than they used to be. We now have the Windows Subsystem for Linux, Git Bash, and alternatives to the horrible built-in terminal emulator. To me, that just makes things less bad, and the moment I’m back on Arch Linux it feels like coming home from a not particularly good holiday.


CppCon 2017 videos so far

I didn’t get to go to CppCon this year (I was there the previous two years), and now that some of the talks are up on YouTube I’ve been checking them out to keep up to date. It’s been a little disappointing.

I don’t know if it’s because of the types of talks that I like, because I’ve learned a lot over the last few years, or because the standards for admission have gone down. What I do know is that just today, on my lunch break, I dismissed quite a few talks after watching them for 5 minutes each. Some might say that’s not enough, but I think I’m pretty good at evaluating whether a talk would be worth my time in that period.

So far, the only talk I’ve liked is the first one about IncludeOS. For me that’s not surprising: I think it’s a really cool project, and I’ve been interested in unikernels since the first time I heard about them. It helps that that happened while I was working at Cisco, where I learned that the project I worked on at the time was faster on classic Cisco IOS (a monolithic kernel) than on the newer operating systems.

I might have to play around with IncludeOS now. I’m just afraid I’ll start getting ideas about writing a unikernel in D or Rust…

Commit failing tests if your framework allows it

In TDD, one is supposed to go through the 3-step cycle of:

  1. Write a failing test
  2. Make it pass
  3. Refactor

The common-sense approach is to not commit the failing test from the first step, since that would throw a spanner in the works when you inevitably have to bisect your commit DAG trying to figure out where a bug was introduced.

I’ve come to a realisation recently: failing tests should be committed, but only if the testing framework being used allows you to mark failures as successes. For instance, in my D testing framework unit-threaded, I’d commit this silly example:

import unit_threaded; // for @ShouldFail and shouldEqual

@ShouldFail("WIP")
unittest {
    1.shouldEqual(2);
}

If you’re not familiar with D, it has built-in unit tests, and unittest is a keyword. @ShouldFail is a User Defined Attribute, part of the library, indicating that the unit test it applies to is expected to fail; it allows the user to specify an optional string describing why that’s the case. It could be a bug ID as well.

The test above passes if any of the code in the unittest block throws an exception, i.e. it passes if it fails. This way we can have a single commit of the failing test that motivated the code changes that follow it, and we can’t forget to remove @ShouldFail – in fact, if the programmer implements the feature / fixes the bug correctly, they should expect to see the test suite go red. If that doesn’t happen, either the production code or the test is buggy.

I’m not aware of many frameworks that allow a programmer to do this; pytest has something similar with its xfail marker. If yours does, commit your failing tests.


On the novelty factor of compile-time duck typing

Or structural type systems for the pedantic, but I think most people know what I mean when I say “compile-time duck typing”.

For one reason or another, I’ve read quite a few blog posts recently about how great the Go programming language is. A common refrain is that Go’s interfaces are amazing because you don’t have to declare that a type satisfies an interface; it just does if its structure matches (hence structural typing). I’m not sold on how great this actually is – more on that later.

What I don’t understand is how this is presented as novel and never done before. I present to you a language from 1990:

#include <iostream>
#include <string>

using namespace std;

template <typename T>
void fun(const T& animal) {
    cout << "It says: " << animal.say() << endl;
}

struct Dog {
    std::string say() const { return "woof"; }
};

struct Cat {
    std::string say() const { return "meow"; }
};

int main() {
    fun(Dog());
    fun(Cat());
}

Most people would recognise that as being C++. If you didn’t, well… it’s C++. I stayed away from post-C++11 features on purpose (i.e. Dog() instead of Dog{}). Look ma, compile-time duck typing in the 1990s! Who’d’ve thunk it?

Is it nicer in Go? In my opinion, yes. Defining an interface and saying a function only takes objects that conform to that interface is a good thing, and a lot better than the situation in C++ (even with std::enable_if and std::void_t). But it’s easy enough to do something similar in D (template constraints), Haskell (typeclasses), and Rust (traits), to name the languages that do something similar that I’m more familiar with.
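
To make that concrete, here’s a minimal sketch of a D template constraint doing the same job as the C++ example above. The isAnimal trait is a hand-rolled helper I made up for illustration, not something from a library:

// a hand-rolled trait: does T have a say() that returns something string-like?
enum isAnimal(T) = is(typeof(T.init.say()) : string);

// fun only accepts types whose structure satisfies the constraint
void fun(T)(T animal) if (isAnimal!T) {
    import std.stdio: writeln;
    writeln("It says: ", animal.say());
}

struct Dog { string say() const { return "woof"; } }
struct Cat { string say() const { return "meow"; } }

void main() {
    static assert(isAnimal!Dog); // the requirement can be stated explicitly
    fun(Dog());
    fun(Cat());
    // fun(42); // would fail to compile: int has no say()
}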

But in D and C++, there’s currently no way to state that your type satisfies the requirements a function places on it (such as having a member function called “say” in the silly example above) and get compiler errors telling you why it didn’t satisfy them (such as misspelling “say” as “sey”). C++ will, at some point in the future, get concepts exactly to alleviate this. In D, I wrote a library to do it. Traits and typeclasses are definitely better, but from my point of view it’s good to be able to state that a type does indeed “look like” what it needs to in order to be used by certain functions. At least in D you can say static assert(isAnimal!MyType); – you just don’t know why that assertion fails when it does. I guess in C++17 one could do something similar using std::void_t. Is there an equivalent for Go? I hope a gopher enlightens me.

All in all I don’t get why this gets touted as something only Go has. It’s a similar story to “you can link statically”. I can do that in other languages as well. Even ones from the 90s.


The main function should be shunned

The main function (in languages that have it) is… special. It’s the entry point of the program by convention, there can only be one of them in all the object files being linked, and you can’t run a program without it. And it’s inflexible.

Its presence means that the final output has to be an executable. It’s likely, however, that the executable in question has code that others would rather reuse than rewrite, but they won’t be able to use it in their own executables: there’s already a main function in there. Before clang, nobody seemed to have stumbled on the idea that a compiler would be great as a library. And yet…

This is why I’m now advocating for always putting the main function of an executable in its own file, all by itself, and for having it do the least amount of work possible for maximum flexibility. This way, any executable project is one excluded file away in the build system from being used as a library. This is how I’d start a, say, C++ executable project from scratch today:

#include "runtime.hpp"
#include <iostream>
#include <stdexcept>

int main(int argc, const char* argv[]) {
    try {
        run(argc, argv); // "real" main
        return 0;
    } catch(const std::exception& ex) {
        std::cout << "Oops: " << ex.what() << std::endl;
        return 1;
    }
}
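
For what it’s worth, the same thin-main pattern in D looks like the sketch below. The app module and its run function are names I’ve invented for the example; the only point is that main does nothing but delegate:

// main.d: the one file the build system excludes when building a library
import app: run; // hypothetical module containing the "real" main
import std.stdio: stderr;

int main(string[] args) {
    try {
        run(args);
        return 0;
    } catch(Exception ex) {
        stderr.writeln("Oops: ", ex.msg);
        return 1;
    }
}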

In fact, I think I’ll go write an Emacs snippet for that right now.


API clarity with types

API design is hard. Really hard. It’s one of the reasons I like TDD: it forces you to use the API as a regular client would, and the API usually comes out all the better for it. At a previous job we’d design APIs as C headers, review them without an implementation, and call them done. Not one of those didn’t have to change as soon as we tried implementing them.

The Win32 API is rife with examples of what not to do: functions with 12 parameters aren’t uncommon. Another API no-no is several parameters of the same type – which one means what? This is ok:

auto p = Point(2, 3);

It’s obvious that 2 is the x coordinate and 3 is y. But:

foo("foo", "bar", "baz", "quux", true);

Sure, the actual strings passed don’t help – but what does true mean in this context? Languages like Python get around this by naming arguments at the call site, but that’s not a feature of most curly-brace/semicolon languages.

I semi-recently forked and extended the D wrapper for nanomsg. The original C API copies the Berkeley sockets API, for reasons I don’t quite understand. That means a socket must be created, then bound to or connected to another socket. In an OOP-ish language we’d like a constructor to just deal with that for us. Unfortunately, there’s no way to disambiguate whether we want to connect to an address or bind to it: in both cases a string is passed. My first attempt was to follow in Java’s footsteps and use static methods for creation (simplified for the blog post):

struct NanoSocket {
    static NanoSocket createBound(string uri) { /* ... */ }
    static NanoSocket createConnected(string uri) { /* ... */ }
    private this(string uri) { /* ... */ } // constructor (D structs can't declare a 0-arg one)
}

I never did feel comfortable with it: object creation shouldn’t look *weird*. But I think Haskell has forever changed my brain, so types to the rescue:

struct NanoSocket {
    this(ConnectTo connectTo) { /* ... */ }
    this(BindTo bindTo) { /* ... */ }
}

struct ConnectTo {
    string uri;
}

struct BindTo {
    string uri;
}
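
At the call site the intent is now self-documenting, and there’s nothing to mix up (the URIs are made up for the example):

auto server = NanoSocket(BindTo("tcp://*:5555"));
auto client = NanoSocket(ConnectTo("tcp://localhost:5555"));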

I encountered something similar when I implemented a method on NanoSocket called trySend. It takes two durations: a total time to try for, and an interval to wait before trying again. Most people would write it like so:

void trySend(ubyte[] data, 
             Duration totalDuration, 
             Duration retryDuration);

At the call site clients might get confused about which order the durations are in. I think this is much better, since there’s no way to get it wrong:

void trySend(ubyte[] data, 
             TotalDuration totalDuration, 
             RetryDuration retryDuration);

struct TotalDuration {
    Duration duration;
}

struct RetryDuration {
    Duration duration;
}
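
A call then reads unambiguously, and swapping the two durations becomes a compile-time error rather than a silent bug (the socket, data and values here are made up for the example):

import core.time: seconds, msecs;

socket.trySend(data,
               TotalDuration(10.seconds),
               RetryDuration(100.msecs));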

What do you think?


Don’t hoard code

For me, the two most important principles in programming are, in order, DRY and YAGNI. Most of my coding decisions end up respecting one or the other. For some reason YAGNI seems to be less well known. In my experience one tends to get less pushback for DRY – it’s the accepted best practice. But YAGNI seems to need more persuasion, and I’m not entirely sure why.

I’m converted: I love red diffs. I don’t even look at the red sections during code review. Do the tests still pass? Ship it! The thing is that, despite me being a programmer and my “one job” (not really, but you know what I mean) being to write code, I hate code and want as little of it as possible in my project. I mean it.

Code that doesn’t exist is excellent. It doesn’t have to be read, and therefore doesn’t need to be understood, which means it can’t confuse anyone. It doesn’t have bugs. It doesn’t need to be tested. What’s not to like?

And yet, in project after project, one sees code commented out for mostly no good reason. My personal “favourite” (by which I mean I froth at the mouth) is C or C++ code with #if 0 / #endif pairs. In one project there were even multiple such blocks, nested to boot.

Maybe it has to do with not trusting version control. If all you’ve ever used is one of those ancient paid-for systems (not naming any names, but you can guess) and have never felt the bliss that is working with git or Mercurial, then maybe it’s understandable, because it might actually be hard to go look at the history and find when or why you deleted something. But these days? No excuse: git log -S that_thing_that_I_remember_that_isn’t_here_anymore.

And never mind that, in my experience at least, the times anybody goes spelunking for deleted code are so few and far between that the trade-off is obvious. Code that should be deleted but hasn’t been gets in the way. That’s a real cost, paid every day, and for… what? Because someone, someday, might need that snippet, and it would take them an extra minute to find it?

YAGNI. Delete and move on.

Arch Linux – why use a Docker image when you can create your own?

It seems silly in retrospect. I’d never have considered building an Arch Linux based Docker container in any way other than starting with an image from the registry and a Dockerfile. But… it’s Arch Linux: you install this distro into a directory and chroot into it. Why settle for someone else’s old installation?

The script used to bootstrap an Arch install, pacstrap, even lets you exclude some packages from the default install or add ones you need. So I ended up with a bash script that installs Arch into a directory, chroots into it and runs whatever commands are required, then bundles the whole thing into a Docker container. Repeatable, checked into version control, and not wasting layers of AUFS.

Who needs a Dockerfile?

C is not magically fast, part 2

I wrote a blog post before about how C is not magically fast, but the sentiment that C has properties lacking in other languages that make it so is still widespread. It was no surprise at all, then, when a colleague mentioned something along those lines at lunch recently, and I attempted to tell him why it wasn’t (at least not always) true.

He asked for an example where C++ would be faster, and I resorted to the classic sort example: C++’s sort is faster than C’s qsort because of templates and inlining. He then asked me if I’d ever measured it myself, and since I hadn’t, I did just that after lunch. I included D as well because, well, it’s my favourite language. Taking the minimum time of ten runs each, sorting a random array of 10M simple structs on my laptop yielded the results below:

  • D: 1.147s
  • C++: 1.723s
  • C: 1.789s

I expected C++ to be faster than C, but I didn’t expect the difference to be so small. I expected D to be the same speed as C++, but for some reason it’s faster. I haven’t investigated why, for lack of interest, but maybe it’s because of how the two languages handle strings?

I used the same compiler backend, LLVM, for all 3 so that it wouldn’t be an influence. I also seeded all of them with the same number and used the same random number generator: the awful srand from C’s standard library. It’s terrible, but it’s the only easy way to do it in standard C, and the same function is available from the other two languages. I also only timed the sort, not counting the init code.

The code for all 3 implementations:

// sort.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

typedef struct {
    int i;
    char* s;
} Foo;

double get_time() {
    struct timeval t;
    struct timezone tzp;
    gettimeofday(&t, &tzp);
    return t.tv_sec + t.tv_usec*1e-6;
}

int comp(const void* lhs_, const void* rhs_) {
    const Foo *lhs = (const Foo*)lhs_;
    const Foo *rhs = (const Foo*)rhs_;
    if(lhs->i < rhs->i) return -1;
    if(lhs->i > rhs->i) return 1;
    return strcmp(lhs->s, rhs->s);
}

int main(int argc, char* argv[]) {
    if(argc < 2) {
        fprintf(stderr, "Must pass in number of elements\n");
        return 1;
    }

    srand(1337);
    const int size = atoi(argv[1]);
    Foo* foos = malloc(size * sizeof(Foo));
    for(int i = 0; i < size; ++i) {
        foos[i].i = rand() % size;
        foos[i].s = malloc(100);
        sprintf(foos[i].s, "foo%dfoo", foos[i].i);
    }

    const double start = get_time();
    qsort(foos, size, sizeof(Foo), comp);
    const double end = get_time();
    printf("Sort time: %lf\n", end - start);

    free(foos);
    return 0;
}


// sort.cpp
#include <iostream>
#include <algorithm>
#include <string>
#include <vector>
#include <chrono>
#include <cstring>
#include <cstdlib> // for srand and rand

using namespace std;
using namespace chrono;

struct Foo {
    int i;
    string s;

    bool operator<(const Foo& other) const noexcept {
        if(i < other.i) return true;
        if(i > other.i) return false;
        return s < other.s;
    }

};


template<typename CLOCK, typename START>
static double getElapsedSeconds(CLOCK clock, const START start) {
    //cast to ms first to get fractional amount of seconds
    return duration_cast<milliseconds>(clock.now() - start).count() / 1000.0;
}

int main(int argc, char* argv[]) {
    if(argc < 2) {
        cerr << "Must pass in number of elements" << endl;
        return 1;
    }

    srand(1337);
    const int size = stoi(argv[1]);
    vector<Foo> foos(size);
    for(auto& foo: foos) {
        foo.i = rand() % size;
        foo.s = "foo"s + to_string(foo.i) + "foo"s;
    }

    high_resolution_clock clock;
    const auto start = clock.now();
    sort(foos.begin(), foos.end());
    cout << "Sort time: " << getElapsedSeconds(clock, start) << endl;
}


// sort.d
import std.stdio;
import std.exception;
import std.datetime;
import std.algorithm;
import std.conv;
import core.stdc.stdlib;


struct Foo {
    int i;
    string s;

    int opCmp(ref Foo other) const @safe pure nothrow {
        if(i < other.i) return -1;
        if(i > other.i) return 1;
        return s < other.s
            ? -1
            : (s > other.s ? 1 : 0);
    }
}

void main(string[] args) {
    enforce(args.length > 1, "Must pass in number of elements");
    srand(1337);
    immutable size = args[1].to!int;
    auto foos = new Foo[size];
    foreach(ref foo; foos) {
        foo.i = rand % size;
        foo.s = "foo" ~ foo.i.to!string ~ "foo";
    }

    auto sw = StopWatch();
    sw.start;
    sort(foos);
    sw.stop;
    writeln("Elapsed: ", cast(Duration)sw.peek);
}




Dipping my toes in the property based testing pool

I’ve heard a lot about property-based testing over the last two years but hadn’t really had a chance to try it out. I first heard of it when I was learning Haskell, but at the time I thought regular bog-standard unit testing was a better option: I didn’t want to learn a new language (and a notoriously difficult one at that) and a new way of testing at the same time. I haven’t written anything else in Haskell since, for multiple reasons, but property-based testing has remained something I’ve wanted to try.

I decided that the best way to do it was to implement property-based testing myself, and I’ve started with preliminary support in my unit-threaded library for basic types and arrays thereof. The main issue was knowing how to write the new tests; it wasn’t clear at all, and if you’re in the same situation I highly recommend this talk on the subject. Fortunately, one of the examples in those slides was serialization, and since I wrote a library for that too, I immediately started transitioning some pre-existing unit tests. I have to say that I believe the new tests are much, much better. Here’s one test “on paper” that actually runs 100 random examples for each of the 17 types mentioned in the code, checking that serializing a value and then deserializing it yields the original:

import unit_threaded; // @Types, check
import cerealed;      // Cerealiser, Decerealiser

@Types!(bool, byte, ubyte, short, ushort, int, uint, long, ulong,
        float, double,
        char, wchar, dchar,
        ubyte[], ushort[], int[])
void testEncodeDecodeProperty(T)() {
    check!((T val) {
        auto enc = Cerealiser();
        enc ~= val;
        auto dec = Decerealiser(enc.bytes);
        return dec.value!T == val;
    });
}

I think that’s a pretty good test/SLOC ratio. Now I just have to find more applications for this.