1. Using Vagrant and Puppet with DigitalOcean

    I’ve been moving my personal host to DigitalOcean. I’ve taken the rebuild as a chance to have my own host Puppet-managed as well.

    A standard Puppet setup means having a puppet master, keys and more, but Vagrant provides an easy way to run Puppet without a master, and by using the vagrant-digitalocean plugin, Vagrant can provision DigitalOcean droplets directly.

    The only quirk is that the standard Droplets don’t include Puppet, so it’s necessary to use two Vagrant provisioners: a shell one to install Puppet on first boot, and then the standard Puppet one.
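    Roughly, the two provisioners sit in the Vagrantfile like this. It’s a sketch only: the inline install command and manifest layout are illustrative rather than copied from the skeleton repo, and these lines live inside the Vagrant.configure block.

    # 1. shell provisioner: bootstrap Puppet on first boot (assumes an apt-based image)
    config.vm.provision :shell,
      inline: 'command -v puppet >/dev/null || (apt-get -y update && apt-get -y install puppet)'

    # 2. standard masterless Puppet provisioner
    config.vm.provision :puppet do |puppet|
      puppet.manifests_path = 'manifests'
      puppet.manifest_file  = 'default.pp'
    end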

    I’ve put a skeleton Vagrant/Puppet setup on GitHub. 

    You’ll need to install the DigitalOcean Vagrant plugin

    vagrant plugin install vagrant-digitalocean
    

    and then edit the Vagrantfile. At a minimum you need to set your hostname and API key details; you may also want to change the region and droplet size.
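    For reference, the provider section of the Vagrantfile looks roughly like this. Again a sketch only: the credential option names vary between plugin versions (older releases take a client_id/api_key pair, newer ones a single token), and the image, region and size values are just examples.

    config.vm.hostname = 'myhost.example.com'

    # DigitalOcean-specific settings
    config.vm.provider :digital_ocean do |provider, override|
      override.ssh.private_key_path = '~/.ssh/id_rsa'
      override.vm.box               = 'digital_ocean'

      provider.token  = 'YOUR_API_TOKEN'    # or client_id/api_key on older plugin versions
      provider.image  = 'ubuntu-14-04-x64'
      provider.region = 'lon1'
      provider.size   = '512mb'
    end

    With those set, bring the droplet up with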

    vagrant up --provider=digital_ocean
    

    As an example, the Puppet manifest copies two files into the home directory of the vagrant user and also sets that user’s shell to zsh.
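    Something along these lines, though the file names and source paths here are made up rather than taken from the repo:

    # ensure zsh exists, then make it the vagrant user's login shell
    package { 'zsh':
      ensure => installed,
    }

    user { 'vagrant':
      ensure  => present,
      shell   => '/bin/zsh',
      require => Package['zsh'],
    }

    # drop a couple of dotfiles into the vagrant user's home directory
    file { '/home/vagrant/.zshrc':
      ensure => file,
      owner  => 'vagrant',
      group  => 'vagrant',
      source => '/vagrant/files/zshrc',
    }

    file { '/home/vagrant/.zshenv':
      ensure => file,
      owner  => 'vagrant',
      group  => 'vagrant',
      source => '/vagrant/files/zshenv',
    }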

     
  2. On learning Haskell

    It’s often claimed that geeks should learn a new language every year, and I’m heading towards a year’s worth of learning Haskell, so this seems a good time to take stock.

    Ploughing through old email it looks like I started roughly in October/November last year, and have been plodding away ever since. I have a small child, and this is my hobby/out-of-work project for the year, so this actually equates to a few hours per week on average, with a gap of a few months in the middle. Still, compared to any other language I’ve learned this is a long time indeed to get to a state of bare competence. I’m certainly not claiming to be a Haskell expert (more on that later), but I think I’m safely past the beginner stage and can make comments from a position of at least some knowledge. 

    Read More

     
  3. Moderately private, moderately secure browsing

    It’s a common assertion that you have no privacy on the web.

    You have zero privacy anyway. Get over it

         - Scott McNealy, then Sun CEO

    Most people don’t care. After all, their supermarket loyalty card lets the store track and store all their purchases.

    In the other corner, we have the tin-foil hat world where the Illuminati/Goldman Sachs/Lizards are following your every move. 

    The middle of an argument isn’t always right (evolution is 100% real, creationism is 100% wrong) but here it has relevance. We are tracked: ad cookies, Flash cookies and targeted advertising driven by databases of behaviour do exist and contain a lot of information.

    Coming at it from another direction, ad networks are a common source of malware, and Flash is, to be kind, not the most secure part of the browser toolchain.

    So, what do we do? Well, let’s start by looking at what it takes to be really private or really secure.

    Really private browsing

    How do you do really private browsing? Easy. Always use Tor and browse EVERYTHING in ‘Private Browsing’/’Incognito’ mode.

    Great. Until you get a trojan from downloading a “codec to view this content”. Then all your privacy is subverted at source. Which is why governments like this approach.

    Doing this is also slow. Slow enough that, for most people, the trade-off is too high. Even for those for whom the privacy matters, it’s fragile. One mistake exposes your misdeeds.

    A moderately practical solution is to leave out Tor, but still use Private Browsing mode for everything. This leaves logs at the ISP, but none on the local device. However, both Firefox and Chrome share cookies within a private browsing session, so website A and website B will still see the tracking cookie from ad broker C. This means that to avoid tracking you have to keep starting new sessions, in addition to doing many of the things I discuss below. For my use, the extra privacy is far outweighed by the impracticality.

    This method also doesn’t improve security much. Flash is still enabled, ditto JavaScript, so the attack surface is not reduced.

    So, to get really, really private browsing, we’re going to have to combine this with really secure browsing.

    Really Secure Browsing

    This is possible, but it’s not going to catch on.

    For a good idea of what is needed, and the software needed to make it happen, see the Invisible Things/Qubes blog.

    The basic idea is that we run a large number of virtual machines, and are very, very careful about how data flows between them. Then if one VM gets compromised, no other data is lost. By limiting ‘dangerous’ browsing to an untrusted VM we get security, and by using Tor in every VM where we need privacy, we get privacy as well.

    All in all, it’s not going to catch on for the same reasons we’re not all running Trusted Solaris. It’s too much of a pain for most users. If I was working on Top Secret Nuclear Explosions then this would be the way to go. But I don’t.

    The Middle Way

    So here’s what I actually do. I start from these assumptions:

    1. I’m not trying to hide anything illegal, or worried about hiding from our ISP’s logs
    2. I wish to avoid advertising cookies and targeted advertising as much as possible
    3. For security, I want to control flash
    4. I want security against random ‘drive-by’ attacks, but am not trying to defend against a sophisticated, well resourced attacker such as a nation state security service. 

    So what do I do?

    1. Choose your browser and base OS carefully. I currently use Chrome on Linux
    2. Don’t run Flash at all 
    3. Use extensions to control tracking cookies, advertising etc.

    Most of the security comes from points 1 and 2. Most ‘drive-by’ malware is targeted where it’ll make money. That means Windows, with some recent excursions into OS X. By running Linux I gain a good deal of security for free. I also use OS X for dealing with my digital photos, but by avoiding doing general web browsing within OS X, there’s no extra risk.

    For the browser, Chrome is my choice but Firefox would work just as well. Both of the other major browsers (IE9 and Safari) have reasonable basic security (we’ll pass lightly over IE < 9), but don’t meet my needs as they have no ecosystem of extensions around them.

    Now, to get control over where our data goes, we need a set of browser add-ons.

    First up, the indispensable Adblock (Chrome, Firefox). When used with one of the auto-updating block lists, this will stop most ads from ever being shown. Even better, since the ads are never loaded, a malware-infected ad broker can no longer attack us.

    Next up, Ghostery (Chrome, Firefox). Cookies aren’t the only way of tracking. One-pixel images, beacons and more are used to track visitors across sites. Ghostery, set to auto-update its block list, works in a similar way to Adblock and blocks all of these.

    Now we get into belt-and-braces territory: many ad networks allow users to opt out of tracking by setting a cookie. The problem with this is that any time cookies are cleared, the opt-out has to be set manually, and there are a lot of ad brokers to go and do this for. Keep My Opt Outs (Chrome, Firefox) ensures that these cookies remain set at all times.

    I mentioned that I don’t run Flash. This isn’t always practical. If you need it, a good solution is to use FlashBlock (Chrome, Firefox) to create a whitelist of sites which are allowed to run Flash, blocking it everywhere else.

    Finally, something I don’t do, but which can be worth it for some people. NoScript (Chrome, Firefox) works like FlashBlock but for JavaScript. Using FlashBlock and NoScript along with the extensions above gives a major increase in security, but, for me, the pain is too much. Nearly every website needs JavaScript, so the blocking becomes a pain very fast.

    So is it worth it?

     Like many things, it depends. For me, it’s worth it just to get a nicer web browsing experience with a reasonable increase in security. The lack of tracking is just a bonus.

     
  4. Doing Passwords Right (15:07, 1 Sep 2011)


    tags: passwords


    A student once told me, in all seriousness, that his password of “password” was secure because:

    It’s a double bluff. No-one would believe I’m stupid enough to use that as a password

    Yeah, right.

    The trouble is that passwords are hard. One password is easy, two is OK, but most of us need tens, if not hundreds, of passwords for all our different services. Work password, personal email, Facebook, Google, eBay, three banks, that random quiz site, phishme.com…

    Then each of these sites will have a different password complexity/strength checker, work insists your password is changed every 30 days and on it goes.

    In attempting to deal with this, most people work their way down this sequence:

    1. I’ve thought of a good password. It’s “fred”.
    2. Oh dear. It’s rejected as too short. Let’s try “fredfred”
    3. Now it needs numbers. Try “fr3dfr3d”.
    4. Accepted

    And we’re all good to go until….

    1. The next site comes along. Now we have to have a special character as well. So let’s use ‘fr3dfr3d!’. 
    2. Now sign up to internet banking. Best use a different password. Ok. ‘G30rge!’. Done
    3. Now what about the credit card? ebills? Oh dear.

    The only way to deal with this and keep everything in a human brain is to have two or three basic passwords (say one for banking, one for login and one for other websites) and reuse them everywhere, with random variations to deal with different sites’ password policies. This way madness lies. The small variations cause endless problems, and the sharing of passwords across sites means that one compromised site is a disaster.

    The solution: write your passwords down. As prohibited in every security policy ever.

    Use the paper, Luke

    By ‘write it down’ I don’t suggest you physically write it down in any way, but rather that you stop trying to remember passwords and use a password manager to store them.

    Password management (or “Password Safe”) software encrypts away all your passwords with one master password, so that you now only have one password to remember but your passwords are still safe from prying eyes. Unlike the ‘post-it-note under the keyboard’ approach.

    Once you stop trying to remember passwords, all sorts of good things happen:

    1. You can (and should) have a unique password for every single site or application. Even the silly ‘joke’ websites. Everything
    2. You can stop trying to think up passwords. Just let the password manager generate a random one for you. It’ll be impossible to remember (e.g. mine has just generated ‘eRxz%b3gtV’ for me) but it doesn’t matter. You never need to remember it

    And that’s it. Now you can have complex, unique passwords everywhere and also have less stuff cluttering up your brain and making you stupid. What’s not to like? Just do it.

    The Details

    That’s the basic principle, but like everything, the details matter.

    1. You’ll still have to remember your login password and a master password for your password safe. That’s only two passwords. Not so bad
    2. You may also want to remember your internet banking passwords.
    3. Remember that the strength of this whole system depends on the strength of the master password you set for your password safe. Since you don’t have to type it very often I suggest just going for a very long phrase (30-50 characters).
    4. Don’t use any random piece of software. Writing secure cryptographic products is hard, and you want to be very sure that, if you are putting all your passwords in one place, you haven’t just made it easy for them all to be stolen at once.

    Finally, most of us use many different computers over the course of a day, so we need these passwords everywhere. There are two basic approaches:

    1. Let the software itself store the data in the cloud, or,
    2. Store the encrypted file on a sync service like Dropbox

    Either works. You’ll also want to have a copy of the program and your passwords on your phone for those times when you want to log in to a site from a different computer, internet café, etc.

    All of the recommended products below can be integrated into your web browser as well to allow for seamless logins to everything web-based (which is going to be 95% of everything for most of us).

    Recommended products

    • KeePass: works on Windows and Linux. Supposed to work on OS X, but I gave up waiting for Mono to install. Also has versions for most phone OSs. Open source.
    • LastPass: nice, but costs to use on mobile. 
    • 1Password: those that use it love it. Works on Windows, OS X, iOS and Android. Costs.

    Edit: thanks to Max Spicer for prompting me to get off my arse, change my password management and actually write this up :) 

       
    • Sophos

      I’ve finally read the details in Tavis Ormandy’s Sophail report. Oh dear.

      Sophos’ response is a classic.


      Tavis has questioned the performance of Sophos buffer overflow protection and made other statements questioning the quality of Sophos protection. Naturally Sophos is committed to continually improving performance and protection and regularly participates in independent third party tests. In fact, we consistently rank well in these tests.

      Or, to translate:

      We’re not going to comment on the details as they are too embarrassing and we don’t even come top compared to other a/v products

      I highly recommend the full report. It’s a little less dry than the average security paper. e.g.

      This guarantees that any attacker will simply give up writing their ret2libc payload, as they will be unable to concentrate due to uncontrollable laughter

      Other gems include the packer protection being so out of date that it was hard to find an old enough version of the packer to test it, and the pre-execution analysis that hard-codes constants so it only really works on Windows Server 2003 SP1.

      So what’s a defender to do? We already knew a targeted attack was likely to succeed. Sophos just makes it easier by allowing direct exploitation of the out-of-date embedded JavaScript engine.

      Are other a/v engines better than Sophos? If so, which? And how could the average (enterprise) purchaser do a serious evaluation?

       
    • "Wicked Problems"

      Over on Charlie Stross’ blog guest author Karl Schroeder introduces the concept of “wicked problems”. I recommend spending the time to read the whole article and the links in its first paragraph.

      It’s not a concept I’d come across before:

      But often, in the human sphere, there are what’re called “wicked” problems. In 1973, Horst Rittel and Melvin Webber defined a wicked problem this way:
      1. There is no definitive formulation of a wicked problem (defining wicked problems is itself a wicked problem).
      2. Wicked problems have no stopping rule.
      3. Solutions to wicked problems are not true-or-false, but better or worse.
      4. There is no immediate and no ultimate test of a solution to a wicked problem.
      5. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial and error, every attempt counts significantly.
      6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
      7. Every wicked problem is essentially unique.
      8. Every wicked problem can be considered to be a symptom of another problem.
      9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.
      10. The social planner who tackles a wicked problem has no right to be wrong (planners are liable for the consequences of the actions they generate).

      The examples given are the obvious ones: fiscal policy, climate change etc, but it’s also a useful insight to bring to security problems. We can divide security issues into two groups (if not cleanly, then in a way that gains insight):

      1. Non-wicked problems: does this patch crash my server? Does this exploit work? What’s the patch level of my server estate?
      2. Wicked problems: how should we trade off privacy online for physical security?

      Between these two, there’s a set of semi-wicked problems where many of the day-to-day difficulties in security policy come from, e.g.

      1. If we lock down all our client machines really hard, is that worth the trade-off in innovation?

      Problems in this class might not fit all the requirements above, but will fit many of them. E.g. 3, 9 and 10 seem very relevant here: often the person writing the security policy has no motivation other than to be as restrictive as possible, while the person doing the work wants to do the least possible.

      A good counter when confronted with the more technological end of things.

       
    • Installing Big Apps on Galaxy S Froyo

      Just in case anyone else gets this issue: the Samsung Galaxy S with Froyo can’t download apps bigger than 30MB from the Market, as /cache is only 30MB.

      Here’s the fix: get z4root, root the phone, then use z4mod to change the filesystem type of /data from rfs to ext2 (aka the Lag Fix). You’ll want to do this anyway if you haven’t already lag-fixed the phone.

      Then in a terminal window:

      mkdir /data/cache              # new, larger cache directory on /data
      umount /cache                  # unmount the small /cache partition
      mount -o rw,remount /          # make the root filesystem writable
      rmdir /cache                   # remove the now-empty mount point
      ln -s /data/cache /cache       # make /cache a symlink to the roomier /data/cache
      

      Now install away from the Market.

      To Undo

      rm /cache            # remove the symlink
      mkdir /cache         # recreate the mount point
      chmod 770 /cache     # restore permissions on the mount point
      

      and reboot.

      Important: this fix will not persist across reboots and you will want to revert it before rebooting. Once the app is installed, it’ll run fine with /cache set back to normal.

       
    • So, this here wedding thingy

      There was a wedding last week and it seems that lots and lots and lots of our users wanted to watch it…..

      The graph below shows streaming video traffic for the last week. The time scale is a little confusing, but the low point of the traffic corresponds to the small hours of the morning.

      Remember, this is on a 1Gb link. Since there is other traffic on the link, it’s fair to say that we’d have generated even more streaming traffic with a bigger link.

       
    • Links for 2011-03-21

       
    • Debian/Ubuntu two factor auth with Google

      Following the excellent guide from MNX Solutions, I’ve got two-factor auth working on my desktop.

      There are a couple of things I thought worth noting that aren’t mentioned there.

      1) You’ll need the PAM headers installed, and they aren’t by default:

      $ sudo apt-get install libpam0g-dev
      

      Then follow the instructions as given.

      2) When you edit /etc/ssh/sshd_config you’ll need to set

      RSAAuthentication no
      PubkeyAuthentication no
      

      to disable pub-key auth (at least for testing), since that will be tried before ChallengeResponse. For production use, enabling pub-key with a fallback to ChallengeResponse might be ideal.
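      For that production setup, the relevant sshd_config lines would look something like this (assuming the pam_google_authenticator line from the guide is already in /etc/pam.d/sshd). It’s only a sketch, so check it against your own config, and restart sshd afterwards.

      # keep PAM, and with it the verification-code prompt, enabled
      UsePAM yes
      ChallengeResponseAuthentication yes

      # allow key-based logins again; clients without a key fall back
      # to the challenge-response (verification code) prompt
      RSAAuthentication yes
      PubkeyAuthentication yes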

       