Ben’s mumbling

You can prove anything with facts.

Mallory Isn’t the Only Imposter in Infosec

So a tweet by Dr. Jessica Barker about imposter syndrome in infosec got me replying about how I’ve had imposter syndrome a bunch. Perhaps no more and no less than anyone else, we’ll see, but I said I’d write it up as tweets were not ideal for it. (Sadly, for those following the conversation on Twitter, I have had exactly zero wine, so this may make sense, but not be as vitriolic as one hoped.)

It’s a long one, nearly 2500 words, and I’m sure it rambles, but this is a 9 hour flight with no wifi.

Comments are foolishly turned on. I look forward to the flames and doxing on IRC. |:

Who’s this clown?

To reference my favourite joke, I’ll give some background on who I am and how I got here. Perhaps more than I should. Humour me.

Back in the 90s, I somehow convinced my parents to eventually get a modem. My father brought this giant hunk of I think Hayes? 2400 baud metal device home. My mind was blown, computers talking to other computers over phone lines? What sorcery is this? From there, I fell in to the BBS scene. By way of fell, I mean, I would drag my Amiga 1200 to my parents room, by the phone socket, plug in my modem, plug my computer in to the amazing portable “boom box” that also had a 4inch black and white television in it, and dial up BBSes. As this was the only way I could get near a phone line. Ideal. Somehow from this, I am now here.

From that, as we all did, I got a faster modem (never a USR Courier, sadly, just a Sportster), and eventually fell in to the FidoNet scene. I cosysoped some other BBSes. In absolutely no way helped with copyright infringement, while running up huge phone bills. Ahem.

At one time, a friend I’d met on my not-so-local BBS said let’s meet up. After some convincing of parents (I was a kid), I got on some trains and went to hang out with my friend. (I promise you this gets going.) I found he had slightly more risqué interests than me in the world of modems and computers. So he dragged me along to a 2600 meeting in London, at the bottom of those elevators, then to Webshack or McDonalds, as was seemingly the tradition.

This is how I am now apparently a computer security engineer.

Utterly ridiculous.

From there we fall in to an amusing world of opening up BT phone cabinets for phone fun, hanging out with the silliness of people from IRC as they hooked their radio scanner up to their soundcard and decoded pager messages over the air (encryption? what’s encryption?).

I stopped using my Amiga, and moved on to Linux (a 1.x.x kernel was the first thing I used, I am that old), then OpenBSD. Got a girlfriend, called each other names from Hackers and was generally an idiot who’s very glad that bug bounties didn’t exist then, as my god, I would have submitted some embarrassing things.

I am not, as one would say, leet.

I can’t reverse stuff, I’ve not written exploits in C, I don’t think I ever finished Smashing the Stack for fun and profit, I am not some badass 0-day hoarding hacker.

But, having spent too long trying to secure Linux boxes from friends on IRC, and, let’s just say, now and then having a shell on some machines that weren’t strictly mine (as in, at all), I was able to get a very low-level job at a local ISP doing Unix support, after failing A-levels in pretty much everything, including computing, due to not handing in coursework and wanting to write it in C. I dropped out of education (twice) and got a real job.

blah blah blah, linkedin etc.

Since then I’ve been somewhere between the three points of systems engineer (sysadmin, operations person, whatevs), network engineer (Firewalls, IOS, VPNs, IDS, BGP) and security engineer.

Why are you telling me all of this Ben, I am bored.

In 2011, I was working at Puppet née Puppet Labs as an operations engineer in infrequently sunny Portland. A friend came to visit from far less sunny London and told me he was reading this book Kingpin, and how it was cool and about hacking and stuff. I rushed out and bought a copy as it was very relevant to my past. The main person in it, as it’s non-fiction, used to run the arachNIDS Snort rule mailing list, which, from all my IDS work, I was well versed with. I had a real connection (ish) to this person. Reading it just blew my mind, it remains one of my favourite books, and it prompted me to look around for a security job again. I found Etsy were looking for security people, and somehow, they foolishly hired me.

So I started at Etsy in 2013. In my first team meeting, everyone on the team, about half a dozen people, said what they were working on. It was all Javascript this, big project that, something I’d never heard of this.

I WAS TERRIFIED.

I’d never used half of these things, I was completely convinced that if I didn’t learn Javascript within the next two hours I was sure to be fired. (I still do not know Javascript.)

Back then, the Etsy security team, at least in my half (infrastructure security), was the likes of Zane Lackey, who has now started one of the most exciting security startups, Signal Sciences, and Mike Arpaia, who later went on to join some startup called Facebook? and create and lead the amazing osquery project!

And then there’s me.

I literally spent at least the first year in that role thinking I was going to be discovered and fired for being a fraud. I was not ex-iSec. I had not presented at BlackHat. I did not know anyone in the industry.

It got to the point where I was starting to job hunt, because I knew it would be found out that they’d made a mistake by hiring me. I couldn’t find 0day, I couldn’t write an entire huge revolutionary intrusion detection tool. I’m pretty good with sed, but I’m not the best.

This led to me becoming quite gloomy and down with the world. Living under a self-imposed Sword of Damocles takes a lot out of you, emotionally and mentally.

We hired more people. More recovering consultants from the good ship Intrepidus. I was sure the ruse would be up at any moment.

I have now been at Etsy nearly three and a half years. Either I’m really good at faking it, or I was wrong.

Why is infosec the worst?

I’ve had imposter syndrome before, in pretty much every job to some degree, but it’s never been as bad or as long-lasting as it has been in infosec. Why is that? Well, I think there are a few reasons.

  • Infosec is very ego based.
  • Infosec is purely attack/defense driven.
  • There is someone actively attacking you, very often for money.
  • Governments are now doing it in new and interesting ways, thus scarier.
  • It’s becoming militarised, “Cyber” is a thing, so guns and death and more ego.

Infosec and ego

In no other part of technology have I experienced anything so combative. Developers’ main battle is against caffeine addiction, Jenkins or the compiler. For operations people, it’s the pager and the hard drive gods. For network engineers, it’s well-meaning farmers digging fields through their fibre lines.

Infosec has an attack-versus-defense culture baked into it from its very core. Everything from the language (exploit, owning) to the borrowing of far too much military terminology (blue team/red team, capture the flag, kill chain).

Hackathons and first to market aside, no other part of technology pits you so directly against other people and systems. It’s “I couldn’t get my code to compile” not “I got completely owned by so and so”.

And ego comes into this. In the hiring and rockstar mind games of large events like BlackHat or Pwn2Own, it pays to come out fighting. It’s an 80-billion-dollar industry; the entirety of RSA Con is about how vulnerable and defenseless you are, unless you buy this expensive product from us.

History

Back in the 90s (and probably 80s, see Textfiles), when ’zines were the coolest thing, they had entire sections doxing people, dropping entire mail spools, IRC logs of shit talking each other. It’s like what I’m led to believe a college locker room is like.

Gobbles is perhaps the finest example. Much loved, especially by me, in the early 2000s, with an amazing arsenal of skills and exploits, he stormed through target after target, dropping far too private information about whomever to the full-disclosure mailing list. Later, at DefCon 10, there’s this timeless video of Gobbles and the Unix Terrorist calling people out for not being leet enough, etc.

Conferences

Speaking of DefCon, which is sadly a bastion of the industry: its change from being a bunch of hackers getting together in Vegas (in the off season, so it’s cheaper, and even warmer) to something resembling a professional security conference has added to this. The “all first time speakers have to do a shot on stage” ritual from the Goons, because if you’re not drunk you’re not a real man (I say this as an occasional borderline-functioning alcoholic), highlights the maturity of the intended audience.

A lot has been written and said, though not enough, about the mistreatment of women at conferences, especially DefCon. I cannot help but feel some of this must come from the sense of entitlement that having money, the ability to hack in to a bunch of stuff, and zero empathy would create.

As an industry, we should feel ashamed of this more than anything.

Entitlement

Back in 2013, BsidesSF was held at The DNA Lounge. At the end of the last day, there was a roast. A fine American tradition, where everyone gets on stage and insults the person in question. Seemingly ill-spirited in nature, but it’s all consenting and harmless fun. However, this roast wasn’t against a person, it was against the infosec industry…

…But it wasn’t. It turned very quickly into a lot of highly paid men, who work or worked as consultants and pen testers, complaining about report writing, “dumb users”, and finding the same old bug again. Literally roasting absolutely everyone and everything bar the actual infosec industry.

This was the first time I walked out of a conference.

Smart assholes

The developer world, and more slowly the operations world, has been moving away from the “10x crush it brah” super engineer with zero empathy and social skills that no one wants to work with, towards realising that teams of good people are actually better. The whole DevOps movement and things like that have really pushed that agenda forward, and it is only bringing good things.

Security is riddled with incredibly smart assholes. And for some reason, this is taken as fine.

I recall once, being with some members of my team, visiting another company. Two very senior members of said company were debating a topic very strongly and passionately, neither accepting each other’s point. This culminated in one of them saying they knew more, because they were on the board of a conference that specialised in it, the other saying they knew more as they wrote their dissertation on the subject. We all left vowing to never hire that kind of engineer at Etsy, because that would instantly destroy the kind of culture that Zane and Rich have built.

Don’t hire assholes, from Rich Smith’s Kiwicon 8 talk

Why is all this bad?

Is this bad? Well, I clearly think so or I wouldn’t have written so much on it.

The security industry as a whole has a major skills shortage. This only perpetuates the problem, as those capable are able to demand “rockstar” salaries and walk around like they own the place.

Unless we bring more people into this industry, both from an early age and from other parts of the industry, and retain them, we are not going to become more secure or solve problems. We are going to tread water and still be fixing CGI-BIN problems in 2020.

The tired arguments of “users are dumb” and “I’m smarter than them” make this gap larger. The people who spend so long finding and writing exploits for these things, while saying that whoever wrote them is stupid, are a large source of irritation to me.

Most hackers could not write a web browser or a web app from scratch, and would not attend the committee meetings to get a protocol to happen in the first place. Yet because, through some hard work and intelligence, they find a bug that in some cases pops a shell, they’re cleverer than the person who made it in the first place? That isn’t how reality works. It’s just, sadly, how infosec works.

Getting back to impostor syndrome

To wrap up. I think that the attacker/defender dynamic, the way “attackers always win” and “defence needs to stop 100%, attack just needs to get in one thing”, and the ridiculous sums of money thrown around, lead to there being a lot of fear in infosec. Some, I’m sure, will argue that this is where the rush comes in, of being the best. Beating everyone, and I’m sure there is a lot of that. That’s definitely the thrill in seeing your exploit work and that uid=0(root) gid=0(wheel) groups=0(wheel) appearing in your shell. But, I argue, if we as defenders, as an industry, hell, as people paid to do a job, want to actually make things better, we need to check our egos a lot more, stop waving our cocks about, and work together like we’re the insanely highly paid man-children (and a handful of woman-children) that we are.

Or maybe I’m just bitter because I’m not very good.

A Dockery of a Sham

(This is a bit ragey. I’m not gonna link to anything/anyone, and I wasn’t even there, but this attracted my ire)

Tired of people hating on @Docker giving out @yubico Yubikeys at their conf. Yes, it’s trusting USB devices at a conference, which us jaded security types are all “this is dumb and terrible, noobs” about, but really. This is a company working with the maker of the USB security device to give them to people at THEIR CONFERENCE, to tie in with a security feature they’re adding to a product THEY MAKE, which everyone has been complaining about not having enough security. What do you want, security industry? To cry down every attempt to make things better because it’s not perfect?

To sit on your high thrones of perfection, about what you would or wouldn’t do in your plain-text email and noscript browser, or to help the masses get better?

Yubikeys make things better. 2FA makes things better. They’re not 100% but then, nothing in this life is.

Okay, let’s go full security joy here. What’s the threat model? Docker are trojaning every attendee’s machine to do what? Encourage them to use their product? Hope to pivot to an investor’s laptop and then get another round of funding from it?

Please applaud Docker’s efforts, not mock them.

Logstash and Vim

After my mature hissy fit on twitter.

Is there a grok pattern for logstash configuration files so they don’t look like perl? - @benjammingh

Eventually, once I picked my toys up from the floor (and apologised to @jordansissel), I figured I could do something about this.

So I found logstash.vim which does syntax highlighting for you. Ace, that works.

However, looking at the ftdetect for it, it relies on the filename ending in .conf and the first 10 lines of the file containing any of the main types of functions in logstash.

Which is a cute hack, but that doesn’t help me that much, as at Etsy we use this Chef thing, and a lot of our ELK configuration is in templates. These templates don’t end with .conf, they end with .erb, because they’re Ruby templates.

So I made this ugly hack, for ~/.vimrc:

" Use this with https://github.com/robbles/logstash.vim
function! Logstashft()
    let logstash_regexps = ['/Users/ben/src/chef/cookbooks/logstash/templates/default/.*logstash_.*.erb', '/etc/logstash.*']
    let myfilename = fnameescape(expand('%:p'))

    " If the file's full path matches any of the regexps, treat it as logstash.
    for reggy in logstash_regexps
        if myfilename =~# reggy
            setlocal filetype=logstash
            break
        endif
    endfor
endfunction

" Same as the hack below, but for logstash files.
autocmd BufRead,BufNewFile * call Logstashft()

You statically define some regexps, and if the file matches them, it sets the filetype to be logstash. Now I can edit the templates and vim will do its best to make them colourful.

YAY!

Eh Vim

Someone recently asked about what vim plugins to use. As I’m somewhat against putting everything on github I figured I’d go the more painstaking path of blogging about them.

Vundle

  " let Vundle manage Vundle
  " required!
  Bundle 'gmarik/vundle'

Vundle is a plugin manager. It’s this or Pathogen, but Vundle will download the things for you from GitHub, so it just seemed easier. Just use one of them.

Airline

I used to use powerline to modify the status bar. Airline is the rewrite of it, in pure Vimscript, and fast.

It’s worth getting the powerline fonts for whatever console font you’re using. powerline/fonts. I’m currently trying out Mensch 2.0 and there’s this gist, which has a powerline version.

Tabs

If you have to code, you’ll need tabular which aligns text, correctly.

Tim Pope

No collection of Vim plugins is complete without some tpope goodness.

  Bundle 'tpope/vim-fugitive.git'
  Bundle 'tpope/vim-git.git'
  Bundle 'tpope/vim-haml.git'
  Bundle 'tpope/vim-markdown.git'
  Bundle 'tpope/vim-rbenv.git'
  Bundle 'tpope/vim-surround'
  Bundle 'tpope/vim-endwise'
"  Bundle 'tpope/vim-commentary.git'

The main ones in this are vim-fugitive for handling git functions (:Gblame and :Gbrowse being the most useful) in the current buffer and vim-surround for making changing things inside quotes or brackets easier.

Others

  • tcomment for toggling comment blocks in code.
  • matchit to make % matching more powerful.
  • YankRing Better handling of yanked text.
  • ack.vim, configured to use ag, to quickly find things in filepaths.
  • syntastic syntax checks 100s of filetypes on load/save. Fiddly at times, but really worth it.
  • ctrlp file finder. Yeah, opinions are split about this versus command-t or fuzzyfinder.
  • rainbow rainbows up the parentheses, so functionA(functionB(functionC())) doesn’t look impossible.
  • supertab use Tab rather than ctrl-o or ctrl-i, I can never remember which.
  • dash look up documentation in the excellent Dash.
  • vim-deckset I’ve started using Deckset for making presentations.

I use a bunch more, but looking over it all, I haven’t a clue what half of them do.

Themes

  " Themes
  Bundle 'altercation/vim-colors-solarized'
  Bundle 'Lokaltog/vim-distinguished'
  Bundle 'tpope/vim-vividchalk'
  " Bundle 'tomasr/molokai'
  Bundle 'fatih/molokai'
  Bundle 'nanotech/jellybeans.vim.git'
  Bundle 'alem0lars/vim-colorscheme-darcula'

  Bundle 'chriskempson/tomorrow-theme', {'rtp': 'vim/'}

  " new as of 2015
  Plugin 'chriskempson/base16-vim'
  Plugin 'freeo/vim-kalisi'
  Plugin 'barn/Pychimp-vim.git'

For Python I use a fork of pychimp and “au FileType python colorscheme pychimp”.

But I’m never sure about them.

Pining for GPG to Try

So, anyone who’s had to suffer through one of my talks (and let’s be honest, we can’t get enough of them, can we?) will have heard me talk lovingly about Yubikeys. They’re hardware tokens that can store cryptographic secrets for HOTP and a bunch of other stuff.

Why do we care? Well the expensive ones, the Neos and the Neo-N can act as CCID smart cards, which means you can run the gauntlet of generating GPG keys on them. Cool GPG, whoo &c. I know, I know. I use GPG all the time, but let’s be honest @Moxie is right when he says it’s a philosophical dead end, buuuuuuuuuut we can do something useful with it.

Because Yubikeys support generating RSA keys, and one of the key capabilities in GPG is an “Authentication key”, we can generate a key on the Yubikey and use it for SSH! The joy of this is that the private key you generate on the device cannot be extracted (well, you know, easily). So you can be fairly sure that someone authenticating with this key actually has the token, rather than having just copied ~/.ssh/id_rsa off somewhere. Now this isn’t a discussion on how to generate those keys, as there’s a tonne of pages already on that subject.

But anyway, you should do that. It limits your attack surface for SSH to, just, you know, discovering SSH_AUTH_SOCK, or going after ControlMaster sockets, or trojaning the ssh client… okay, there’s still lots.

So what the hell is this about?

Automating PINs in gpg-agent

When using ‘gpg –card-edit’ to generate a key, the gpg-agent pinentry will prompt you for the PIN on the device. You only get 3 attempts at that, before you have to enter the admin PIN to unlock it. This is all well and good, but when you want to mass generate keys, for say, your entire organisation, sitting there entering PINs is a pain. So I wanted to do that in Python.

I tried numerous gpg2 options, such as --batch --passphrase-fd X, but that will only work for the first PIN. Which is not ideal if you want to change them.

So in the end, I started down the path of making my own pinentry script to send the PINs that I specify, just when they’re being generated.

You’d think this was easy to do.

I fell down a rabbit hole of looking at the Assuan protocol, which is how gpg-agent and pinentry communicate. Then, through some violent Googling and kind assistance from @antifuchs, I found you can just fake the pinentry end somewhat and specify what you want. Finding pinentry-emacs was a huge help here.

I tried making a script to send back a PIN each time and change it during the running of the script, but it executes a new invocation of the script every time. I also tried setting it via environment variables, but they’re not exposed in the right place, because it’s called via gpg-agent. Madness.

In the end, I went via a horrible horrible hack of having a state file, which is updated as the script runs. Check out pinentry-hax.
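The reading side of that state file is trivial; here’s a rough sketch of the parsing idea (a hypothetical parser of the same key=value format, not the actual pinentry-hax code):

```python
def read_pin_state(ipc_file):
    """Parse the key=value state file that the generating script writes."""
    state = {}
    with open(ipc_file) as f:
        for line in f:
            key, _, value = line.strip().partition('=')
            state[key] = value
    return state
```

Presumably a round counter like that is what lets the fake pinentry know which PIN it should be replying with on a given prompt.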

Then tell gpg-agent to use that pinentry, and tell the agent to reload its config.

echo "pinentry-program pinentry-hax" >~/.gnupg/gpg-agent.conf && echo RELOADAGENT | gpg-connect-agent

Now you have to have something actually writing out the “$TMP/.some_pretend_ipc” file, with the right things in it. Here’s the extract from the Python script I’m using

import os

def do_a_pin(ipc_file, oldpin, newpin):
    """
    for changing pins, or actually for generating keys (where
    oldpin and newpin are actually pin and adminpin)
    """

    # Yup, this is happening. A FILE BASED IPC METHOD.
    if os.path.exists(ipc_file):
        os.unlink(ipc_file)  # safety delete.

    old_umask = os.umask(0o077)  # safety umask!
    try:
        with open(ipc_file, 'w') as f:
            f.write('round=0\n')
            f.write("oldpin=%s\n" % oldpin)
            f.write("newpin=%s\n" % newpin)
    finally:
        os.umask(old_umask)

    return True

Then, when you call gpg, and it asks for the password, you can specify them with that function and then pinentry will supply them to GPG when asked. Why is this useful? Because then when you send commands to gpg, you can generate the thing as a whole.

import os
import shlex
from subprocess import Popen, PIPE

pin = 1234
adminpin = 12345
do_a_pin('/tmp/.some_pretend_ipc', pin, adminpin)

in_fd, out_fd = os.pipe()
cmd = 'gpg2 --command-fd {fd} --card-edit'.format(fd=in_fd)

with Popen(shlex.split(cmd), pass_fds=[in_fd], stdin=PIPE) as p:
    os.close(in_fd)  # unused in the parent

    with open(out_fd, 'w', encoding='utf-8') as command_fd:
        command_fd.write("""commands to send to GPG, that ask for a PIN""")

The result is you can, from a python script, with some terrible hacking, automate generating RSA keys on Yubikeys, without having to sit there running every command and typing in every PIN.

I hope this is useful for someone. If there’s a more elegant solution, I’d realllllllly love to hear it. Drop me an email at blog@thedomainofthisblog, or shout at me on the Twitters at @benjammingh.

Enabling Click to Play

Click to what?

Click to play: it’s a way of forcing plugins in your web browser to gain consent from you before running. Why is this important? Well, malicious web pages can do a lot with them. Adobe Flash Player, the main one I’m about to talk about, has had a very troubled security history and is generally regarded as one of the most vulnerable bits of software out there. Then why do we want it at all? Well, Flash Player is used for a lot of streaming websites and for all kinds of media and music playing. Though if you’re using Chrome, not the majority of YouTube. YouTube now uses HTML5 video for anything that it can convert to the right format. See YouTube’s HTML5 page.

So rather than just disabling it outright, or hoping that you won’t accidentally land on a page that’s trying to exploit a flaw in the software somehow, there’s a happier middle ground that trades a tiny bit of slickness of use for a lot of security. Click to play works by requiring you to right click on a Flash element and press “Play”; that’s it. Then it loads that individual plugin and everything continues exactly as normal. All this means is that sites cannot surprise you with Flash elements.

How vulnerable is Flash? Just how dangerous is it? Well, it’s in the top 25 for “most vulnerabilities ever”. It’s heavily used for watering hole attacks and spear phishing. It and Java are the two most vulnerable and susceptible elements of modern web browsing, with the browser itself a distant third!

Click to play in action!

This is how Adobe’s Flash Player page looks with click to play enabled.

Then you just right click on it, go to “Run this plug-in”

And boom, there’s your Flash content.

So how do I get this?

To enable this on Chrome, you just need to change the content settings, which thankfully Chrome makes pretty easy.

First just go to Preferences (or press ⌘ +,) to get to the settings page.


Then type in “click to play” in the settings search. Chrome should point you towards where it’s found that setting. Click on “Content settings…“


Scroll down until you get to “Plug-ins”


Then for “Plug-ins”, select “Click to play”.


Then press “Done”.

Now you should be peachy!

Exceptions

Yeah, exceptions happen, but they’re lovely and easy to do too. Say for Spotify, you need to add “play.spotify.com” as it loads a hidden Flash element.

To do that you just go back to “Content settings” as above, then go to “Manage exceptions…”

Add in ‘play.spotify.com’ and press return. Change Behaviour to “Allow”.

Then just hit done (and then done again), and you are… done!

Exceptions are so rare, in my experience, that you shouldn’t have to do this. (and use the native Spotify client as it’s better…).

Testing!

I’d suggest going to This classic song to test if YouTube is still working for you.

Then head to Adobe’s version page to test if it’s working, it should look like this, with each of the grey elements which you can now click to play!

Where can I learn more?

1980s Exfil With Zmodem

A common way of getting tools on to a machine, or exfilling data, is to encode it in some way and paste it in or out, something like xxd or base64, so you don’t have to open up yet another channel in or out. A wget outbound or an scp in/outwards would run the risk of triggering more IDS.
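The encode-and-paste trick is easy to sketch. Here’s a hypothetical Python version that base64-encodes a file into terminal-paste-sized lines (the 76-column width is arbitrary, and the function names are made up for the example):

```python
import base64

def chunk_for_paste(path, width=76):
    """Base64-encode a file and split it into paste-friendly lines."""
    with open(path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('ascii')
    return [encoded[i:i + width] for i in range(0, len(encoded), width)]

def reassemble(lines):
    """Reverse the process on the far end of the paste."""
    return base64.b64decode(''.join(lines))
```

On a Unix box you’d just use base64(1) itself, of course; the point is that the only channel needed is the terminal you already have.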

So, from seeing @phreakocious be a network engineer and do cool things with zmodem, it got me thinking: I would totally exfil via zmodem (as it’s way better than xmodem and kermit!). There’s a bunch of people already doing this in iTerm2 on OSX, so it’s useful in general, but I’ve never heard of it being used for this. No, no one say SCADA because it uses modems.

Because it’s your terminal parsing it, you don’t need to worry too much about the quality of your shell; netcat is adequate.

The saving grace here is the fact that ‘rz’ and ‘sz’ are never used any more, because it’s 2014. So them being run, if they’re even installed, is a strong signal something is up. But a fun hack.

“If they think you’re crude, go technical; if they think you’re technical, go crude. I’m a very technical boy.”

JSON and the Arguments

What’s the beef with JSON?

So recently, I may have been shouting about JSON and how you shouldn’t use it for configuration files and using the right language tool for the job. So what’s my big problem with JSON? Well actually I don’t have a problem with JSON. JSON has freed us mostly from XML, and the horrors of SOAP.

If you’re writing a webapp and rocking some REST, then JSON is great (though msgpack is better if you care about performance). So why am I shouting? Well, it’s because HUMANS SHOULD NOT BE WRITING JSON, so please stop using it for configuration files!

I don’t want to pick on projects I like (sorry @jordansissel, I love your work, you just document this fact), but Lumberjack, for example, even points this out in their documentation example. You can’t put comments in it, and I can’t help but feel that’s hostile towards the consumers of the software. You know people want comments, it only makes sense with comments, but JSON. Imagine writing code without comments (no no, good code, code you’d share); I don’t see configuration files as all that different.

Chef does the same; why are there so many .json files? To make it easier for the developers of Chef? To make it easier for the chef-server? At whose expense? Oh, just the majority of people. YAML I kinda disliked until I was thrust into using a marshalling language as if it was configuration data.

A tonne of people suggested running it through a pre-processor. I say to you, “Those who forget the past are doomed to repeat it.”

Also, in the world of 2014, we have this thing called configuration management now. Have you tried templating JSON? For sure, you can just JSON.pretty_generate() in to the file, should you need to, but you don’t make any other config files that way.
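For the record, the Python equivalent of that approach looks something like this sketch: generate the whole file from data, rather than treating JSON as text to template (the config structure is invented for the example):

```python
import json

# A config structure you'd otherwise be hand-templating.
config = {
    'array': ['one', 'two', 'three'],
    'port': 5000,
}

# Serialise the lot in one go; no trailing-comma landmines possible.
rendered = json.dumps(config, indent=4, sort_keys=True)
```

Which works, but as I say, you don’t make any other config file format that way.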

The hostility of the following:

{
    "array": [
        "one",
        "two",
        "three",
    ]
}

That’s fine, right? Oh no, there’s a trailing comma, so the world blows up.
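You can watch the world blow up from Python’s stdlib, whose json parser is as strict as every other:

```python
import json

strict = '{"array": ["one", "two", "three"]}'
sloppy = '{"array": ["one", "two", "three",]}'  # one trailing comma

json.loads(strict)  # parses happily

try:
    json.loads(sloppy)
except ValueError as e:
    # One stray comma and the whole file is now unreadable.
    print('boom:', e)
```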

Replacing it

Many people have since pointed me at HOCON, which seems like a lovely compromise, but people need to actually use it, and I still don’t think it’s the right thing to do.

I voiced my support for .ini, yes that ini from Windows… I know, it’s crazy talk. I don’t think it’s perfect, but the benefits of it are:

  • Python stdlibs supports it
  • It’s simple to read for a human or a fangled adding machine.
  • You can template that in anything, even bash.
  • You can grep it!
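That stdlib support looks like this, a quick configparser sketch (the section and key names are invented for the example), comments and all:

```python
import configparser

ini_text = """
; comments! in a configuration file! imagine that.
[logstash]
input = /var/log/syslog
workers = 4
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

path = config['logstash']['input']
workers = config.getint('logstash', 'workers')
```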

I don’t think it’s perfect, and I’d like to hear other options. I don’t think Apache’s file format is great, nginx’s is a little better but not great.

Please, flame me in the comments, or shout at me on Twitter.

Printing to PDFs in Mutt and the Dream of the 90s

It’s tax season, which means sending documents to accountants proving you weren’t insider trading. I use mutt because I am firmly stuck in the 90s, so printing things to PDFs that look decent enough and are not plain text is a bit of a mission (it seems).

Muttprint

Rather than ‘lpr’, which will look crappy (and I want PDFs anyway), there’s muttprint, which sounds promising, if we ignore the Perl. Rather than fight that, I added the Homebrew formula to @mrtazz’s repo on GitHub. Simply “brew tap mrtazz/misc” and “brew install mrtazz/misc/muttprint” to build and install muttprint.

Also required is the tiny package of ghostscript, which again, I install with ‘brew install ghostscript’.

And some ugly Perl needs fixing to prevent:

[laptop:~]% muttprint
Can't call method "convert" on an undefined value at /usr/local/bin/muttprint line 1915.

Just do a “perl -MCPAN -e 'install Text::Iconv'”

LaTeX

Wrong latex… LaTeX is some hateful document typesetting system from the 1800s, used by academics and… Unix types. Anyway, as muttprint converts to it in the middle, you need to install it, lest it bomb out with:

Muttprint Version 0.72d -- Error
======================================================================

Line 625: Latex didn't work. There's no DVI file. If you write a
bugreport, please include a mail where printing fails.

I cheated and installed the immense MacTeX package, which is a mere 4 gigs… sigh.

mutt

Now we’re getting somewhere. My test of “muttprint -p - <saved_email | ps2pdf - $HOME/muttprint.pdf” was actually making a PDF!

So rather than just clobbering that file, let’s keep adding -number to it, OSX style, with a script.

#!/bin/sh

# Put the TeX Live binaries on PATH so muttprint can find latex.
PATH=$PATH:/usr/local/texlive/2013/bin/x86_64-darwin/

# Pick the first free muttprint[-N].pdf name on the Desktop, OSX style.
if ! [ -e "$HOME/Desktop/muttprint.pdf" ]
then
    FILE="$HOME/Desktop/muttprint.pdf"
else
    count=1

    while [ -e "$HOME/Desktop/muttprint-${count}.pdf" ]
    do
        count=$((count + 1))
    done
    FILE="$HOME/Desktop/muttprint-${count}.pdf"
fi

# Read the mail on stdin, pretty-print it, and convert to PDF.
muttprint -p - | ps2pdf - "${FILE}"

And then set it in mutt:

set print_command="prettymuttprint.sh"
set print_split

And now when I print I get PDFs on my desktop…
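To sanity-check the -number naming without actually printing any mail, you can run the same dance against a throwaway directory (the paths here are invented for the demo):

```shell
#!/bin/sh
# Demo of the non-clobbering naming: same logic as the print script,
# pointed at a scratch directory instead of the Desktop.
dir=/tmp/muttprint-demo
rm -rf "$dir" && mkdir -p "$dir"

next_name() {
    if ! [ -e "$dir/muttprint.pdf" ]
    then
        echo "$dir/muttprint.pdf"
    else
        count=1
        while [ -e "$dir/muttprint-${count}.pdf" ]
        do
            count=$((count + 1))
        done
        echo "$dir/muttprint-${count}.pdf"
    fi
}

# Three "prints" in a row should produce three distinct files
touch "$(next_name)"
touch "$(next_name)"
touch "$(next_name)"
ls "$dir"
```

You should end up with muttprint.pdf, muttprint-1.pdf and muttprint-2.pdf, none of them clobbered.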

Getting rid of that damn penguin

Configure muttprint to not add a tux:

[laptop:~]% cat ~/.muttprintrc
PENGUIN="off"

GPG and Openssl and Curl and OSX

Those playing along at home may remember pain with GPG. Well, that appears to have gotten more annoying.

What?

Libcurl, gpg2 and openssl… Or so I assumed.

[laptop:~]% gpg2 --verbose --keyserver-options=debug,verbose --search foo
gpg: searching for "foo" from hkps server hkps.pool.sks-keyservers.net
gpgkeys: curl version = libcurl/7.35.0 SecureTransport zlib/1.2.5
gpgkeys: search type is 0, and key is "foo"
* Hostname was NOT found in DNS cache
*   Trying 209.135.211.141...
* Connected to hkps.pool.sks-keyservers.net (209.135.211.141) port 443 (#0)
* SSL certificate problem: Invalid certificate chain
* Closing connection 0
gpgkeys: HTTP search error 60: SSL certificate problem: Invalid certificate chain
gpg: key "foo" not found on keyserver
gpg: keyserver internal error
gpg: keyserver search failed: Keyserver error

But… that cert is right?

[laptop:~]% openssl s_client -CAfile ~/.gnupg/sks-keyservers.netCA.pem -verify 6 \
            -connect hkps.pool.sks-keyservers.net:443 -servername hkps.pool.sks-keyservers.net
verify depth is 6
CONNECTED(00000003)
depth=1 C = NO, ST = Oslo, O = sks-keyservers.net CA, CN = sks-keyservers.net CA
verify return:1
depth=0 C = NO, ST = Oslo, O = keys2.kfwebs.net, CN = keys2.kfwebs.net
verify return:1
---
Certificate chain
 0 s:/C=NO/ST=Oslo/O=keys2.kfwebs.net/CN=keys2.kfwebs.net
   i:/C=NO/ST=Oslo/O=sks-keyservers.net CA/CN=sks-keyservers.net CA
 1 s:/C=NO/ST=Oslo/O=sks-keyservers.net CA/CN=sks-keyservers.net CA
   i:/C=NO/ST=Oslo/O=sks-keyservers.net CA/CN=sks-keyservers.net CA
---
[snip...]

So wait, openssl says it’s fine, but when I use my homebrew gpg2, which pulls in libcurl, it should support SNI, right?

[laptop:~]% otool -L /usr/local/Cellar/gnupg2/2.0.22/libexec/gpg2keys_hkp
/usr/local/Cellar/gnupg2/2.0.22/libexec/gpg2keys_hkp:
    /usr/lib/libresolv.9.dylib (compatibility version 1.0.0, current version 1.0.0)
    /usr/local/lib/libgpg-error.0.dylib (compatibility version 11.0.0, current version 11.0.0)
    /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
    /usr/local/opt/curl/lib/libcurl.4.dylib (compatibility version 8.0.0, current version 8.0.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)

So that’s using the libcurl I pulled in via homebrew, which seems sane, right…

[laptop:~]% otool -L /usr/local/opt/curl/lib/libcurl.4.dylib
/usr/local/opt/curl/lib/libcurl.4.dylib:
    /usr/local/opt/curl/lib/libcurl.4.dylib (compatibility version 8.0.0, current version 8.0.0)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 855.11.0)
    /System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 55471.0.0)
    /System/Library/Frameworks/LDAP.framework/Versions/A/LDAP (compatibility version 1.0.0, current version 2.4.0)
    /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.5)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1197.1.1)

Umm, what? It’s not using openssl, it’s using Security.framework! A quick ‘brew options curl’:

[laptop:~]% brew options curl
--with-ares
    Build with C-Ares async DNS support
--with-gssapi
    Build with GSSAPI/Kerberos authentication support.
--with-libmetalink
    Build with libmetalink support
--with-openssl
    Build with OpenSSL instead of Secure Transport
--with-ssh
    Build with scp and sftp support

Oh, so I can build it with openssl support and not Security.framework…

If I import the sks-keyservers.netCA.pem cert into Keychain.app and try again:

[laptop:~]% gpg2 --verbose --keyserver-options=debug,verbose --search foo
gpg: searching for "foo" from hkps server hkps.pool.sks-keyservers.net
gpgkeys: curl version = libcurl/7.35.0 SecureTransport zlib/1.2.5
gpgkeys: search type is 0, and key is "foo"
* Hostname was NOT found in DNS cache
*   Trying 208.89.139.251...
* Connected to hkps.pool.sks-keyservers.net (208.89.139.251) port 443 (#0)
* TLS 1.2 connection using TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
* Server certificate: sks.mrball.net
* Server certificate: sks-keyservers.net CA
> GET /pks/lookup?op=index&options=mr&search=foo HTTP/1.1
Host: hkps.pool.sks-keyservers.net

Oh look, it works. I thank @mrtazz for pair programming with me. (This is mostly a brain dump, but if you can’t gpg, this may help!)

Update!

Yeah, there’s STILL MORE BROKEN! It turns out the issue wasn’t quite that simple. Gnupg was using gpg2keys_curl, which was using libcurl. The libcurl it was using was the system libcurl, which uses Security.framework!

[laptop:~]% otool -L /usr/local/Cellar/gnupg2/2.0.22_1/libexec/* | grep -Ei 'curl|ssl'
/usr/local/Cellar/gnupg2/2.0.22_1/libexec/gpg2keys_curl:
    /usr/lib/libcurl.4.dylib (compatibility version 7.0.0, current version 8.0.0)
    /usr/lib/libcurl.4.dylib (compatibility version 7.0.0, current version 8.0.0)

So I had to build a homebrew libcurl with openssl (which I already had) and then make gpg use that libcurl, rather than system libcurl.

[laptop:Formula]% git diff
diff --git i/Library/Formula/gnupg2.rb w/Library/Formula/gnupg2.rb
index 3dc14e4..4b7c4f1 100644
--- i/Library/Formula/gnupg2.rb
+++ w/Library/Formula/gnupg2.rb
@@ -51,6 +51,7 @@ class Gnupg2 < Formula
     if build.with? 'readline'
       args << "--with-readline=#{Formula["readline"].opt_prefix}"
     end
+    args << "--with-libcurl=#{Formula["curl"].opt_prefix}"

     system "./configure", *args
     system "make"

Then rebuild gnupg2 and it will use the libcurl from homebrew, which in turn uses openssl from homebrew, which you can then point at your own certs!

Remind me again why no one uses gnupg?