Form letter response for engineering recruiters

Like every other breathing engineer in the Bay Area, I get a lot of cold emails from random recruiters. While I haven’t yet replied to one of those messages with the below, I sure am tempted at times. Feel free to modify this template for your own purposes!

Corollary: if you are a recruiter, and you find that your reach-out emails would tick some of these checkboxes, maybe it’s time to reconsider your script.

Hello,

Thank you for your note. I am not interested for the following reason(s):

[ ] You do not include the specific name of the company you represent, which a) leads me to believe you are just canvassing and b) limits my ability to filter based on companies I don’t care for

[ ] You do not call out the specific skills I have that make me a “perfect fit”

[ ] You refer to skills or interests a) substantially in my past and/or b) clearly not part of my current role or expertise

[ ] If your CEO/CTO/VP Eng/Director of ____________ thought my background sounded good, he/she should have emailed me personally

Additionally:

[ ] I will not send you any leads because I am already personally recruiting all the good people I know

[ ] Your request that we start with a “quick call” shows a clear disregard for the work style of engineers

[ ] You appear to have misspelled my name despite having my email address to use as a starting point


Lasercut Dual Vertical Cinema Display Stand

[Photo: front view]
[Photo: back view]

Last week was HackWeek at Square, and as usual I took the opportunity to stretch my maker legs. This time I decided to tackle a practical problem that’s on my horizon – or perhaps more accurately, in my peripheral vision.

My current workstation has two Apple Thunderbolt Displays. This is, as you might imagine, awesome. There’s tons of desktop real estate to lay out browsers and code and terminals. However, this setup does have a dark side: it’s 50″ wide! Not only does this require me to pan my head to see the far edges of the most distant windows, but it’s actually wider than the desk space allotted to me. As soon as someone moves into the vacant spot next to me, one of these bad boys will have to go.

Faced with the prospect of giving up one of my beloved monitors, I hatched a plan to rotate them both from landscape to portrait. The traditional way to achieve this sort of thing is through the magic of the VESA mount. A lot of monitors have this built in, but not Apple displays. (After all, who wants to see a bunch of extra screw holes on the back of their perfectly sculpted monitor?) Apple sells an add-on VESA mounting kit, which is apparently an enormous pain in the butt to install. (You know it’s a bad sign when a how-to includes fast motion.) Luckily, the Square IT team had a pair of the older Cinema Displays with the VESA mount already attached, so I could experiment.

Once your monitors are ready to be mounted, you need a stand. There are commercially available stands of all sorts, but that’s no fun, so instead I designed a custom stand using my tool of choice, OpenSCAD. I wanted something cheap to fabricate and easy to assemble that would still turn out handsome and sturdy. This led me to choose laser-cut birch plywood and standard T-slot style screw-together construction.

By far the hardest part of this project was getting all the geometry correct. I wanted the monitors to come together very precisely, so I had to lay out the mounting points by carefully computing a bunch of different angles. There were more than a few head-scratcher moments spent hunched over a notepad. (Lesson learned: start your sketches much bigger than you think you’ll need if you’re planning to add a lot of little annotations for angles and segments.)

This version seems to work pretty well, though I’m still planning to do a nice sanding, staining, and clear-coating pass to make it extra presentable. I also set my personal best for time from conception to realized prototype – I started sketching on Tuesday afternoon and had the stand built by noon on Friday.

As usual, all the project files are available on my GitHub page. If you’re thinking of making one, let me know!

Custom standalone programming fixture

I spent my evenings over Memorial Day weekend working on a customized fixture designed to make programming and testing the electronics of our Question Block Lamp really easy. As part of our plan to bring the lamps back into production, we decided that a custom programming fixture would go a long way towards helping our outsourcing partner deliver exactly what we want, quickly and with a minimum of miscommunication. (Email exchanges with China are actually pretty expensive – a simple back-and-forth can take days, thanks to the time difference.)

Once we decided to build this device, I jumped in and started designing. I had a few clear design criteria:

  • It should be durable. It will be used to program thousands of units. (Hopefully, tens of thousands!)
  • One part or another will ultimately break down, so it has to be modular and easy to repair, particularly by someone other than me.
  • It should be incredibly intuitive to use. After all, it’s not even clear that the people using it will speak English!

Here’s what I ended up building:

[Photo: the finished programmer]

The interface

The interface is very simple. The strip of LEDs down the right-hand side indicates the state of the programmer. The top one is just power, so you know it’s on. The next is the “contact” indicator, which lights up when a target board is properly connected. After that is “working”, for when the programmer is programming, and then “success” and “failure”. I etched the descriptions next to the LEDs, but used different colors so that it’s clear even if you don’t read the text. You start the programmer by pressing the only button on the faceplate, marked “Go”.
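
For illustration, the front panel boils down to a tiny state machine. Here’s a sketch of that mapping in Ruby – just a model of the behavior for clarity, since the real device runs Arduino code and these state names are my own:

    # Illustrative sketch of the front-panel behavior, not the actual
    # Arduino firmware; the state names here are my own.
    LEDS_FOR_STATE = {
      idle:    [:power],                      # powered up, nothing connected
      ready:   [:power, :contact],            # target board seated properly
      working: [:power, :contact, :working],  # "Go" pressed, flashing underway
      success: [:power, :contact, :success],  # programming verified
      failure: [:power, :contact, :failure],  # something went wrong
    }

    def lit_leds(state)
      LEDS_FOR_STATE.fetch(state)
    end

    p lit_leds(:working)  # => [:power, :contact, :working]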

The enclosure

The exterior of the device is laser-cut 3/16″ acrylic. We just happened to have clear lying around, but I think that a more opaque color might work out better. I used standard T-slot style construction to put it together, with the exception that I used long screws from the top to the bottom to make a sort of sandwich. All the fasteners are #4-40, which I’ve found to be a great size for these small-scale enclosures.

A big part of the combined electronic and enclosure design process was rendering the whole thing in OpenSCAD, my 3D design tool of choice. I personally find it really crucial to be able to visualize how all the parts will go together before I settle on details of a design. Here’s the final render I ended up with:

[Render: the final enclosure design]

The hardware

On the inside, the components are simple by design. The brain of the whole operation is just a plain old Arduino Uno. I chose to use an Arduino as the driver for the programmer for a few reasons: I’ve used the ArduinoISP sketch a lot in the past very successfully; there was already existing code to convert an Arduino into a standalone programmer; the USB port and barrel jack were the only connectors I needed; and Arduinos are completely ubiquitous and easily replaced.

In addition to the Arduino, there are a few custom components. First, there’s a shield I designed that mates with the Arduino and provides all the connection adapters to the two other peripherals. It also has a handful of resistors for lighting the LEDs and a capacitor to suppress the auto-reset that occurs when a computer connects to the USB port – a must-have when operating as an ArduinoISP. Next is the interface board, which is basically just an LED array and the Go button – not much to see there.

The most interesting component – and the most challenging to fabricate – is the pogo pin array. I took inspiration from a number of other pogo pin projects I’ve seen around the web. I made a point of breaking this out into a separate module so that I could isolate “fragile” electronics like the Arduino from any sort of mechanical stress. I found that setting the pin height was a little fidgety, and I’m going to experiment with alternative techniques for that the next time around.

The code

I based my programmer code very heavily on the existing AdaLoader project. However, there were a number of tricky issues I had to chase down to get it to work in my application. The original version was designed for flashing bootloaders onto ATMega168 chips, and there are some assumptions baked in to match that choice of target. Since I am flashing ATTiny44 chips instead, I needed to figure out the mismatches and update accordingly.

The first step was spending some time with the ATTiny24/44/84 datasheet to gather details like the chip signature. Next, I had to capture the hex file of my compiled code. The Arduino IDE creates this file already, but it puts it in a random location every time. Finding it is pretty easy if you turn on the debug logging feature. (Basically, just set upload.verbose=true in your preferences.txt file.) After that, the path to the hex file is displayed in the window at the bottom every time you build/verify.

With the hex file in hand, I ran into my first real issue. For some reason, even though my program was actually laid out in contiguous memory, the hex dump produced by the Arduino IDE broke it up a bit towards the end. The AdaLoader code didn’t like this – it expected every chunk of data in the hex dump to be full and got confused when it wasn’t. I ended up writing a short Ruby script to transform the hex file into a clean, contiguous dump. I couldn’t quite figure out what I was doing wrong when trying to calculate the checksums at the end of each line, but I wasn’t really worried about data integrity between my laptop and the Arduino, so I ended up disabling that feature.
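
The script itself isn’t included in the post, but the transformation is simple enough to sketch. Here’s a minimal Ruby reconstruction of the idea (my approximation, not the original): read every data record into a flat memory map, then re-emit fixed-width records. For what it’s worth, the checksum at the end of each Intel HEX line is just the two’s complement of the sum of the other bytes on the line, so it’s cheap to regenerate:

    #!/usr/bin/env ruby
    # Minimal sketch (not the original script): normalize an Intel HEX
    # file into contiguous, fixed-width data records.
    RECORD_WIDTH = 16  # data bytes per output line

    memory = {}  # address => byte

    ARGF.each_line do |line|
      next unless line.start_with?(':')
      bytes = [line.strip[1..-1]].pack('H*').bytes
      count, addr_hi, addr_lo, type = bytes[0, 4]
      # Ignore EOF/extended-address records; a tiny AVR doesn't need them.
      next unless type == 0
      addr = (addr_hi << 8) | addr_lo
      bytes[4, count].each_with_index { |b, i| memory[addr + i] = b }
    end

    # Re-emit contiguous records, padding any gaps with 0xFF (erased flash).
    first, last = memory.keys.minmax
    (first..last).step(RECORD_WIDTH) do |base|
      data = (0...RECORD_WIDTH).map { |i| memory.fetch(base + i, 0xFF) }
      record = [data.size, base >> 8, base & 0xFF, 0x00] + data
      checksum = (-record.sum) & 0xFF  # two's complement of the byte sum
      puts ':' + (record + [checksum]).map { |b| format('%02X', b) }.join
    end
    puts ':00000001FF'  # standard end-of-file record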

At this point I ran into my second issue – really, a pair of issues. What I was seeing was that the flashing would complete, but when the programmer read the bytes back to verify them, it failed, saying that the values were wrong. Uh oh. This was a real head scratcher for a while, so I spent some more time reading the datasheet. The first issue I found was that while the ATMega168 has a 128-byte flash page size, the ATTiny44’s is only 64. That was concerning, so I changed the settings, but the problem persisted. After reading the code very carefully for a while, I managed to debug it to the point where it looked like it was flashing two different chunks of data to the same memory address. In fact, there was an odd pattern of alternating correct addresses and incorrect addresses aligned on 128-byte boundaries. (0x0 -> 0x0, 0x40 -> 0x0, 0x80 -> 0x80, 0xC0 -> 0x80…) This turned out to be because the target address was being masked with a pattern that aligned on 128-byte boundaries – clearly a relic of the code’s prior purpose. I removed the mask altogether and all of a sudden everything started working!
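
To make the failure mode concrete, here’s the arithmetic in miniature – the exact mask constant is my guess, inferred from the 128-byte alignment described above:

    # Hypothetical illustration of the masking bug: clearing the low
    # 7 bits snaps every address to a 128-byte boundary, which is only
    # correct when the flash page size is actually 128 bytes.
    MASK = ~0x7F  # assumed relic of the ATMega168's 128-byte pages

    [0x00, 0x40, 0x80, 0xC0].each do |addr|
      puts format('0x%02X -> 0x%02X', addr, addr & MASK)
    end
    # 0x00 -> 0x00, 0x40 -> 0x00, 0x80 -> 0x80, 0xC0 -> 0x80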

What I have now is a device that can be powered by a simple wall wart and will program an un-flashed board in about 5 seconds, much faster than if we were doing it with the computer attached. Woo hoo!

There are a few next steps for this project before I’ll consider it done. The programming part of the device is important, but so is the testing part. I’d like to figure out how to make the programmer put each newly-programmed board through its paces briefly so that we can identify defective boards early, before they are glued inside finished lamps. Also, there are a handful of places where the tolerances of the laser-cut parts aren’t perfect, and making another copy gives me the opportunity to correct those mistakes. The version that I end up sending to China should be a bit more polished and feature-packed.

How to impress me in a coding interview

At Square, candidates are expected to actually write code during the interview process – on a computer, not on a whiteboard. If you’re a competent software developer, this kind of interview should feel like a gift: your skills are going to be measured more or less directly by doing the very thing you’ve been doing all these years!

So how do you impress me – or any interviewer – in a coding interview? Here are a few things I consider when pairing with a candidate.

It’s about the process

First off, let’s deal with a myth: the interview is not about the answer to my question. Sure, I’d prefer that you get it right, but I’m far more interested in seeing how you work to get there. I want to see how you decompose problems, conceive solutions, and debug issues as they arise. Make sure you take the time to show me how you’re getting to the answer.

It’s about communication

A big part of the development process on a healthy team is communication. Ideally I’d like to leave the interview feeling like you’re someone who asks good questions and is receptive to feedback and external insights. A great way to blow this portion of the interview is to hunch over a notepad and scribble quietly while I wait. If you must do something like this, be prepared to explain your thoughts thoroughly when you’re done.

Don’t go it alone

My interview exercise is hard. I fully expect that you might need a hint at some point. I even enjoy working with you to debug the problems you hit as you work. But if you never ask, you’re not going to come out ahead. Coding on a real team often involves working on something that you don’t understand, but that the person sitting two seats down knows in detail. You should be self-sufficient only insofar as it makes you productive; spinning your wheels trying to go it alone is just indulgence.

You should run the interview

Don’t make me march you through all the steps. Once I’ve explained the exercise and you’ve started coding, it’s best if you drive. If you need help, ask. If you think you’re done, propose more test cases or ask for my thoughts. The best interviews I’ve had are more like working with the candidate than interviewing them. If you can do this in an interview setting, then I can believe you’ll work the same way when you’re on my team.

Use the tools

I am consistently impressed when a candidate is able to drop into a debugger or quickly access documentation in the IDE of their choice. It’s an aspect of professional mastery that will be extremely important to your day-to-day productivity. The converse is also true – if you choose to write C++ in vim and don’t know how to save the file, it’s going to cost you.

No bonus points for doing things the hard way

One of the key attributes I select for in coworkers is pragmatism. Sometimes this means choosing an ugly but expedient solution instead of an elegant but expensive alternative. If you know when to make these tradeoffs, you’re someone I want to work with. A lot of candidates feel pressure to give me “perfect” solutions right away, but I’d rather see a complete, imperfect answer than an incomplete, perfect one. Plus, I love it when we can iterate on the “quick and dirty” solution to build a refined version. This is how software works in the real world, and I love when candidates aren’t too self-conscious to just go about it that way.

Moving on

In a turn that few will have seen coming, this coming Friday will be my last day at Rapleaf. After five years, I’ve decided that it’s time to move on and seek new challenges. To that end, after much consideration, I will be joining Square to help them scale up their analytics efforts. I’m incredibly excited to get to work with a new team on a totally new problem domain, and I believe I have a ton of value to add to the company.

This decision is exquisitely bittersweet, though. It’s difficult to even begin to describe how spectacular my time at Rapleaf has been. I found it telling that when I sat down to write a summary of my experience for my resume, it seemed impossible to capture all that I had learned and done. This is still totally insufficient, but I’ll boil it down to this: I was given the opportunity to make many amazing software systems; I was allowed to grow into a member of the broader engineering and open source community; I learned the meaning of scale; I got to be a real owner of the business’s direction and purpose; and I played a role in building a world-class team.

It is this last point in particular, the team, that puts the tears in my eyes. I have never before had the opportunity to work with so many brilliant, hard-working, interesting, and just generally nice people, and I am humbled and honored to have been counted among their number. My greatest worry – one which I consider not wholly irrational – is that the team I’m leaving behind is in fact of the rarest kind, and I’ll spend the rest of my career hoping to rebuild something as great.

To everyone I have worked with over these last five years, thank you for everything you have taught me. You are what has made – and will continue to make – Rapleaf great.

Let’s do this again some time.

Open Source, Forking, and Tech Bankruptcy

Open source software is a part of most of the things I do day to day. I use a ton of things made by others: Hadoop, Cascading, Apache, Jetty, Ivy, Ant – the list could literally go on for pages. But I also develop and maintain projects of my own that have been released to the public. I contribute to Thrift frequently, and have released Hank, Jack, and other projects as part of my work at Rapleaf.

Working with so much open source software has given me plenty of opportunity to develop a perspective on how companies should engage with open source projects. In this day and age, nobody is going to counsel against using open source, since it’s an enormous productivity booster and it’s everywhere. However, there are different schools of thought about how you should use and contribute to it.

One way of using open source is to just use what’s released, never make any modifications, and never make any contributions. For some projects, this is perfectly fine. For instance, I find it hard to imagine making a contribution to Apache Commons. Everyone will take this approach on some projects, particularly the ones that are mature and useful but not mission critical: they’ll never produce enough pain to merit fixes nor produce enough value to merit enhancements.

However, the above model only works well for projects that are very stable. Other projects you’ll want to use while they are still immature, unstable, and actively developed. To reap the benefits, you might have to roll up your sleeves and fix bugs or add features, as well as deal with the “features” introduced by other developers. This is where things get tricky.

There are two basic ways to deal with this scenario, which I think of as the “external” and “internal” approaches. The “external” approach involves your team becoming a part of the project’s community, contributing actively (or at least actively reporting bugs and requesting features), and doing your best either to hang onto the bleeding edge or to commit to using only public releases. The “internal” approach involves picking an existing revision of the project, forking it into an internal repository, and then carefully selecting which upstream patches to accept into your private fork while mixing in your own custom patches.

Both of these options are imperfect, since either way you’re going to do a lot of work. A lot of companies see this as a simple pain now/pain later tradeoff and then choose accordingly. But I don’t think this is actually the case. What’s not easy to appreciate is that the pain later is usually much, much worse than the pain now.

Why is this the case? It comes down to tech debt. Choosing to create an internal fork of an open-source project is like taking out a massive loan: you get some time right now, but with every upstream patch you let go unmerged, you are multiplying the amount of effort you will ultimately need to get back in sync. And to make matters worse, people have a tendency to apply custom patches to their internal forks to get the features they need up and running quickly. This probably seems like a great idea at the time, but when it’s done carelessly, you can quickly get into a state where your systems depend on a feature that’s never going to make it into the upstream and might actually conflict with what the community decided to do.

When you get into the situation where your fork has diverged so much that you find yourself thinking, “I’ll never be able to switch to upstream,” then you’ve reached a state of tech bankruptcy – the only things you can do are give up and stick with what you have, or commit to an unbelievably expensive restructuring. At this point you cease to have a piece of open-source software: you have no external community, nobody outside to add features, fix bugs, or review your code, and you can lose compatibility with external systems and tools.

Needless to say, the decision to make an internal fork should not be undertaken lightly. Weigh the perceived stability and flexibility benefits very carefully before starting down that road. If you must fork, make sure you understand the costs up front so that you can budget time to keep your fork in sync.

There’s a flip side to this. How often does a piece of internal code that “could be” an open source project go from closed to open? I know from my experience that it’s not easy to make the transition – you end up building in a feature that’s too domain-specific, or you tie it to your internal deploy system. I think that writing a decent piece of software that could be spun out as an open-source project and yet failing to do so is another case of accumulating tech debt. In this case, the bankruptcy state is a project that could have been open but never will be because of the time investment required.

The prescription in this case is easy: open source your project early, perhaps even before it’s “done,” continue to develop it in the open, and whatever you do, use the version you open sourced, not an internal fork.

Custom languages considered harmful

I use and contribute to a lot of open-source software projects, and one thing I see all over the place that drives me absolutely nuts is the prevalence of custom languages. In serialization frameworks, there are interface definition languages. In CAD tools, there are modeling languages. In the NoSQL and big data processing world, you’ll find my personal pet peeve, query languages.

Why are people making these custom languages? Abstractly, I get what they’re thinking: they have an application that would benefit from an expressive, terse, awesome set of keywords and operators that will allow their users to be more productive. And this thinking isn’t fundamentally wrong. The productivity gain is real and definitely worth pursuing, and an intuitive language can go a long way towards making your product extremely accessible.

But here’s where I think these people go off the rails: aside from the very real NIH danger (Making your own language is cool, right? Guys?), there is a huge difference between a custom language – one in which you have to do the whole compiler contortion of lexing and grammars and whatnot – and a domain-specific language, which is more typically a set of functions and syntax that work from within another existing language to provide enhanced functionality. These two things seem a lot alike at first glance, but in my opinion, they couldn’t be more different. Why? Because when you decide to write a custom language – however simple it might seem to you right now – you are making the incredibly short-sighted statement that you know how to design a language better than all the people who have tackled this gargantuan problem before you. I don’t doubt that you know your domain better than anyone else, and that’s definitely what you need to get started, but what you probably don’t know is what you’re going to need later that will totally change your language. Or how best to implement recursion, scoped variables, memory management, or exception handling. The list could go on forever.

Let’s look at a concrete example. Lots of people in the 3D printing/maker community use OpenSCAD, an open-source 3D solid modeler with a decidedly programmer-tailored interface. Makers love it, because it’s powerful, fairly simple, and gives you really repeatable results. It uses a totally custom scripting language that is basically C-like: nesting, scope, function calls. But in combing through some of the syntax and reading the mailing list, I get the impression that a nontrivial amount of the developers’ time is spent reinventing features like variable assignment and for loops that could instead be spent actually developing the application itself.

To be totally clear, I love OpenSCAD, and I’m going to keep using it. That doesn’t mean I don’t wish the developers had decided to write a DSL instead. If they had embedded their functions in a nice Ruby DSL, then instead of having to figure out the quixotic behavior of the native dynamic-length lists, I could leverage all of the existing resources – community-vetted documentation, libraries, debugging and editing tools – and it would just work. As a user, I could focus on learning the portion of the new syntax that applies to the specific domain, rather than scrambling to figure out how to do the things I already know how to do in other languages.
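
To make that concrete, here’s a toy sketch of the idea – everything below is invented for illustration, not a real library. A few lines of plain Ruby can act like a modeling DSL by emitting OpenSCAD source, so variables, loops, and lists come for free from the host language:

    # Toy example, invented for illustration: a tiny Ruby DSL that
    # emits OpenSCAD source, so the host language supplies variables,
    # loops, and lists instead of a custom interpreter.
    class ScadEmitter
      def initialize
        @out = []
      end

      def cube(size)
        @out << "cube(#{size.inspect}, center=true);"
      end

      def cylinder(r, h)
        @out << "cylinder(r=#{r}, h=#{h}, center=true);"
      end

      def translate(v)
        @out << "translate(#{v.inspect}) {"
        yield
        @out << "}"
      end

      def difference
        @out << "difference() {"
        yield
        @out << "}"
      end

      def to_scad
        @out.join("\n")
      end
    end

    # A plate with a row of mounting holes: ordinary Ruby iteration,
    # no custom for-loop semantics required.
    s = ScadEmitter.new
    s.difference do
      s.cube([60, 40, 5])
      [-20, 0, 20].each do |x|
        s.translate([x, 0, 0]) { s.cylinder(2, 10) }
      end
    end
    puts s.to_scad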

It’s clear that some folks out there get this right and some get it wrong. Thrift and Protobuf both have their own IDLs, which work well enough, though the code involved in analyzing IDL files is decidedly non-trivial. OpenSCAD and others like RapCAD have plowed tons of time into their own custom languages. Map/Reduce productivity tool Pig is a nightmare: since it’s not Turing complete, whenever you want to do something not imagined by the developers, you have to make the significant development context switch into regular Java to write user-defined functions. Conversely, Cascading is a wonderfully usable and extensible pure-Java DSL for composing complex Map/Reduce workflows with arbitrary UDFs that don’t require you to context switch. (Even the splendid Clojure wrapper for Cascading, Cascalog, has the sense to use the existing Clojure language to give you an honest-to-god Turing complete language to work in.) Cassandra has CQL (which I admittedly haven’t used), while HBase has stuck with a solid functional API and a JRuby REPL in which you can use it.

So next time you’re thinking about dusting off your context-free grammar skills and busting out your favorite compiler compiler, just don’t do it. There’s an easier way.