This is the story of a really great project I did while working for Salesforce.org. I have done a significant amount of remote pair programming over the last ten years, but this project was extraordinary in a number of ways. For one thing, it was a really complicated problem that demanded a technically advanced solution. For another thing, it took almost an entire year to finish-- one hour per week.
The Problem

I will try to give you the background in a way that won't make your eyes glaze over: in order to work with data in a Salesforce instance via the API, you address "Objects" and "Fields". (These are actually tables in a database that may be addressed by a poor and crippled version of SQL.) For example, here is a description of the Account object whose first field is AccountNumber.
If you are a developer on the Salesforce platform, you can add your own Field to the Account object, but you have to append '__c' to it, like "MyField__c". You can also create your own Object, with the similar convention "MyObject__c". But when you get really serious about developing on Salesforce, you also have to create a "namespace" as a unique identifier for your "package", which would then be "foo__MyObject__c" and thus also "foo__MyField__c". (Sorry about that, I'm done now, I will spare you any more tedious detail.)
My tests had to set up and tear down data via the Salesforce API. There is a low-level Ruby client for the Salesforce API, but it demands the literal names of the Objects and Fields. My problem was that at runtime, the tests had no way to know the state of namespaces of the custom Objects and Fields in the target environment. Any of these conditions could be true, or false:
- Custom objects with no namespace
- Custom fields with no namespace
- Custom objects and fields with arbitrary namespace
- Custom objects and fields with multiple namespaces
- Custom fields with multiple arbitrary namespaces on objects with arbitrary namespaces or no namespaces
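To make the ambiguity concrete, here is a minimal sketch (not the SFDO-API implementation) of what resolving a logical name to its actual API name looks like. The idea is that a Salesforce "describe" call returns the real object names in the target environment, and you match your logical name against them; all the names below are hypothetical.

```ruby
# Illustrative sketch: given the object names a "describe" call might return,
# find every actual API name for a logical custom object, whatever its
# namespace state (none, or any arbitrary namespace prefix).
def resolve_api_name(logical_name, described_names)
  # Matches "MyObject__c" as well as "foo__MyObject__c", "bar__MyObject__c", ...
  pattern = /\A(?:\w+__)?#{Regexp.escape(logical_name)}__c\z/
  described_names.grep(pattern)
end

names = %w[Account MyObject__c foo__MyObject__c bar__OtherThing__c]
resolve_api_name('MyObject', names)
# => ["MyObject__c", "foo__MyObject__c"]
```

Note that the same logical object can legitimately match more than once when multiple packages are installed, which is exactly why the tests could not hard-code names.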
A Little Help From My Friends

I do not consider myself a particularly good programmer. As a QA person, I have written only a fraction of the amount of code that a line programmer has. I have, however, read (and debugged!) an enormous amount of code over the last twenty years, such that I can tell a good solution from a poor solution, a good programming idea from a bad programming idea.
All of my ideas to solve my test data problem were bad. I needed help from someone who was a better Ruby programmer than me, and who also knew Salesforce better than me.
My colleagues introduced me to Kevin Poorman. I explained what I was trying to do, and Kevin graciously agreed to help me-- for one hour per week.
So every Thursday afternoon Kevin and I would join a teleconference session. It took almost a year to solve my test data API problem completely. This was my first experience with metaprogramming, and it produced my first Ruby gem, SFDO-API.
One Bite at a Time

...is of course the punch line to the old joke "How do you eat an elephant?" but it is also a good approach to tackling big projects in short sessions. The big problem was finding out at run time what namespaces, if any, were on the particular fields and objects we needed to work with. Then we needed to be able to
- create instances of Objects with Fields with those namespaces
- update instances of Objects with Fields with those namespaces
- query the target environment using those namespaces
- delete instances of Objects with the proper namespaces
The good way to handle this sort of situation is to use metaprogramming, particularly Ruby's "method_missing" feature. Ultimately we had methods for delete_all_MyObject, create_MyObject, and a method "select_api" that would parse the namespaces inside the select() query. (It turned out that we got the 'update_api' method for free without needing method_missing to generate it.) If you want to know the details about how SFDO-API works in practice, the README is pretty good.
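As a minimal sketch of the pattern described above -- not the actual SFDO-API code, whose details live in its README -- here is how method_missing can synthesize `create_*` and `delete_all_*` methods that qualify names with a namespace discovered at run time. The class, resolver, and return strings here are hypothetical stand-ins for real API calls.

```ruby
# Sketch of namespace-aware method generation via method_missing.
class ApiShim
  def initialize(namespace)
    @ns = namespace # e.g. "foo__", or "" when no namespace is installed
  end

  # Turn a logical name into the real API name at call time.
  def qualify(name)
    "#{@ns}#{name}__c"
  end

  def method_missing(meth, *args, &block)
    case meth.to_s
    when /\Acreate_(\w+)\z/
      "creating #{qualify(Regexp.last_match(1))}"    # a real client call would go here
    when /\Adelete_all_(\w+)\z/
      "deleting all #{qualify(Regexp.last_match(1))}"
    else
      super
    end
  end

  # Keep respond_to? honest for the methods we synthesize.
  def respond_to_missing?(meth, include_private = false)
    meth.to_s =~ /\A(create|delete_all)_\w+\z/ || super
  end
end

api = ApiShim.new('foo__')
api.create_MyObject      # => "creating foo__MyObject__c"
api.delete_all_MyObject  # => "deleting all foo__MyObject__c"
```

The payoff of this approach is that test code can call `create_MyObject` everywhere and never care which namespace, if any, the target environment actually has.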
We created these open-ended methods inside method_missing one by one, Objects first and then Fields once Objects were working, one hour per week at a time.
The Weekly Routine

Besides solving my test data problem, I also wanted to learn how metaprogramming in Ruby actually works, because it was something I had read about but had never done in practice. I find that it is almost always the case that on a remote pair programming session, one person is teaching and one person is learning. I always have the learner do the typing, while the teacher does the navigating. Getting that stuff under your fingers as you learn it is critically important.
Kevin and I quickly fell into a routine. We never knew where we would end up at the end of any given session, but every week before the session started I would review the current state of our code and isolate the next logical problem to tackle. I tried really hard to isolate bits of the next problem small enough to be solved in an hour. I was usually successful. As I said, I have read an extraordinary amount of code for a QA person, so I could visualize the next logical step to tackle, even if I had no idea how we would tackle it. So every Thursday when Kevin joined our meeting, we would immediately dive into the next conceptual challenge.
And after the hour was up, I would spend some time tidying the code. Sometimes I would add comments to remind future us of where we'd stopped in the previous session. As I got better and began to master some of these new concepts, I would sometimes fix bugs we had left behind right away. Then I would put away the code until the next week's session. I mentioned from time to time that "Kevin wrote the good parts, the rest is mine," but in fact I got a whole lot better as time went on.
The Big Picture

Our first commit was July 2016, and my last bug fix was May 2017, so a little less than a year in total. Maybe forty or so Thursday afternoon sessions. About forty hours of Kevin's time, and maybe a little more than double that of my time, because I spent time each week setting up the session problems beforehand and tidying up afterward.
Sometimes I wonder whether it would have been better to knock out the whole project in a single push over a couple of weeks instead of spreading it over a year, but had we done it that way, I am sure I would never have come to understand the code as deeply as I do, having lived with it, studied it, and watched it grow week after week.
Since I fixed the last bug in May 2017, this code has been called thousands of times to supply test data from at least four repositories, and as far as I can tell it has not failed. I am really proud of this.