We knew that the basic Heroku scheduler was sufficient for just kicking a process off every day at a specified time. But the Heroku scheduler spins up a temporary dyno, and it isn't intended for heavy lifting. The scheduler documentation states: "Anything that takes longer than a couple of minutes to complete should use a worker dyno to run." Also, inexplicably, according to project architect Kyle Bowerman, “Any fish that doesn’t swim faster than the current will get eaten by the bear.”
We needed to introduce a simple work queue to the mix to help the scheduler communicate cross-dyno with a 'worker'. This keeps the scheduled service extremely lightweight: when the scheduler fires up the job, it basically just throws a message on the queue and dies again, letting the dyno (and therefore the $$ meter) shut back down.
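The scheduler side really can be that small. Here's a minimal sketch of what "throw a message on the queue and die" looks like, assuming the amqplib node.js client and a queue name of our choosing; the message shape and the `CLOUDAMQP_URL` env var are illustrative, not the project's actual code.

```javascript
// Build the job message the worker will eventually consume.
// The envelope shape (jobType/payload/enqueuedAt) is an assumption.
function buildJobMessage(jobType, payload) {
  return JSON.stringify({ jobType, payload, enqueuedAt: new Date().toISOString() });
}

// Connect, publish one message, disconnect -- then the one-off dyno can exit.
async function enqueueAndExit(queueName, message) {
  const amqp = require('amqplib'); // lazy-require keeps buildJobMessage usable without a broker
  const conn = await amqp.connect(process.env.CLOUDAMQP_URL || 'amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(queueName, { durable: true }); // queue survives broker restarts
  ch.sendToQueue(queueName, Buffer.from(message), { persistent: true });
  await ch.close();
  await conn.close(); // nothing left running; the dyno (and the $$ meter) shuts down
}

module.exports = { buildJobMessage, enqueueAndExit };
```

The durable/persistent flags matter here: if the broker restarts between the scheduler dying and the worker waking up, the job isn't lost.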
Our solution uses RabbitMQ. There are at least a couple of Heroku add-ons for it, it’s easy to spin up a local dev server, and there's a working node.js client out there, so this was a good place to start. Swapping in another message queue would, of course, be a fairly easy process. Kue seems like an interesting option to explore, especially if you already have Redis in your stack.
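For completeness, here's a sketch of the other end: a worker dyno consuming jobs from the queue. Again this assumes the amqplib client; the job types, handler names, and dispatch-table pattern are made up for illustration.

```javascript
// jobType -> handler function; entries here are illustrative only
const handlers = {
  'nightly-batch': (payload) => `processed ${payload.count || 0} records`,
};

// Parse a raw queue message and route it to the matching handler
function dispatch(rawMessage) {
  const { jobType, payload } = JSON.parse(rawMessage);
  const handler = handlers[jobType];
  if (!handler) throw new Error(`no handler for job type: ${jobType}`);
  return handler(payload);
}

// Long-running consumer loop for the worker dyno
async function startWorker(queueName) {
  const amqp = require('amqplib'); // lazy-require so dispatch stays testable without a broker
  const conn = await amqp.connect(process.env.CLOUDAMQP_URL || 'amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(queueName, { durable: true });
  ch.prefetch(1); // one heavy job at a time
  await ch.consume(queueName, (msg) => {
    try {
      dispatch(msg.content.toString());
      ch.ack(msg); // acknowledge only after the job succeeds
    } catch (err) {
      ch.nack(msg, false, false); // reject bad messages without requeueing
    }
  });
}

module.exports = { dispatch, startWorker };
```

Ack-after-success is the key design choice: if the worker dyno crashes mid-job, RabbitMQ redelivers the unacknowledged message instead of silently losing it.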
A Little About the Challenges

We ran an initial challenge, Simple Batch Processor with Scheduler, Queue and node.js (wear test), to get the basic functionality wired up. We got three solid submissions, all of which were fine by themselves. But we noticed there were some really interesting bits in each submission, so we launched a "frankenstein the code" challenge. This is a new style of challenge we're testing out: all valid submitters are invited to a short, private, follow-up challenge. The goal is to merge the code, taking the best of each submission to make a super-version.
After that, we ran one more challenge, Work Queue enhancements for Wear Test including generic API CRUD event handler, to make a few key improvements: allowing the web dyno to add jobs to the work queue, adding a generic "api event" job to the queue upon any successful CRUD operation (for trigger-like functionality), and allowing a single Heroku worker dyno to “listen” to all the queues. A note on that last point: down the road we might want to scale the workers at a more granular level, but for now we expect the load to be light enough that a single worker can cost-effectively handle it all.
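Those last two enhancements can be sketched roughly as follows. The "api event" envelope and the queue-name list are assumptions for illustration, not the actual challenge code; the multi-queue subscription just reuses one amqplib channel.

```javascript
// Generic "api event" job published after any successful CRUD operation.
// The envelope fields are an assumed shape, not the project's schema.
function buildApiEvent(resource, operation, record) {
  return JSON.stringify({
    jobType: 'api-event',
    resource,                // e.g. 'users' (illustrative)
    operation,               // 'create' | 'update' | 'delete'
    recordId: record.id,
    occurredAt: new Date().toISOString(),
  });
}

// One worker dyno subscribing to every queue it knows about
async function listenToAll(queueNames, onMessage) {
  const amqp = require('amqplib'); // lazy-require keeps buildApiEvent testable without a broker
  const conn = await amqp.connect(process.env.CLOUDAMQP_URL || 'amqp://localhost');
  const ch = await conn.createChannel();
  for (const q of queueNames) {
    await ch.assertQueue(q, { durable: true });
    await ch.consume(q, (msg) => {
      onMessage(q, msg.content.toString()); // tag each message with its source queue
      ch.ack(msg);
    });
  }
}

module.exports = { buildApiEvent, listenToAll };
```

Because one channel can hold many consumers, "listen to all the queues" costs a single connection; splitting queues across dedicated workers later is just a matter of passing each worker a shorter `queueNames` list.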
Fun fact: there was one member in particular who competed in the first challenge but didn't place high enough to win a prize. His valid submission still earned him a spot in the frankenstein challenge… and he went on to CRUSH both of the following challenges! Nice work aproxacs!
Want to see the code? We asked the project architect, Kyle Bowerman, how to do that, and here's his reply: "Register for the next Wear Test API challenge and you'll get access!"