Thursday, April 11, 2013

CloudSpokes April 2013 Roadmap

April is well underway, and not only is it income tax time here in the US, but our April sprint is also in full swing. We've got a number of new features planned for this month and we've launched challenges to build out this functionality for both members and sponsors.

Before we talk about what's coming up in April, let's take a step back and see what we actually accomplished in March. Besides everyday fixes and maintenance, we had a number of large projects in the sprint. We did pretty well and delivered on most of the backlog. Some items were fully completed while others need a little more work.

  • New WordPress blog - we actually decided not to move to WordPress due to issues with commenting, SEO, and subscriptions. Instead we reskinned our current Blogger site and plan to make some more aesthetic changes in the coming weeks to improve the look and feel.
  • New Submission Process - this was a little tougher than expected and we are still working on it. It's included in this month's sprint.
  • Advanced Challenge Search - this turned out pretty well and we are quite pleased with the UI and functionality. There are a few tweaks we still need to make to redis. Since redis excels at set comparisons rather than full-text search, we are looking at adding Elasticsearch for better results (see the sketch after this list).
  • Challenge Prediction API - we have the API up and running but it needs some (planned) enhancements.
  • Challenge Admin UI - this new functionality was deployed but is only available to sponsors, so most members cannot see it. We are still tweaking the UI to make it easier to create and maintain running challenges.
  • Community Ombudsman - the program is up and running with Kenji776 as our first ombudsman. See this blog post for more info.
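
To illustrate the search point above: redis is great at intersecting pre-built sets (for example, a set of challenge IDs per technology or status), but it has no built-in full-text index, which is why we are evaluating Elasticsearch for relevance-ranked queries. The snippet below is a minimal sketch of that idea, not our production code; the key names like challenges:tech:ruby are hypothetical.

```python
# Minimal sketch of set-based challenge filtering in redis (hypothetical key names).
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Assume each challenge ID is added to a set per attribute when it is indexed, e.g.:
#   SADD challenges:tech:ruby 101 105 110
#   SADD challenges:status:open 101 102 110
# Faceted filtering is then a single set intersection -- the kind of query redis excels at.
open_ruby_challenges = r.sinter("challenges:tech:ruby", "challenges:status:open")
print(sorted(int(cid) for cid in open_ruby_challenges))

# What redis cannot do is rank results by free-text relevance (e.g. "build a mobile API"),
# which is the piece we are looking at handing off to Elasticsearch.
```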

The April sprint contains some new functionality plus some enhancements:

  • Massive overhaul to "task" or "first to finish" challenges. The messaging and scoring of these tasks is somewhat confusing, and we want to make them much clearer and easier to compete in. For tasks, we judge submissions in the order in which we receive them. If the first submission passes review, that participant is declared the winner. If it does not pass review, we judge the next submission, and so on, until we either declare a winning submission or exhaust all submissions (see the sketch after this list).
  • Enhancements to the Challenge Prediction API by automating the process of feeding member and challenge data to the Google Prediction API.
  • Final touches to the submission process to include an enhanced UI, hosted video recording with screenr.com, code analysis with Thurgood (stay tuned for details) and logging with Papertrail.
  • Enhanced and targeted scorecards with Madison.
  • Appeals process for challenge submissions. The challenge to build it has wrapped up and we are now evaluating the code and adding additional functionality.
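
To make the first-to-finish judging order concrete, here is a minimal sketch of the rule described above. The passes_review check and the submission fields are hypothetical stand-ins for our actual review pipeline.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List, Optional

@dataclass
class Submission:
    member: str
    submitted_at: datetime  # when we received the submission

def pick_winner(submissions: List[Submission],
                passes_review: Callable[[Submission], bool]) -> Optional[Submission]:
    """Judge submissions in the order received; the first one to pass review wins."""
    for submission in sorted(submissions, key=lambda s: s.submitted_at):
        if passes_review(submission):
            return submission  # declared the winner; later submissions are not judged
    return None  # all submissions exhausted without a winner
```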
So those are the major items in April's sprint. Thanks for the submissions to build this functionality!
