Making History with Code Festival 2014

What do you get when you combine a coding contest, a music and arts festival, and games? Do those things even belong together?


For two days in November 2014, two hundred collegiate coders came together in Tokyo to participate in Code Festival 2014, put on by Recruit and Indeed Tokyo. The event included five distinct coding challenges, as well as fun non-coding activities. After the initial coding challenge – the main round – participants could brush up on their skills with tutoring tailored to that challenge. Not your average coding contest.

So, what does a room full of 200 coders look like?

Code Festival 2014

Why make history?

Organizers wanted to capitalize on a love of coding and competition and bring lots of talented coders together to work hard, have fun, make friends, and learn something new. Traditionally, programming contests are limited to a few top competitors, which can discourage those who don’t make the cut.

Ayaka Matsuo, the event’s project lead, decided to break free from that tradition. The structure of the festival allowed many more participants to take advantage of the events. Another history-making facet of the event was Matsuo’s event team: 16 new college hires who will join Indeed Tokyo after graduation in April 2015. They provided ideas, helped run the event, and generated a lot of enthusiasm.

Expanding on a tradition

Indeed and Recruit held coding duels in Fall 2013, December 2013, and February 2014. (Read about them here.) They were a warm-up to the November 2014 festival in Tokyo, providing a lot of valuable insights into how to expand the types of coding challenges. Code Festival 2014 was so successful that plans are already forming for the next event. Could we host even more competitors?

The coding challenges

The event included five separate coding challenges. Two of the challenges – the main round and the morning programming contest on the second day – were standard programming contests, but the remaining three were not at all traditional.

Main round

All 200 participants worked through 10 questions (including debugging) in 3 hours.

Code Festival 2014 main round

Participants during the main round portion of the contest

The participant who solved the most problems in the least amount of time won the round. The top five participants from the main round advanced to the exhibition challenge.

Code Festival 2014 top five contestants

Top five winners with Indeed’s Senior VP of Engineering, Doug Gray

The winner took home 294,409 yen, an amount that looks prime but isn’t. Last year’s Tokyo coding duel awarded prize amounts that were prime numbers. This year, to mix it up, the organizers chose amounts that are strong pseudoprimes. In Japanese, these numbers are called Kyogi sosu (強擬素数), a clever choice, since “Kyogi” can also mean strongly doubted or competition. Check out the prize amounts for the top 20 contestants here.

Exhibition challenge

On the evening of the first day, the top five finalists from the main round moved to a separate room for the exhibition challenge.

Code Festival 2014 exhibition round

Participants were filmed during the exhibition challenge.

This room was far from private, however, as live video from each of the five computers was streamed into the main hall, allowing everyone to follow along with the competitors’ progress in solving the problem. Audio commentary added to the excitement.

Code Festival 2014 exhibition round onlookers

Onlookers during the exhibition challenge

Were the challengers aware that their every move was being evaluated in the next room? Yes! And being watched only made the competition more lively.

Morning programming contest

All 200 participants were invited to return the next morning for another programming contest. To change it up, participants joined one of three groups, determined by skill level, and competed individually against others in the same group.

AI challenge

The AI programming contest required participants to write code that controls virtual players in a computer game. Fifty coders who had registered in advance of the festival took part in a preliminary challenge, with the top 16 advancing to the final. Those 16 were divided into four groups of four, competing tournament style.

An exhibition match followed, pitting Naohiro Takahashi (president of AtCoder and a competitive programmer) and Colun (a competitive programmer) against the first- and second-place winners of the AI challenge.

Team relay

During the last challenge on day 2, the 200 participants were divided into 20 teams of 10 members each. Each team needed to solve 10 questions, one at a time, within 1 hour 30 minutes. Live video aired, along with commentator play-by-play.

While one teammate solved a problem, the rest of the team huddled apart from the contest area. If the teammate holding the “baton” had a question, they stepped away from the computer to confer with the rest of the team.

Code Festival 2014 team relay

Team huddle during the relay

Other festival activities

Event organizers sought to ensure that all participants had a chance to learn, play, and connect with their peers. Non-coding activities included calligraphy with code-related content, board games, Taiko-no Tatsujin (drum masters), and DDR (Dance Dance Revolution).

Code Festival 2014 calligraphy

Calligraphy coding

Participants also had the opportunity to take private lessons with coding competition experts and attend panels with industry professionals covering these topics:

  • The future of programming contests
  • Are coding competitions effective for learning programming?
  • How to become a red coder (a top-rated competitive programmer)
  • How to increase your speed in coding competitions

Want to know more?

Gizmodo Japan wrote more about Code Festival here and here. To review the participants’ submissions, navigate to the AtCoder standings page and click the magnifying glass beside each user’s name. To brush up on your own skills, compete on TopCoder and challenge yourself with past problems from the ACM-ICPC World Finals.


@IndeedEng 2014 Year In Review

We help people get jobs. That’s our mission at Indeed. And being the #1 job site worldwide challenges each of our engineers to deliver the best job seeker experience at scale. When we launched this blog in 2012, we set out to contribute to the broader software community by sharing what we’ve learned from these challenges.

Response to our @IndeedEng tech talks continues to be strong, with over 700 people attending the series in 2014. And our engineering blog received 59,000 views from 128 countries.

Here’s a brief recap of content we shared in 2014:

Testing and measuring everything is central to our development process at Indeed. We use Proctor, our A/B testing framework, to accomplish this, and we open sourced the framework in 2013. Building on this in 2014, we described Proctor’s features and tools and then wrote about how we integrate Proctor into our development process at Indeed. We also released proctor-pipet, a Java web application that allows you to deploy Proctor as a remote service.

We open sourced util-urlparsing, a Java library we created to parse URL query strings without unnecessary intermediate object creation.

In the first half of 2014, we held several tech talks devoted to Imhotep, our interactive data analytics platform. Imhotep powers data-driven decision making at Indeed and we were excited to talk about the technology: scaling decision trees, building large-scale analytics tools, and using Imhotep to focus on key metrics. Then in November, we open sourced part of the platform and held a tech talk and workshop for attendees to explore their own data in Imhotep.

Being a global company is a challenge we take seriously. We shared our experience of iteratively expanding to new markets and how international success requires solving a diverse set of technical challenges.

Changes coming to the blog and talks pages include translating content into Japanese. Look for more translated posts in the months to come.

We’d like to thank everyone who helped make these accomplishments possible. If you follow the blog and watch our @IndeedEng talks, thank you for your support! We look forward to continuing the conversation in 2015.


Why I Unit Test

If you’ve done any software development in the last fifteen years, you’ve heard people harping on the importance of unit testing. Your manager might have come to you and said “Unit tests are great! They document the code, reduce the risk of adding bugs, and reduce the cost and risk of making changes so we don’t slow down over time! With good unit tests, we can increase overall delivery velocity!” Those are all great reasons to unit test, but they are all fundamentally management reasons. I agree with them, but they don’t go to the core of why I, as a developer, unit test.

The reason I unit test is simple: Unit testing is both an opportunity and a strong incentive to improve new and existing designs, and to improve my skills as a designer of software. The trick is to write as few unit tests as possible and ensure that each test is very simple.

How does that work? It works because writing simple unit tests is intrinsically boring, and the worse your code is, the more difficult and boring it will be to test. The only way to get any traction with unit testing is to drastically improve your implementation to the point where it can be covered with hardly any unit tests at all, and then write those.

Avoiding unit tests by improving your implementation

Here are some approaches for writing fewer unit tests:

  • Refactor out repeated code. Each block of code that you are able to abstract out is one less unit test to write.
  • Delete dead code. You don’t have to write unit tests for code that you can delete instead. If you think this is obvious, then you haven’t seen many large legacy code bases.
  • Externalize framework boilerplate as configuration or annotation. That way, you only have to write unit tests for product logic rather than scaffolding.
  • Every branch of code needs at least one unit test, so every if statement or loop you can remove is one less test to write. Depending on your implementation language, if statements and loops can be removed by subtype polymorphism, code motion, pluggable strategies, aspects, decorators, higher order combinators or a dozen other techniques. Each branch point in your code is both a weakness and a requirement for additional testing. Remove them if at all possible.
  • Identify deeper data-flow patterns and abstract them. Often pieces of code that don’t look similar can be made similar by pulling out some incidental computations. Once you’ve done that, then underlying structures can be merged. That way, more and more of your code becomes trivially testable branch-free computations. In the limit, you end up with a bunch of simple semantic routines (often predicates or simple data transformations) strung together with a double handful of reusable control patterns.
  • Separate out your business logic, persistence, and inter-process communications as much as possible, and you can avoid a bunch of tedious mucking with mock objects. Mock objects are code smells, and overuse of them may indicate that your code has become overly coupled.
  • Figure out how to generalize your logic so that your edge cases are covered by your main flow, and single tests can cover diverse and complex inputs. Too often we write single-purpose code for special cases, when we could instead search for more general solutions that cover those cases without special handling. Note however, that discovering the simpler, more general solutions is often much more difficult than creating a bunch of special cases. You may not have enough time to write small amounts of simple code, and instead have to write large amounts of complex code.
  • Recognize and replace logic that is already implemented as methods in existing libraries, and you can push the trouble of unit testing off onto the library’s author.
  • If you can simplify your data objects so much that they are immutable and their operations follow simple algebraic laws, you can utilize property-based testing, where your unit tests literally write themselves.
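
As a concrete sketch of that last point, here is a hand-rolled property check; a framework such as jqwik or QuickTheories would generate the inputs for you, but the idea needs no library. (The Money class is hypothetical, invented for illustration.)

```java
import java.util.Random;

// A minimal, hand-rolled property check: if a data object is immutable and
// its operations obey algebraic laws, one loop over random inputs can stand
// in for many hand-written example cases.
public final class MoneyProps {
    static final class Money {
        final long cents;
        Money(long cents) { this.cents = cents; }
        Money plus(Money other) { return new Money(this.cents + other.cents); }
    }

    // The law under test: a + b == b + a for arbitrary amounts.
    public static boolean plusIsCommutative(long seed, int trials) {
        Random rnd = new Random(seed);
        for (int i = 0; i < trials; i++) {
            Money a = new Money(rnd.nextInt(1_000_000));
            Money b = new Money(rnd.nextInt(1_000_000));
            if (a.plus(b).cents != b.plus(a).cents) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(plusIsCommutative(42L, 1000));  // prints "true"
    }
}
```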

But yammering is cheap; let’s see some code!

Finding deep patterns and abstracting out repeated code

A common pattern in data-science code is finding the element of a collection that maximizes some function. The simplest Java code for this might resemble the following:

  double bestValue = Double.MIN_VALUE;
  Job bestJob = null;
  for (Job job : jobs) {
    if (score(job) > bestValue) {
      bestValue = score(job);
      bestJob = job;
    }
  }
  return bestJob;

This is quick enough to code that you might write it without even thinking about it. Just a loop and an if! What can go wrong? That’s fine the first few times you write it, but you’re building up technical debt every time. Writing unit tests is where the repetition and risk really start to show up. Every block of code like this needs tests not just for correctness in the common case, but also for a bunch of edge cases: what happens if we pass in an empty collection? A single-element collection? Null? Even the simple code above has some bugs that unit tests can find, but you have to write a lot of them every time you wish to do an optimization, and I don’t know about you, but frankly I’ve got more useful things to do with my time.
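
To make the edge-case burden concrete, here is a sketch of a test that catches one of those bugs: Double.MIN_VALUE is the smallest *positive* double, not the most negative one, so the naive loop returns null whenever every score is negative. (The class and method names below are hypothetical, invented for illustration.)

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

// NaiveBest copies the loop-and-if pattern from the snippet above, with its
// sentinel bug intact: Double.MIN_VALUE is the smallest positive double, so
// when every score is negative the comparison never fires.
public final class NaiveBest {
    public static String best(List<String> jobs, ToDoubleFunction<String> score) {
        double bestValue = Double.MIN_VALUE;  // bug: should be Double.NEGATIVE_INFINITY
        String bestJob = null;
        for (String job : jobs) {
            if (score.applyAsDouble(job) > bestValue) {
                bestValue = score.applyAsDouble(job);
                bestJob = job;
            }
        }
        return bestJob;
    }

    public static void main(String[] args) {
        // All scores are negative: a correct argMax would return "a" (score -1),
        // but the naive version returns null.
        String result = best(List.of("a", "bb"), s -> -s.length());
        System.out.println(result);  // prints "null"
    }
}
```

One three-line test like this per hand-rolled loop adds up quickly, which is exactly the incentive to abstract the loop away.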

A better solution is to realize that even this small amount of code repetition can and should be abstracted out, coded and tested only once. It also gives us a chance to genericize the code and fix some edge cases.

    public static <J> J argMax(Iterable<J> collection,
                               Function<J, Double> score) {
      double bestValue = Double.MIN_VALUE;
      J bestElement = null;
      if (collection != null) {
        for (J element : collection) {
          if (score.apply(element) > bestValue) {
            bestValue = score.apply(element);
            bestElement = element;
          }
        }
      }
      return bestElement;
    }

This code needs to be unit tested only once. For an even better solution, we can replace all of this logic with a library call (in this case from Google’s Guava library):

  public static <J> J argMax(Iterable<J> collection,
                             Function<J, Double> score) {
    return Ordering.natural().onResultOf(score).max(collection);
  }
After that, you only need unit tests for each different scoring function you use. Everything else has already been handled.
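
If Guava isn’t on your classpath, the same replace-with-a-library move can be sketched with the JDK alone: Stream.max takes a comparator, handles the empty collection by returning Optional.empty(), and never needs a sentinel value. (This is an illustrative alternative, not the approach from the original post.)

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.function.ToDoubleFunction;

// The library-replacement idea using only the JDK: Stream.max compares
// elements by their score and returns Optional.empty() for an empty input,
// so the Double.MIN_VALUE sentinel disappears entirely.
public final class StreamArgMax {
    public static <J> Optional<J> argMax(List<J> collection, ToDoubleFunction<J> score) {
        return collection.stream().max(Comparator.comparingDouble(score));
    }

    public static void main(String[] args) {
        // Works for all-negative scores, where the hand-rolled loop failed.
        Optional<String> best = argMax(List.of("a", "bb", "ccc"), s -> -s.length());
        System.out.println(best.get());  // prints "a"
    }
}
```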

Avoiding unit tests: a path to understanding great software design

The thing about all of these unit-test avoidance techniques is that they are essential to the process of creating robust and supple designs even if you weren’t going to do any unit testing at all! Too often, in our rush to simply get something working, we don’t follow these techniques, but continual unit testing gives us a time and a reason to do it right. In this way, you can leverage aggressive laziness in implementing unit tests to drive continuous improvement of your project design and implementation.

At least, it can if you let it. If you spend your unit testing time writing unit tests for your code without improving its underlying design, you’ll most likely never learn anything, and you’ll have little reason to create code with quality better than “it mostly works.” If you spend your unit testing time looking to minimize the total amount of testing code that you write (by improving your product code), you’ll quickly learn just what it means for software to be well-designed. I don’t know about you, but that’s why I love programming in the first place.

Dave Griffith is a software engineer at Indeed and has been building software systems for over 20 years.
