Day 1 ended… late. We finished at midnight after the Atlassian guys threw a party with free beer, foosball and a lot of socialising with fellow geeks. It seems like the whole BBC dev team is here; they are great guys and we shared a lot of ideas.
We also got an amusing talk from Damian Conway.
From Mike Salsbury:
Parallel Keynote: 8 lines of code – Greg Young
This was a talk about Simplicity and Magic. Frameworks contain magic, and IoC is like magic. The problem is that the more magic there is in your code the harder it is for anyone new to ramp up and be able to contribute. So you are only able to hire people who already know how to do magic.
It is much easier (and more useful) to explain Composition to a junior than the magic of Dynamic Proxies. Along the way we also considered whether single-method interfaces shouldn’t be interfaces at all, but simply functions. There were also some examples of using lambdas, a look at how Factory can be an anti-pattern as well as a pattern, and the partial application pattern.
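As a sketch of that idea (in Python, with invented names rather than the talk’s own examples): a single-method “strategy” interface collapses into a plain function, and partial application replaces the factory an IoC container would otherwise conjure up.

```python
from functools import partial

# A "single-method interface" can just be a function. Instead of a
# DiscountPolicy interface with one apply() method, pass a callable.
def percentage_discount(rate, price):
    """Discount a price by a fixed rate."""
    return price * (1 - rate)

# Partial application fixes the rate, yielding the "strategy" object
# a factory or container would otherwise have to construct for us.
staff_discount = partial(percentage_discount, 0.20)

def checkout(price, discount):
    """Composition: checkout is handed its policy explicitly."""
    return discount(price)

print(checkout(100.0, staff_discount))  # 80.0
```

No proxies, no container, and a junior can read the whole thing in one sitting.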
Overall the take-home message was the same one I got from an AI professor: don’t make it harder than it is. Do you really need a framework, or is it just easier to mask the real problem if you use one? If the solution to your problem doesn’t require a framework, don’t use one. Keep it Simple. A similar view was echoed in a later talk by Dan North.
Introducing the BBC’s Linked Data Platform and APIs – David Rogers
We’d played table football with a couple of BBC guys the night before, so I had big expectations for this talk and I wasn’t let down. The story ran from World Cup 2010 through the 2012 Olympics, all building towards a platform with semantic roots that might be available as an open API some time in the near future. We got a full overview of the development and thinking behind the API, and where they’d like to take it next.
There was Scala, a triple-store graph database and lots about linked data. The database is full of Subject, Predicate, Object triples, with no tables or rows. You can access it with SPARQL CONSTRUCT queries that build graphs, and these queries can be exposed as web service endpoints (I think). But allowing that in an open API would probably kill performance, so they need to think of a different way of exposing things.
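To make the “no tables or rows” point concrete, here is a toy in-memory triple store in Python. The subjects and predicates are invented for illustration – this is not the BBC’s actual ontology – and the wildcard match stands in for a SPARQL basic graph pattern.

```python
# A minimal in-memory triple store: no tables or rows, just
# (subject, predicate, object) triples and pattern matching.
triples = {
    ("bbc:WorldCup2010", "rdf:type", "sport:Tournament"),
    ("sport:MatchX", "sport:partOf", "bbc:WorldCup2010"),
    ("sport:MatchX", "sport:hasTeam", "sport:England"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None is a wildcard,
    much like a variable in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which teams play in match X?" -- one pattern, no joins over tables.
print(match(s="sport:MatchX", p="sport:hasTeam"))
```

Everything is a graph edge, so new kinds of facts need no schema migration – you just add more triples.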
There will be an open hack day on this, and I’ll be signing up to see in detail how it all works. I’d like to see more of the semantic navigation using the Tripod API and graffiti tagging, all of which lets you expose things via microformats and RDFa.
Modern webapps with ember.js, node.js and Windows Azure – Tim Park
I was interested to see what ember was like under the hood, and what they would do with node, which I’ve used before. The whole lot was then linked up on the Windows Azure PaaS.
Ultimately it was all deployed with Windows Azure which could link directly into your source control system (git was the example used), and thus nodded in the direction of Continuous Deployment.
This was after a brief history of static, AJAX and dynamic web paradigms.
Architecting PhoneGap Applications – Christophe Coenraets
A great set of principles for building a PhoneGap application so that it can run on the native device and also as a web app – because, fundamentally, a PhoneGap app is a web app.
Accelerating Agile: hyper-performing without the hype – Dan North
I’d also been looking forward to seeing Dan North speak ever since I knew I was going to QCon. I was slightly late and had to search for a seat, there were so many others with the same idea.
7% code coverage was a shock, but that was the overall amount. And apparently in the critical areas it was 150%. But the talk was all about opportunity cost, and investing where the best return can be found. And if that means looking for the dragons instead of doing the obvious stuff, then you are de-risking effectively.
Preferring simple over easy echoed the Keynote earlier in the day. It’s probably easy to pull in a whole framework, but if all you need is simple HTTP then maybe there is a much smaller and smarter solution around. Maybe we shouldn’t have replaced our simple talking-to-port-25 solution with JavaMail all those years ago?
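For a sense of what that “simple solution” might look like: the sketch below is my own Python illustration, not from the talk, and deliberately skips error handling and ESMTP niceties. It speaks just enough SMTP over a plain socket to send one message.

```python
import socket

def smtp_dialogue(sender, recipient, body):
    """Build the command lines for a minimal SMTP exchange."""
    return [
        "HELO client.example\r\n",
        f"MAIL FROM:<{sender}>\r\n",
        f"RCPT TO:<{recipient}>\r\n",
        "DATA\r\n",
        body + "\r\n.\r\n",
        "QUIT\r\n",
    ]

def send_mail(host, sender, recipient, body, port=25):
    """Send one message by talking SMTP over a plain socket --
    the 'simple talking to port 25' approach, no mail framework."""
    with socket.create_connection((host, port)) as sock:
        sock.recv(1024)                       # 220 greeting from server
        for cmd in smtp_dialogue(sender, recipient, body):
            sock.sendall(cmd.encode("ascii"))
            sock.recv(1024)                   # status reply per command

# send_mail("mail.example.com", "a@example.com", "b@example.com", "Hi")
```

Real mail delivery needs retries, TLS and so on – the point is only that the protocol itself is a handful of lines, not that you should ship this.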
From Richard Chamberlain:
People over process – Glen Ford
Glen Ford talking about experiences from moving from, as he puts it, an asshole developer to a mature agile manager. Some nuggets here:
Good people don’t necessarily make a good team. You need quality interactions between them and a vision.
If you’re stuck in an architectural stand-off between two devs, get each of them to present the other person’s point of view to promote considering alternatives. He admitted this backfired once when both people changed their minds and thought the other person’s design was best.
Climbing out of a crisis at the BBC – Katherine Kirk
This was a standard “we did some agile stuff and it made us better” talk. The twist, however, was that they didn’t follow an agile process to the letter, and we got an experience report from a dev working on that team.
Agile and lean are principles, not methods – don’t follow Scrum or Kanban to the letter – use your brain to do the right thing for your team.
To turn things around in an under performing team: Under promise, over deliver.
Boards, meetings and tracking systems don’t make the team. They are there to get people collaborating efficiently.
Only if you communicate openly and truthfully will you be able to run an efficient team.
Collaborate, but don’t over-collaborate. People need to get work done.
High Performance Messaging for Web-Based Trading Systems – Frank Greco
How NOT to measure latency – Gil Tene
A fantastically comprehensive talk by Gil, mirroring our philosophy at Caplin.
When measuring latency, don’t rely on the average and standard deviation. He showed that a dataset with latency spikes – or “hiccups”, as he called them – gets smoothed away by the average and standard deviation. All systems have hiccups, whether from garbage collection, database re-indexing or resizing memory allocations. They are all things you pay for on a regular basis, and they introduce latency. You need to measure the max latency and percentiles; with that data you can tell whether there are hiccups.
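A quick way to see the smoothing effect (a synthetic Python example, not Gil’s data): ten two-second hiccups buried in ten thousand one-millisecond requests barely move the average, while the high percentiles and the max expose them immediately.

```python
# 10,000 requests at 1 ms, plus ten 2-second GC-style hiccups.
latencies_ms = [1.0] * 10_000 + [2000.0] * 10

avg = sum(latencies_ms) / len(latencies_ms)

def percentile(data, pct):
    """Nearest-rank percentile on a sorted copy of the data."""
    s = sorted(data)
    rank = max(0, int(len(s) * pct / 100) - 1)
    return s[rank]

print(f"average : {avg:.2f} ms")                        # ~3 ms -- hiccups vanish
print(f"p99.99  : {percentile(latencies_ms, 99.99)} ms")  # 2000.0 ms
print(f"max     : {max(latencies_ms)} ms")                # 2000.0 ms -- there they are
```

Anyone reading only the average would sign off on this system; anyone reading the max would not.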
Another thing missed in latency testing is “co-ordinated omission”: when a request takes a long time, the test client waits for it to complete before issuing the next one, so the slow period is under-sampled. This creates a smaller, less accurate dataset that flatters the system under test.
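The effect is easy to reproduce. The correction sketched below in Python is the same idea as HdrHistogram’s expected-interval correction, though this is my simplification rather than Gil’s exact algorithm: back-fill the samples the stalled client never sent.

```python
def correct_coordinated_omission(recorded_ms, interval_ms):
    """Back-fill the samples a blocked load generator never issued.

    If a response took longer than the intended request interval, the
    requests that *should* have been sent during the stall would each
    have seen progressively shorter (but still large) latencies.
    """
    corrected = []
    for latency in recorded_ms:
        corrected.append(latency)
        missed = latency - interval_ms
        while missed > 0:                 # synthesise the omitted samples
            corrected.append(missed)
            missed -= interval_ms
    return corrected

# One 10 ms stall at a 1 ms request interval: the raw data has a single
# bad sample; the corrected data shows the whole queue of delayed requests.
print(correct_coordinated_omission([1, 1, 10, 1], 1))
```

The raw dataset records one outlier; the corrected one records the nine extra requests that would have queued up behind it, which is what a real, non-coordinating client population would have experienced.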
In reality, if you’re latency testing, you should try a test where you create a hiccup yourself, for example by pausing the machine, and see whether your results pick it up.
He also showed jHiccup – http://www.azulsystems.com/jHiccup – a tool that adds a thread to a running JVM; the thread sleeps for a millisecond, wakes up and measures whether a millisecond really has passed since it went to sleep. If it took longer, there was a hiccup, and now we can measure it.
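The same idea fits in a few lines of Python – an illustration of jHiccup’s technique, not the tool itself:

```python
import time

def measure_hiccups(samples=100, sleep_ms=1.0):
    """jHiccup-style probe: sleep ~1 ms, then check how long the sleep
    actually took. Any excess is a platform hiccup (GC, scheduling,
    paging) that every other thread in the process also felt."""
    hiccups_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(sleep_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        hiccups_ms.append(max(0.0, elapsed_ms - sleep_ms))
    return hiccups_ms

hiccups = measure_hiccups()
print(f"max hiccup: {max(hiccups):.3f} ms")
```

The probe measures the platform, not your code: because the thread does nothing but sleep, any delay it sees is a floor under the latency every request in the process must pay.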
I’m happy to report that Liberator performance testing tests for average, percentiles and max. Being written in C helps it avoid large hiccups and provide a low max latency.