Wow, so Day 2 of Velocity has just finished and I can’t help but feel that I went to all the wrong talks. This is meant as a compliment: I can’t believe the amount of talent on show at this conference, and it is physically painful having to choose which talks to go to. And that is not to mention all the people I am trying to talk to in between, while somehow finding time to eat, write notes, follow the Twitter stream and pick up freebies.
We started the day with a set of great keynotes from a wide variety of cool people. Did you know that the whole of Facebook is basically just one giant (1.2 GB) binary? And how do they deploy it to all their servers? BitTorrent. And no one looks below the fold on a Google search, do they? Top three links or I search for something else. The evidence shows that yes, that is what Western users do, but Asian users peruse the entire page looking for the most relevant information (link). UX developers, pay attention if you want to penetrate China.
To go through all the fascinating things I learnt at the opening plenary would fill this entire blog, and that was before I had even got to the first session of the day!
First Annual State of the Union for Mobile Ecommerce Performance
Slides (via formwall)
So yeah. If you want detail, then you want to go to one of @joshuabixby‘s talks. This guy breaks it down. I was actually warned about this twice, first by @souders and then on the first slide of the presentation, and I was not disappointed. There was more information about every step of the mobile communication chain than I ever thought I wanted to know, and that is one of those things I think we just don’t get enough of.
While XKCD has warned of the dangers of depth-first information gathering, I often think that perhaps we just don’t know enough about the way everything fits together at a deep enough level. This talk had plenty of information about every step in the chain and how each one can be improved.
He also brought up the point that the only way he has found of truly and accurately testing relative performance (as in, have the changes we made actually improved things?) is to do it manually: in his case, with an army of interns performing tests under microscopically controlled conditions. When questioned about the cost of hiring interns to do this, he admitted that it is “brutally expensive”, but that it was the only way to get the kind of testing he wanted. Automation just does not cut it on mobile yet.
While all the “data” of the talk was interesting, the thought that really stuck with me was whether we truly do our testing in a “scientifically rigorous” way. I mean, if I were to file a paper claiming “my application’s performance got better”, would I be happy to stand by my results and observations? Would I expect another developer to reproduce the experiment under the same conditions and get the same results? Should I hold myself to that standard? Questions for another day.
During lunch I was able to have a quick chat with some of the CCP developers, makers of EVE Online. I asked why they weren’t doing a talk on what they learnt from their work on Time Dilation. I think this is a very interesting topic, as it is in direct opposition to what everyone else does in this situation: QoS.
They said that actually Time Dilation, despite its cool name, wasn’t really that interesting. The servers were basically slowing down anyway, so all they did was put a little bit of predictability on top of it. In a sense they simply “productised lag”. While this is a good point, I still think it would make an interesting talk.
I was also able to catch a conversation with some of the runtastic team (capital letters not cool enough for you, startups?). I am quite a keen runner and have been looking for a good running application, so I told them about some of the problems I had been having (we all had a good laugh at this one: that is nearly 10 times faster than the ISS, by the way; don’t developers do boundary checking any more?), and they provided me with a pro runtastic account. I have already tried it out and it seems pretty cool!
The BBC’s experience of the London 2012 Olympics
After lunch was the death slot. I really wanted to go to all the talks, but I had to pick one, and ended up going to the one where @b3cft took us through how the BBC tested their service prior to the London Olympics. Short answer: a lot.
There wasn’t really anything particularly special that they did: they simply went through their entire codebase methodically, finding the points of failure, fixing them and moving on to the next one. They consistently tested the entire system, using third-party tools to simulate the massive number of users they expected, and found out where the problems were.
Before testing even began they doubled up all their physical infrastructure, and then went through finding all the software issues. They kept testing right up until the bitter end, and finally, just one week before the Opening Ceremony, they were happy with the service. Seven days is an eternity in developer time.
Beyond Waterfalls: Visualizing and Understanding Resource Dependencies on Web Pages
No Slides 🙁
Have to say, I didn’t care much for this talk. Qualcomm showed off some kinda cool visualizations for understanding resource dependencies on web pages, but the tool they used is not yet available to anyone. So this was basically useless to me. I signed up for the beta while I was there, but I want my candy and I want it now! Watch this space, but please make sure there is something for me to use before spending 40 minutes (which I could have spent in two other awesome talks) showing me a tool that only your developers use in-house.
I was able to catch up with @guypod about his talk yesterday.
One of the interesting things that I noticed was that while Guy warned of the dangers of consolidating all your files together, quite a few of the other speakers today recommended that I do so.
I asked him what his opinion on this was, and he said it basically comes down to how far along the performance road you are. People who are loading 15 different JS files asynchronously would do well to combine them into one single file, because browsers will typically only open six connections at a time and each connection has a cost.
However, once you have got to the single-file stage, you will notice that you can get further gains by splitting the file into a “Render” and a “Logic” channel. But this is very much a two-step process.
I also asked him about the complexity of splitting JS files when you use Knockout. He didn’t have any magic solution, but advised that I take a deep look at our dependencies and defer business logic whenever possible. So if a user clicks a button, queue the action up and execute it as soon as the logic has loaded. Once I explained that our buttons are used to trade on prices, he realized that this might not be such a good idea, depending on the latency. Still, I think we have a lot of gains to be made with this method.
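A minimal sketch of the queue-and-replay idea (all names here are hypothetical, not from Guy’s talk): buffer user actions until the deferred logic bundle arrives, then replay them in order.

```javascript
// Sketch of deferring business logic: buffer user actions until the
// "Logic" bundle has loaded and registered its handlers, then replay.
var pendingActions = [];
var handlers = null; // filled in once the logic bundle registers itself

function onUserAction(name, payload) {
  if (handlers && handlers[name]) {
    handlers[name](payload); // logic already loaded: run immediately
  } else {
    pendingActions.push({ name: name, payload: payload }); // queue for later
  }
}

// Called by the logic bundle once it has finished loading.
function logicLoaded(registeredHandlers) {
  handlers = registeredHandlers;
  pendingActions.forEach(function (action) {
    if (handlers[action.name]) handlers[action.name](action.payload);
  });
  pendingActions = [];
}
```

For a button that trades on a live price, of course, executing the click a few seconds late is exactly the latency problem Guy pointed out.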
Make Your Mobile Web Apps Fly
So, yet another talk about making your sites fast on mobile devices. I guess at this point I have heard most of what there is to hear on this topic, as it covered much of the same ground as the others. That said, @sw12 did bring up some really great points.
Firstly, make sure that you use the prefetch and prerender hints when possible. They offer some really positive experiences for your customers. And make sure that you set the content-type headers and meta tags: don’t make the browser try and guess what it is about to interpret; declare UTF-8 if that is what you are using.
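As a rough sketch of what those hints look like in practice (the helper name is mine, and in a real page the tags would simply sit in the HTML `<head>` rather than be built as strings):

```javascript
// Hypothetical helper that builds the charset declaration and resource
// hints as strings, so you can see exactly what ends up in the <head>.
function buildHeadHints(prefetchUrls, prerenderUrl) {
  var tags = ['<meta charset="UTF-8">']; // tell the browser the encoding up front
  prefetchUrls.forEach(function (url) {
    tags.push('<link rel="prefetch" href="' + url + '">'); // fetch likely-needed assets early
  });
  if (prerenderUrl) {
    tags.push('<link rel="prerender" href="' + prerenderUrl + '">'); // render the likely next page
  }
  return tags.join('\n');
}
```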
Also, be aware that there are a lot of really cheap budget tablets being produced in Asia at the moment, and a huge number of users are going to be using them. You cannot simply test on an iPad 2 and think you have a baseline for worst-case tablet performance (which is what we do). Many users in China are using tablets worse than the iPad 1.
And please be sure to serve the desktop site to tablets. Many sites simply check whether a user is on an “Android” browser and then return their mobile application. Don’t do this. Users expect the desktop experience on a tablet, and the mobile site looks weird.
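If you must sniff user agents at all, the usual heuristic (a sketch only; UA strings are notoriously unreliable) is that Android phone browsers include a “Mobile” token in the user-agent string while Android tablet browsers do not:

```javascript
// Rough heuristic: Android phones send "Android ... Mobile" in the
// user-agent; Android tablets send "Android" without "Mobile". Only the
// former should ever receive the mobile site.
function wantsMobileSite(userAgent) {
  return /Android/.test(userAgent) && /Mobile/.test(userAgent);
}
```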
Also, on mobile, many users are navigating with their thumbs, so make those buttons larger! And for goodness’ sake, put the interaction buttons at the bottom.
Actually, this is a great point for me to go on a rant. The Palm Pre was a perfectly average smartphone which sadly just launched at the wrong time, in the wrong place. But the one thing that they did correctly was to move all the interaction to the bottom of the screen, so that almost all of it could take place with the user’s thumb. There is nothing worse than trying to hit that back button in the top left of an iPhone screen when you are right-handed. Stick that button at the bottom where I can reach it!
This was probably one of the most useful talks for me on Day 2 of Velocity. @rem provided a great talk on how to track and monitor performance and test software on mobile. At first I thought this was impossible; now I know that it is simply really difficult.
Firstly, there was a lot of emphasis on using device-testing labs: useful if you have a big development ecosystem in your neighbourhood, and if not, you can set one up for yourself relatively cheaply.
There was so much for me to digest in this talk that I don’t even want to get into it at this point, so I recommend that you look at the products mentioned in the slides, and hopefully I can write a more fully-fledged blog post sometime in the near future.
Spy v Spy – Treachery in the Dev/Ops Trenches
Okay, here is a hint to all of you who are thinking about presenting a talk. Geeks are funny people. If you come to Caplin on any day (as long as it’s not release day) you will hear laughter. But whatever you do, don’t try too hard to be funny. And most certainly don’t promise it in your rubric.
I think that their point about considering the whole stack when developing was really insightful. There really are so many different things that can affect performance and functionality, so far out of a developer’s hands, that we really need to engage our brains and consider why something works “on my machine” but not in deployment.
I found it really interesting how different parts of their caching architecture worked against each other in production. This is something that you can’t really predict unless you have an in-depth knowledge of your entire stack, which no one has.
And that was it. Can’t wait until tomorrow!