ManageFlitter Performance Upgrades
Making your account load quickly is a top priority for us here at ManageFlitter. No one likes to wait!
18,000 unique accounts log in to ManageFlitter every single day. That’s an average of one login every 5 seconds (with many more at peak times). Since we process data on your entire Twitter account, that’s a lot of information to sift through: on average, we need to process data on 5,000 Twitter accounts every second in order to deliver your personal list of unfollowers. Phew!
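For the curious, the back-of-envelope arithmetic behind those averages is simple (the figures below are just the ones quoted above, not new measurements):

```python
# Logins per day, as quoted above.
DAILY_LOGINS = 18_000
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Average gap between logins: 86,400 / 18,000 = 4.8 seconds,
# i.e. roughly "one login every 5 seconds".
seconds_between_logins = SECONDS_PER_DAY / DAILY_LOGINS

print(f"One login every {seconds_between_logins:.1f} seconds on average")
```

At peak times the real gap is much shorter than this average, which is why capacity planning has to target peak load rather than the daily mean.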
Scaling ManageFlitter has been an ongoing challenge. We started off with a single server. 3 days later we appeared on Techcrunch and have been scrambling to add capacity ever since. ManageFlitter now runs across 72 unique server instances.
Recently we’ve noticed performance degrading, particularly on large accounts. We had hit a limit where we could no longer scale up our databases and provide more capacity for our ever-growing user base. This was very frustrating for us. For some users, pages would not load at all. While it did not affect everyone, we’re really sorry to everyone it did affect.
However, I’m pleased to announce that we have just completed a major upgrade of our entire backend service. We’ve fundamentally changed the way data is routed in our network and removed the previous limits on adding new servers. We have also made huge improvements to the way we cache the data we download from Twitter. This means your account will load significantly faster. We will also use far fewer API calls when downloading your information, which means you can worry less about hitting Twitter’s hourly limit of 350 API calls.
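We haven’t detailed our caching internals here, but the core idea of spending fewer API calls is straightforward: keep recently downloaded data and serve it from memory instead of re-fetching it from Twitter. As a rough illustration only (the class and function names below are hypothetical, not our actual code), a minimal time-based cache looks like this:

```python
import time

class TTLCache:
    """Minimal time-based cache: serve recently fetched data from memory
    instead of spending another API call to re-download it."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (fetched_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        now = time.time()
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]          # cache hit: no API call spent
        value = fetch()              # cache miss: one API call
        self.store[key] = (now, value)
        return value

# Hypothetical usage: two page loads for the same account within the TTL
# cost only one simulated API call.
api_calls = []

def fetch_followers():
    api_calls.append(1)              # stand-in for a real Twitter API request
    return ["follower_a", "follower_b"]

cache = TTLCache(ttl_seconds=300)
cache.get_or_fetch("user:123:followers", fetch_followers)
cache.get_or_fetch("user:123:followers", fetch_followers)
print(f"API calls spent: {len(api_calls)}")  # prints 1, not 2
```

The trade-off in any scheme like this is freshness: a longer TTL saves more API calls but means unfollower data can lag slightly behind reality.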
These changes are even more exciting because they will allow us to deliver new features that have been sitting on the back-burner for a while, waiting on the extra capacity of this new system. Stay tuned for more information soon.
There may still be a few more bumps over the next week as we continue to fine-tune this new architecture, so please bear with us and, as always, let us know at [email protected] if you encounter any errors.