We started to migrate from third party hosting in early 2010, which meant we had to learn how to build and run our infrastructure internally, and with limited visibility into the core infrastructure needs, we began iterating through various network designs, hardware, and vendors.

By late 2010, we finalized our first network architecture, which was designed to address the scale and service issues we encountered in the hosted colo. We had deep buffer ToRs to support bursty service traffic and carrier grade core switches with no oversubscription at that layer. This supported the early version of Twitter through some notable engineering achievements like the TPS record we hit during Castle in the Sky and World Cup 2014.

Fast forward a few years and we were running a network with POPs on five continents and data centers with hundreds of thousands of servers. In early 2015 we started experiencing some growing pains due to changing service architecture and increased capacity needs, ultimately reaching physical scalability limits in the data center when a full mesh topology would not support the additional hardware needed to add new racks. Additionally, our existing data center IGP began to behave unexpectedly due to this increasing route scale and topology complexity.
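The full mesh scaling limit mentioned above is straightforward to see: every switch must peer directly with every other, so link count (and per-switch port consumption) grows quadratically. A minimal sketch of that arithmetic (the switch counts below are illustrative, not Twitter's actual numbers):

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links needed to fully mesh n switches."""
    return n * (n - 1) // 2

# Each new switch must connect to every existing switch, so adding
# racks gets quadratically more expensive in links and ports.
for n in (8, 16, 32, 64):
    print(n, full_mesh_links(n))
```

At some point the port count required on each switch exceeds what the hardware physically offers, which is why large data center fabrics moved to tiered (e.g. Clos) topologies instead.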