What is Application Scalability?
Understanding System Scalability – A High Scalability Blog
July 27, 2019 | XEO
"Your site needs to scale, when it needs to scale" - Xeo
What is Scalability Anyway?
Scalability is the ability of a site or app to keep up with its current usage. Scaling a system *up* means throwing bigger hardware at the problem, which usually means paying the hosting company for more than you actually need. Scaling a system *out* means adding capacity where it is needed, with the flexibility to adjust that capacity during busy and slow periods. AWS Elastic Beanstalk is a classic example
of a scale-out solution: the beanstalk knows how to add and remove capacity, and does so in response to events such as rising
CPU usage or changes in the number of simultaneously logged-in users. Good software design takes future growth into consideration and future-proofs
the application prudently. This is our high scalability blog to help you get an understanding of this topic.
Can't We Just Scale Later?
Scalability is often ignored by website-builder and corporate-blog-builder services. These services are aimed at getting a glitzy
marketing site up as quickly as possible to start taking orders, so we understand that scalability only comes to mind when the site
starts acting really slow or frequently becomes unavailable.
Determining how fast to add website scalability is a key function of Product Management as they analyze the market. Building software is a complex, multi-sided equation largely determined by the project budget, and the budget itself depends heavily on revenue and usage. So this strategic question comes up a lot: is it worth spending money on future-proofing the software before there is significant revenue and usage?

From an Agile and Lean perspective, yes, this can always be added later. However, we have found it is better to plan the scalability design early on and watch key usage-growth metrics to know when to add each step. The reality is that scaling changes are significant changes to the software itself and will take longer than expected to stabilize. Taking the time to scale each piece correctly is important for offering the best possible customer service, but if you wait too long to get started, customers may have to put up with a slow and clunky service while you rush the scalability work. Rushed coding will need to be patched and refactored later, so you may end up paying for it twice.
Starting at $0.00
We apply Lean principles during Web App Development to scale up to your expected usage.
During initial development there are just a few test users. Initial scale can typically be handled by a single server running multiple server roles.
On the Amazon Web Services (AWS) platform, the free tier hosting level keeps costs low and some clients are seeing $0.00 per month.
We build using best practices including the capability to scale out, but do not activate those elements until the business needs to start paying for them.
What to Scale?
There are four main assets to make into a scaling system in a website or mobile application: code, database, session state, and user data.
- 1. Code is always scalable. Our architecture secures the code in a source code repository (typically Bitbucket or GitHub) and we configure the server to pull the latest version whenever updates are available. This means that the code is flexible and can be placed on any number of servers as needed.
- 2. Database can be moved. Our best-practice design initially places the database on the web server itself for simplicity and cost savings. As usage grows, we simply move that database to its own server or to Amazon RDS and choose the server size needed.
- 3. Session state is how the server knows what a user is doing on the site. Logged in users have a session to track that they are logged in. Initially this session data is stored on the web server. As usage grows we simply move this to Redis so that multiple web servers can share this data.
- 4. User data is always scalable. Our architecture stores user-uploaded files directly in S3 storage, the same place
that Netflix and Dropbox store their files. Since these files are not on a server or in a database, they are already scaled from day one.
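Item 3 above, moving session state to a shared store, can be sketched in a few lines. This is a hypothetical illustration: `SessionStore` and the dict-based backend are stand-ins, and in production the backend would be a real Redis client (e.g. redis-py) so every web server in the cluster sees the same sessions.

```python
# Minimal sketch of a shared session store with a Redis-like get/set
# interface. The DictBackend is a hypothetical stand-in; swapping in a
# real Redis client makes the store shared across all web servers.
import json
import uuid

class SessionStore:
    """Stores session data in a shared key-value backend."""

    def __init__(self, backend):
        self.backend = backend  # any object with get/set, e.g. a Redis client

    def create(self, data):
        session_id = uuid.uuid4().hex
        self.backend.set(f"session:{session_id}", json.dumps(data))
        return session_id

    def load(self, session_id):
        raw = self.backend.get(f"session:{session_id}")
        return json.loads(raw) if raw is not None else None

class DictBackend:
    """Plain dict mimicking Redis GET/SET, for local development only."""
    def __init__(self):
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

store = SessionStore(DictBackend())
sid = store.create({"user_id": 42, "logged_in": True})
print(store.load(sid)["user_id"])  # → 42
```

Because the web server only ever talks to the `backend` interface, moving from local storage to Redis is a configuration change rather than a rewrite, which is exactly why this step scales so cheaply.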
How to Scale?
There are four main aspects to consider when scaling out: role separation, shared caching, deferred processing, and optimization.
- 1. Role Separation: This is the exercise of separating out functions into separate server clusters. By cluster we mean one or more
servers where the number can be adjusted to match usage. Typical role separation includes splitting out into:
- A. Front End Web Server: This server handles the website traffic - all the html, js and css work. These servers sit behind a load balancer which chooses which server to send each web request to. As the number of requests grows, add more servers to the cluster.
- B. Back End Server: This server handles background tasks that are not time critical, offloading work from the front end servers. Common tasks for the backend server are sending emails and text messages, creating PDF reports, and handling daily maintenance tasks.
- C. Admin Web Server: Admins do funny things. They ask hard data questions like "show me all the open invoices since May that have an unpaid balance, then regenerate all those invoices and email them to the clients." This one task may take an hour to accomplish, and it would be unfortunate if the front end web servers were unavailable for that hour while the task completes. Separating admin functions onto their own server gives the flexibility to perform complex tasks without impacting live site users.
- D. Reporting Data: Management reports may require access to comprehensive amounts of data to analyze sales trends by item over months and years. At the same time, the front end web server may no longer need to access this old data as it focuses on generating new data. Therefore, this old data can be pulled off the live database and copied to a reporting database for further analysis.
- E. API Server: Web apps developed in Vue and React rely heavily on API calls to fetch data. This API layer is sometimes separated from the web server, which then serves only HTML, CSS, and JS while the API server serves JSON.
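The load-balancer decision described in item A above can be sketched as a simple round-robin rotation over a front-end cluster. The server names here are hypothetical, and real load balancers (such as the AWS ELB used by Elastic Beanstalk) add health checks and dynamic membership on top of this basic idea.

```python
# Hypothetical round-robin load balancer: each incoming request is
# handed to the next server in the front-end cluster in turn.
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self._cycle = itertools.cycle(self.servers)

    def pick(self):
        """Return the next server to receive a request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
print([lb.pick() for _ in range(4)])  # → ['web-1', 'web-2', 'web-3', 'web-1']
```

Because requests are spread evenly, adding a fourth server to the cluster immediately absorbs a quarter of the traffic, which is what makes scaling out a matter of adjusting a count rather than rewriting code.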
- 2. Shared Caching: Caching simply means keeping a copy lying around and reusing that copy instead of asking for the information every time. Shared caching means that multiple servers can share the same cache. Redis, Memcached, CDNs, and the browser each provide their own cache. Taking advantage of these caches means bundling and reusing data, but also knowing when the data has changed so you can grab a fresh copy as needed. Most of these caches live in memory, which is over 20x faster than disk storage.
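The read-through pattern described above can be sketched as cache-aside logic. This is a simplified illustration: the dict stands in for Redis or Memcached, and `fetch_from_database` is a hypothetical placeholder for a slow query.

```python
# Cache-aside sketch: check the cache first, fall back to the slow
# source on a miss, and invalidate when the underlying data changes.
cache = {}

def fetch_from_database(key):
    # Hypothetical slow lookup; disk/database is far slower than memory.
    return f"value-for-{key}"

def get(key):
    if key in cache:                      # cache hit: reuse the stored copy
        return cache[key]
    value = fetch_from_database(key)      # cache miss: ask the source
    cache[key] = value
    return value

def invalidate(key):
    cache.pop(key, None)                  # data changed: force a fresh copy

print(get("user:1"))  # miss, fetched and cached
print(get("user:1"))  # hit, served from memory
```

The hard part in practice is the `invalidate` call: every code path that writes the data must remember to drop the stale copy, which is why caching is described above as an intensive process.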
- 3. Deferred Processing: This is the exercise of splitting complex tasks into simple tasks and doing as much as possible later on. One simple example is a task that generates one or more email alerts. The user wants to complete the task quickly, yet sending each email may take 30 seconds. Why should the user have to wait a minute before they can do their next task on your website just so an email can be sent? To scale, pop that email onto a queue and have the backend server process it as soon as it can.
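The email example above can be sketched with Python's standard-library queue and a background worker thread. This is a local stand-in for the real setup, where the queue would be something like AWS SQS and the worker would be a backend server; `send_email` here only records the message instead of talking to a mailer.

```python
# Deferred processing sketch: the web request enqueues the email and
# returns immediately; a background worker drains the queue later.
import queue
import threading

email_queue = queue.Queue()
sent = []  # stand-in for actually delivered mail

def send_email(to, subject):
    sent.append((to, subject))  # placeholder for the slow SMTP call

def worker():
    while True:
        job = email_queue.get()
        if job is None:          # sentinel value: shut the worker down
            break
        send_email(*job)
        email_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# The user's request finishes as soon as the job is on the queue.
email_queue.put(("alice@example.com", "Your invoice"))
email_queue.put(None)            # stop signal for this demo
t.join()
print(sent)  # → [('alice@example.com', 'Your invoice')]
```

The user-facing request now takes milliseconds regardless of how slow email delivery is, and the queue absorbs traffic spikes that would otherwise stall the front end.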
- 4. Optimization: This is the slow and tedious process of hand-crafting more efficient code that completes the same task more quickly. It can yield incredible results, but it requires deep changes to the code, which in turn requires comprehensive testing to make sure that everything still works.
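The optimize-then-verify loop can be shown in miniature. The two functions below are hypothetical examples, not from the text: a naive report builder that copies the whole string on every iteration, a hand-tuned version, and a regression check confirming the behavior is unchanged.

```python
# Optimization sketch: same output, faster code, verified by assertion.
def summarize_naive(orders):
    # Rebuilds the string on every iteration: O(n^2) copying overall.
    report = ""
    for order_id, total in orders:
        report += f"{order_id}:{total}\n"
    return report

def summarize_fast(orders):
    # Builds all the pieces, then joins once at the end: O(n).
    return "".join(f"{order_id}:{total}\n" for order_id, total in orders)

orders = [(i, i * 10) for i in range(1000)]
# The comprehensive-testing step: the optimized code must match exactly.
assert summarize_fast(orders) == summarize_naive(orders)
```

The assertion is the important line: without a check like this, a faster function that produces subtly different output is a regression, not an optimization.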
Tricks to Scale Cheaply
Here are a few simple tricks to build into your site so it performs well from the start.
- 1. Building on a robust framework such as Laravel comes with built-in advantages.
- 2. Laravel offers a simple configuration change to move sessions from disk to Redis.
- 3. Laravel provides Laravel Mix which better manages all the extra css and js files, decreasing load times and the total number of server requests.
- 4. Nginx is a lightweight web server which offers performance advantages over Apache.
- 5. PHP-FPM is an efficient multiple request manager that is compatible with Nginx.
- 6. Cloudflare (and other CDNs) offer edge caching, which typically takes care of 50% of the web requests so they never hit your servers.
- 7. AWS S3 is a great place to store images and other static files so they also never require any front end web server involvement.
- 8. AWS SQS is a robust queue that enables deferred processing to be handled by backend servers.
- 9. Database indexes are a key method of database optimization and should be reviewed frequently.
- 10. Have processes in place to observe slow pages and actions, so you can prioritize optimization efforts for maximum customer impact.
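Trick 9 can be demonstrated in miniature with Python's built-in sqlite3 module. The schema and data below are made up for illustration; `EXPLAIN QUERY PLAN` shows the query switching from a full table scan to an index lookup once the index exists (the exact plan wording varies by SQLite version).

```python
# Database-index sketch: add an index and confirm the planner uses it.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE invoices (id INTEGER PRIMARY KEY, status TEXT, total REAL)"
)
con.executemany(
    "INSERT INTO invoices (status, total) VALUES (?, ?)",
    [("open", 100.0), ("paid", 50.0)] * 500,
)

query = "SELECT * FROM invoices WHERE status = 'open'"

# Without an index, this filter scans every row in the table.
plan = con.execute(f"EXPLAIN QUERY PLAN {query}").fetchone()
print(plan[-1])  # typically a SCAN of the invoices table

con.execute("CREATE INDEX idx_invoices_status ON invoices (status)")

# With the index, the planner searches idx_invoices_status instead.
plan = con.execute(f"EXPLAIN QUERY PLAN {query}").fetchone()
print(plan[-1])  # typically a SEARCH using idx_invoices_status
```

Reviewing plans like this for your slowest queries (trick 10) is how indexing effort gets pointed at the pages customers actually feel.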
During the post-mortem meeting on the release, one of the client partners paid this high compliment to the XeoDev co-founders: "David contributed as if he were a founder of our company, not as a vendor; and I really respect that." That sums up our intended relationship model: we would like to be your technology partner, bringing that founder mindset into your business to deliver complex redesign projects together. XeoDev builds SaaS platforms and Android and iOS mobile apps for companies ranging from solopreneurs to unicorns. When you need this type of technology for your business, get started by requesting a quote.