User experience (UX) is an essential part of a digital platform. If users are pleased with the way they interact with a product or a service, either online or in the physical world, chances are high that they will use it more and recommend it to their peers. When addressing the user experience of a website, or any kind of digital platform, among characteristics like accessibility, clarity and delight, speed has a significant (arguably the greatest) impact (see image 1). Long response times can cause users to lose their attention and eventually even abandon the process you wanted them to go through.
Image 1: important elements of UX
Speed and responsiveness are not only essential from the end-user's perspective (visitors, prospects or customers), but also from the back-office standpoint, considering supporting roles such as content authors or webmasters. For these users, any system delay not only impacts their own productivity but might even affect key business areas: just imagine the launch of a marketing campaign or an update to a product's specifications, when timely content publication is crucial.
Defining web performance
In most projects, web performance is covered in the list of technical requirements. But defining the desired performance of a web application can be a challenge and needs to be discussed thoroughly with all stakeholders.
A very basic first attempt at defining the performance of a website might be:
"All web pages should load in under 2 seconds."
This requirement is far from optimal. To serve as a valid technical requirement, the definition of web performance must be refined and made stricter, taking into account the different elements that influence measured load times:
- For example, a recent content update influences the loading time of a page: the content needs to be rendered again and stored in cache, which makes the first load slower. Conversely, serving the same page from cache results in much better performance, as the information loads much faster.
- The user's connection speed is another key element to consider. On a slower connection, content takes longer to download, which affects the user experience even though it is not directly related to the performance of the web application itself.
- The moment at which a page is considered "loaded" is not defined, so it may vary depending on the criteria used.
Due to the many factors at stake, it is more appropriate to treat load times as a statistical distribution and to take this into account when defining performance requirements (see image 2).
Image 2: distribution of load times for a web page (source: Google)
As a result, a much more precise definition could be:
"The Speed Index for 95% of page loads should be within 3 seconds on a fast 3G connection. The Speed Index for the remaining 5% of page loads should be within 6 seconds on the same connection type."
This definition allows performance tests to be specified much more accurately and reduces unnecessary discussions down the road.
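A percentile-based requirement like this can be checked mechanically against measured load times. Below is a minimal sketch in Python; the thresholds mirror the example requirement above, while the sample data and the simple nearest-rank percentile estimate are illustrative choices, not part of the original article:

```python
# Check a percentile-based performance requirement against measured load times.
# Thresholds mirror the example requirement above; the sample data is made up.

def meets_requirement(load_times_ms, p95_limit_ms=3000, max_limit_ms=6000):
    """True if 95% of page loads are within p95_limit_ms and the
    remaining loads are within max_limit_ms."""
    ordered = sorted(load_times_ms)
    # Sample sitting at the 95th percentile (simple nearest-rank estimate).
    idx = max(0, int(0.95 * len(ordered)) - 1)
    return ordered[idx] <= p95_limit_ms and ordered[-1] <= max_limit_ms

# 19 fast loads plus one slow outlier still satisfy the requirement:
print(meets_requirement([1200] * 19 + [5000]))  # True
```

Phrasing the requirement this way means a single slow outlier no longer fails the whole test run, as long as it stays within the secondary limit.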
In the definition above, the Speed Index is the average time at which the visible parts of the page are displayed (visual completeness). It measures user-perceived performance and therefore better reflects the actual user experience.
In image 3 below, the visible part of the website is fully displayed after 445ms and the user can start consuming or interacting with the content. Meanwhile, the rest of the page is loaded without impacting the user.
Image 3: various stages of the page loading process
Not only is the time needed to deliver page content from the web server to the browser important; the efficiency with which the browser can render the contents of the page also has a substantial impact on the total load time.
When talking about web performance, two areas are often distinguished: front-end performance covers all elements of the page loading process that directly involve rendering by the browser, while back-end performance covers the operations executed by the server infrastructure to build the page content.
Nowadays, many tools are available to support the auditing of this process. These tools provide thorough insight into the page rendering process as executed by the browser. A popular example is Chrome DevTools (see image 4), which not only lets you inspect the page structure but also shows how resources are downloaded over the network, including simulation of various network speeds.
Image 4: Chrome DevTools
From our experience, the following elements are the biggest bottlenecks impacting front-end performance:
- Resources blocking the rendering of the page (a slow CRP or Critical Rendering Path): these embedded resources block DOM (Document Object Model) construction and page rendering. Reducing them can significantly improve website speed, enabling the visitor to view and interact with the page content sooner (a lower Speed Index).
- Incorrect cache settings: caching is a powerful technique to reduce latency and improve performance. By optimizing cache settings, unnecessary rendering or downloads can be avoided.
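Cache settings are driven by HTTP response headers such as Cache-Control. A small, self-contained Python sketch of the kind of check an audit might perform; the header values and the helper name are illustrative, not taken from a real site or tool:

```python
# Minimal sketch: parse a Cache-Control header value and decide how long a
# resource may be cached. Header values below are illustrative examples.

def max_age_seconds(cache_control):
    """Extract max-age from a Cache-Control header value; return 0 if the
    response is effectively uncacheable."""
    directives = [d.strip().lower() for d in cache_control.split(",")]
    if "no-store" in directives or "no-cache" in directives:
        return 0
    for d in directives:
        if d.startswith("max-age="):
            try:
                return int(d.split("=", 1)[1])
            except ValueError:
                return 0
    return 0

print(max_age_seconds("public, max-age=3600"))  # a well-cached static asset
print(max_age_seconds("no-store"))              # never cached
```

Running a check like this over all resources of a page quickly surfaces assets that are downloaded on every visit even though they rarely change.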
Image 5: Timing for a browser page request/response
In the illustration above, the TTFB (Time To First Byte) is composed of the following elements:
- Connection latency
- Connection speed
- Time required by the server to render and serve the resource
In the context of back-end performance, we are particularly interested in the influence of web server performance on the TTFB of the main HTML document. If this number is high (Google recommends keeping it under 200 ms), optimizations to the back-end might be required.
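TTFB can be approximated by timing the interval between sending a request and reading the start of the response. A self-contained Python sketch; the throwaway local server only makes the example runnable without network access, and against a real site you would point the function at your own host and path:

```python
# Approximate TTFB: elapsed time between sending a GET request and reading
# the start of the response. A throwaway local server stands in for a real
# site; replace host/port/path to measure an actual server.
import http.client
import http.server
import threading
import time

def measure_ttfb_seconds(host, port, path="/"):
    conn = http.client.HTTPConnection(host, port, timeout=10)
    start = time.perf_counter()
    conn.request("GET", path)
    conn.getresponse().read(1)  # status line, headers and first byte are in
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

# Spin up a local test server on a random free port.
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

ttfb = measure_ttfb_seconds("127.0.0.1", server.server_address[1])
print(f"TTFB: {ttfb * 1000:.1f} ms")  # Google recommends staying under 200 ms
server.shutdown()
```

Note that a measurement like this includes connection latency; measuring from close to the server isolates the rendering time of the back-end itself.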
Important factors in this area are:
- Optimizations to the page rendering process in the CMS
- Reconfiguring software components lower in the server stack (application container, database, caching back-end, etc.)
- Restructuring of the hardware infrastructure (load balancing, etc.)
A word about caching
Caching is often introduced to mask low back-end performance. While this might work in certain circumstances, it should not be relied upon as a general solution to performance issues, as there are several situations where page requests cannot be served from cache. For example, when the requested page is not (yet) stored in cache, or when the request requires personalized content to be delivered (think of authenticated visitors or content authors).
For this reason, caching should only be used to improve a system which is already performing well on its own. A good use case for caching is to introduce a reverse proxy cache like Varnish to handle extreme traffic peaks. This setup minimizes the load on the back-end infrastructure.
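A back-of-envelope model makes the point concrete: the cache hit ratio only dilutes, never removes, the latency experienced by cache misses (first visits, personalized pages). All figures below are illustrative, not measurements:

```python
# Average response time as a function of cache hit ratio. Even at a 90% hit
# ratio, one in ten visitors still waits the full back-end time.
# All figures are illustrative.

def avg_response_ms(hit_ratio, cache_ms, backend_ms):
    return hit_ratio * cache_ms + (1 - hit_ratio) * backend_ms

# Fast back-end: cache misses remain acceptable.
print(round(avg_response_ms(0.9, 5, 150), 1))   # 19.5
# Slow back-end: the average looks fine, but misses still take 3 seconds.
print(round(avg_response_ms(0.9, 5, 3000), 1))  # 304.5
```

The averages in both scenarios look healthy, yet in the second one a tenth of all requests are painfully slow, which is exactly what a cache cannot fix.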
Performance testing should be a continuous and automated effort from the first step of building an application, to prevent unpleasant surprises in later phases of the project.
When optimizing the performance of an existing application, it is crucial to measure progress by executing performance tests before and after changes are made.
In any situation, it is important to define with precision the goal of the performance test: what exactly is to be tested and under which circumstances.
Example: load testing with JMeter
Let's say we want to analyze the time needed for the back-end system to generate frequently visited web pages under a load of 500 concurrent users. We are interested neither in the browser rendering process nor in connection speed/latency. A good tool to support this load test is Apache JMeter.
In this specific example, we consider the following:
- Consult the analytics tool (if available) to check which pages are visited most. These pages can be included in the load testing scenario.
- Execute the load test from within the same data center as the server infrastructure, to eliminate connection properties that might skew the outcome.
- Configure JMeter not to download embedded resources (images, CSS, etc.), since we are only interested in the back-end's efficiency in rendering HTML.
- Check in the analytics tool how long users stay on the website and how many pages they visit during a session. From these metrics, we can derive the average rate at which visitors navigate the website and add a JMeter timer accordingly.
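The last point reduces to simple arithmetic: the average think time between page requests is the session duration divided by the pages viewed per session. A small sketch with hypothetical analytics figures:

```python
# Derive a JMeter think time (timer delay) from analytics metrics:
# average session duration divided by pages viewed per session.
# The figures below are hypothetical.

def think_time_seconds(avg_session_seconds, avg_pages_per_session):
    """Average delay between page requests to configure in a JMeter timer."""
    return avg_session_seconds / avg_pages_per_session

# e.g. visitors stay 180 s on average and view 6 pages per session:
print(think_time_seconds(180, 6))  # 30.0
```

Configuring a timer around this value makes the simulated 500 users navigate at a realistic pace instead of hammering the server back to back.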
The way we configure the load testing tool is essential to obtain representative results and draw correct conclusions.
User experience has become a key differentiator for any business. As a result, the performance of digital platforms is becoming more and more critical for websites and applications. It is essential to define unambiguous requirements and to set up automated load tests that verify performance at multiple stages in the project.
In this article we have identified the different factors that impact performance, distinguishing between the front- and back-end performance of websites. Front-end issues are often less complex to identify and resolve than back-end issues, since properly auditing back-end performance requires knowledge in various areas (such as the CMS used, the hardware infrastructure or the application stack).
About the author
Jan Lemmens is a DXM and ECM Consultant at AMPLEXOR, based in Belgium. As an enthusiast for platform-independent design and open source technology, Jan focuses on architecting and building innovative, cost-effective and user-friendly solutions for Enterprise customers. He has been responsible for several successful Drupal API and architecture projects.