Performance optimization is a double-edged sword. On the one hand, it can improve website performance; on the other, it is complicated to configure, and there are many rules to follow. Additionally, some performance optimization rules aren’t suitable for all scenarios and should be used with caution. Readers should approach this article with a critical eye.
1. Reduce HTTP Requests
A complete HTTP request needs to go through DNS lookup, TCP handshake, browser sending the HTTP request, server receiving the request, server processing the request and sending back a response, browser receiving the response, and other processes. Let’s look at a specific example to understand HTTP:
Here is a concrete HTTP request, with the timing breakdown as shown in Chrome DevTools; the requested file is 28.4KB.
Terminology explained:
Queueing: Time spent in the request queue.
Stalled: The time difference between when the TCP connection is established and when data can actually be transmitted, including proxy negotiation time.
Proxy negotiation: Time spent negotiating with the proxy server.
DNS Lookup: Time spent performing DNS lookup. Each different domain on a page requires a DNS lookup.
Initial Connection / Connecting: Time spent establishing a connection, including TCP handshake/retry and SSL negotiation.
SSL: Time spent completing the SSL handshake.
Request sent: Time spent sending the network request, usually a fraction of a millisecond.
Waiting (TTFB): TTFB (time to first byte) is the time from when the page request is made until the first byte of response data is received.
Content Download: Time spent receiving the response data.
From this example, we can see that the actual data download time accounts for only 13.05 / 204.16 = 6.39% of the total. The smaller the file, the smaller this ratio; the larger the file, the higher the ratio. This is why it’s recommended to combine multiple small files into one large file, thereby reducing the number of HTTP requests.
2. Use HTTP2
Compared to HTTP1.1, HTTP2 has several advantages:
Faster parsing
When parsing HTTP1.1 requests, the server must continuously read bytes until it encounters the CRLF delimiter. Parsing HTTP2 requests isn’t as complicated because HTTP2 is a frame-based protocol, and each frame has a field indicating its length.
Multiplexing
With HTTP1.1, if you want to make multiple requests simultaneously, you need to establish multiple TCP connections because one TCP connection can only handle one HTTP1.1 request at a time.
In HTTP2, multiple requests can share a single TCP connection, which is called multiplexing. Each request and response is represented by a stream with a unique stream ID to identify it. Multiple requests and responses can be sent out of order within the TCP connection and then reassembled at the destination using the stream ID.
Header compression
Comparing two consecutive requests to the same server, you can see that a lot of header data is repeated. If we could store the same headers and only send the differences between them, we could save a lot of bandwidth and speed up the request time.
HTTP/2 uses “header tables” on the client and server sides to track and store previously sent key-value pairs, and for identical data, it’s no longer sent through each request and response.
Here’s a simplified example. Suppose the client sends the following header requests in sequence:
Header1:foo
Header2:bar
Header3:bat
When the client sends a request, it creates a table based on the header values:
Index | Header Name | Value
------+-------------+------
62    | Header1     | foo
63    | Header2     | bar
64    | Header3     | bat
When the server receives the request, it creates the same table. When the client sends the next request, if the headers are the same, it can directly send a header block like this:
62 63 64
The server will look up the previously established table and restore these numbers to the complete headers they correspond to.
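As a toy illustration of this idea (a simplification, not the real HPACK algorithm), here is a sketch of such a header table in JavaScript. The first time a header pair is sent, both sides store it; afterwards, only its index goes on the wire:

```javascript
// Simplified "header table" sketch (not real HPACK): known key-value
// pairs are replaced by small integer indices on later requests.
class HeaderTable {
  constructor() {
    this.table = new Map(); // "name:value" -> index
    this.entries = [];      // index -> { name, value }
    this.nextIndex = 62;    // dynamic entries start after the static table
  }

  // Returns just an index if the pair is already known; otherwise
  // stores it and returns the full pair (what would go on the wire).
  encode(name, value) {
    const key = name + ':' + value;
    if (this.table.has(key)) {
      return { index: this.table.get(key) };
    }
    const index = this.nextIndex++;
    this.table.set(key, index);
    this.entries[index] = { name, value };
    return { name, value, index };
  }

  // Restores the full header from an index.
  decode(index) {
    return this.entries[index];
  }
}
```

With this sketch, the first request carries the full pairs, while a second request with identical headers only needs the indices 62 63 64, which the receiver expands back using its copy of the table.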
Priority
HTTP2 allows more urgent requests to be given a higher priority, and the server can handle them first after receiving them.
Flow control
Since the bandwidth of a TCP connection (determined by the network path from client to server) is fixed, when there are multiple concurrent requests, one request taking more of the bandwidth leaves less for the others. Flow control allows the sender to precisely control the flow of each individual stream.
Server push
A powerful new feature added in HTTP2 is that the server can send multiple responses to a single client request. In other words, in addition to responding to the initial request, the server can also push additional resources to the client without the client explicitly requesting them.
For example, when a browser requests a website, in addition to returning the HTML page, the server can also proactively push resources based on the URLs of resources in the HTML page.
Many websites have already started using HTTP2, such as Zhihu. In the Protocol column of Chrome DevTools, “h2” refers to the HTTP2 protocol, and “http/1.1” refers to the HTTP1.1 protocol.
3. Use Server-Side Rendering
Client-side rendering: Get the HTML file, download JavaScript files as needed, run the files, generate the DOM, and then render.
Server-side rendering: The server returns the HTML file, and the client only needs to parse the HTML.
Pros: Faster first-screen rendering, better SEO.
Cons: Complicated configuration, increases the computational load on the server.
Below, I’ll use Vue SSR as an example to briefly describe the SSR process.
Client-side rendering process
Visit a client-rendered website.
The server returns an HTML file containing resource import statements and an empty root element for the app to mount into.
The client requests resources from the server via HTTP, and when the necessary resources are loaded, it executes new Vue() to instantiate and render the page.
Server-side rendering process
Visit a server-rendered website.
The server checks which resource files the current route component needs, then fills the content of these files into the HTML file. If there are AJAX requests, it will execute them for data pre-fetching and fill them into the HTML file, and finally return this HTML page.
When the client receives this HTML page, it can start rendering the page immediately. At the same time, the page also loads resources, and when the necessary resources are fully loaded, it begins to execute new Vue() to instantiate and take over the page.
From the two processes above, we can see that the difference lies in the second step. A client-rendered website will directly return the HTML file, while a server-rendered website will render the page completely before returning this HTML file.
What’s the benefit of doing this? It’s a faster time-to-content.
Suppose your website needs to load four files (a, b, c, d) to render completely. And each file is 1 MB in size.
Doing the math: a client-rendered website needs to load the four files plus an HTML file to complete the home page rendering, totaling 4MB (ignoring the HTML file size). A server-rendered website only needs to load a single fully rendered HTML file, which usually isn’t very large, generally a few hundred KB (my personal blog, which uses SSR, loads an HTML file of about 400KB). This is why server-side rendering is faster.
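As a toy illustration of the difference (hypothetical helper names, not real Vue SSR): the client-rendered response ships an empty mount point that stays blank until JavaScript runs, while the server-rendered response ships the final markup, which the browser can paint immediately:

```javascript
// Toy sketch: what each kind of server sends for the same page.
function clientRenderedHTML() {
  // Nothing visible until app.js downloads, runs, and mounts.
  return '<div id="app"></div>';
}

function serverRenderedHTML(data) {
  // The server runs the "component" itself and inlines the result,
  // so the browser can render content before any JS executes.
  const list = data.map(item => `<li>${item}</li>`).join('');
  return `<div id="app"><ul>${list}</ul></div>`;
}
```

In real SSR frameworks the client-side bundle then "hydrates" this markup, attaching event listeners and taking over rendering.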
4. Use CDN for Static Resources
A Content Delivery Network (CDN) is a set of web servers distributed across multiple geographic locations. We all know that the further the server is from the user, the higher the latency. CDNs are designed to solve this problem by deploying servers in multiple locations, bringing users closer to servers, thereby shortening request times.
CDN Principles
When a user visits a website without a CDN, the process is as follows:
The browser needs to resolve the domain name into an IP address, so it makes a request to the local DNS.
The local DNS makes successive requests to the root server, top-level domain server, and authoritative server to get the IP address of the website’s server.
The local DNS sends the IP address back to the browser, and the browser makes a request to the website server’s IP address and receives the resources.
If the user is visiting a website that has deployed a CDN, the process is as follows:
The browser needs to resolve the domain name into an IP address, so it makes a request to the local DNS.
The local DNS makes successive requests to the root server, top-level domain server, and authoritative server to get the IP address of the Global Server Load Balancing (GSLB) system.
The local DNS then makes a request to the GSLB. The main function of the GSLB is to determine the user’s location based on the local DNS’s IP address, filter out the closest local Server Load Balancing (SLB) system to the user, and return the IP address of that SLB to the local DNS.
The local DNS sends the SLB’s IP address back to the browser, and the browser makes a request to the SLB.
The SLB selects the optimal cache server based on the resource and address requested by the browser, and returns that cache server’s address to the browser.
The browser then redirects to the cache server based on the address returned by the SLB.
If the cache server has the resource the browser needs, it sends the resource back to the browser. If not, it requests the resource from the source server, sends it to the browser, and caches it locally.
5. Place CSS in the head and JavaScript Files at the Bottom
CSS loading and parsing block rendering and also block JS execution.
JS loading and execution block HTML parsing, and therefore DOM construction.
If CSS and JS tags are placed in the HEAD tag and take a long time to load and parse, the page stays blank. Therefore, JS files should be placed at the bottom, so that HTML parsing completes before the JS files load, presenting the page content to the user as early as possible.
So why should CSS files still be placed in the head?
Because loading HTML first and then loading CSS will make users see an unstyled, “ugly” page at first glance. To avoid this situation, CSS files should be placed in the head.
Additionally, JS files can also be placed in the head as long as the script tag has the defer attribute, which downloads the script in parallel with HTML parsing and defers its execution until parsing is finished.
6. Use Font Icons (iconfont) Instead of Image Icons
A font icon is an icon made into a font. When using it, it behaves just like text, and you can set attributes such as font-size, color, etc., which is very convenient. Moreover, font icons are vector graphics and stay sharp at any size. Another advantage is that the generated files are particularly small.
7. Make Good Use of Caching, Avoid Reloading the Same Resources
To prevent users from having to request files every time they visit a website, we can control this behavior with the Expires or Cache-Control: max-age response headers. Expires sets an absolute time: as long as the current time is before it, the browser won’t request the file but will use the cache directly. max-age is a relative time (in seconds), and it’s recommended to use max-age instead of Expires.
However, this creates a problem: what happens when the file is updated? How do we notify the browser to request the file again?
This can be done by updating the resource link addresses referenced in the page, making the browser actively abandon the cache and load new resources.
The specific approach is to associate the URL modification of the resource address with the file content, which means that only when the file content changes, the corresponding URL will change, thereby achieving file-level precise cache control. What is related to file content? We naturally think of using digest algorithms to derive digest information for the file. The digest information corresponds one-to-one with the file content, providing a basis for cache control that’s precise to the granularity of individual files.
8. Compress Files
Compressing files can reduce file download time, providing a better user experience.
Thanks to the development of webpack and node, file compression is now very convenient.
In webpack, the following plugins can be used for compression:
JavaScript: UglifyPlugin
CSS: MiniCssExtractPlugin
HTML: HtmlWebpackPlugin
In fact, we can do even better by using gzip compression. This can be enabled by adding the gzip identifier to the Accept-Encoding header in the HTTP request header. Of course, the server must also support this feature.
Gzip is currently the most popular and effective compression method. For example, the app.js file generated after building a project I developed with Vue has a size of 1.4MB, but after gzip compression, it’s only 573KB, reducing the volume by nearly 60%.
Here’s how to enable gzip in a Node (Express) server; on the webpack side, assets can also be pre-compressed at build time with a plugin such as compression-webpack-plugin.
const express = require('express')
const compression = require('compression')

const app = express()
// Use before other middleware
app.use(compression())
9. Image Optimization
(1). Lazy Loading Images
In a page, don’t initially set the path for images, only load the actual image when it appears in the browser’s viewport. This is lazy loading. For websites with many images, loading all images at once can have a significant impact on user experience, so image lazy loading is necessary.
First, set up the images like this, where images won’t load when they’re not visible in the page:
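The original snippet was not preserved here; below is a minimal sketch of the technique, assuming each image stores its real URL in a data-src attribute (e.g. `<img data-src="photo.jpg">`). The viewport check is kept as a pure function so the idea stays clear:

```javascript
// An image's top edge is visible once it's above the viewport's bottom.
function isInViewport(rectTop, viewportHeight) {
  return rectTop < viewportHeight;
}

// Browser wiring: on scroll, promote data-src to src for visible images.
if (typeof document !== 'undefined') {
  const loadVisibleImages = () => {
    document.querySelectorAll('img[data-src]').forEach(img => {
      if (isInViewport(img.getBoundingClientRect().top, window.innerHeight)) {
        img.src = img.dataset.src;       // triggers the actual download
        img.removeAttribute('data-src'); // don't process it again
      }
    });
  };
  window.addEventListener('scroll', loadVisibleImages);
  loadVisibleImages(); // load whatever is visible on first paint
}
```

In modern browsers, IntersectionObserver (or the native loading="lazy" attribute) can replace the scroll listener and is cheaper to run.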
(3). Load Thumbnails First
For example, suppose you have a 1920 * 1080 image that you show to users as a thumbnail, displaying the full image only when they hover over it. If users never actually hover over the thumbnail, the time spent downloading the full-size image is wasted.
Therefore, we can optimize this with two images. Initially, only load the thumbnail, and when users hover over the image, then load the large image. Another approach is to lazy load the large image, manually changing the src of the large image to download it after all elements have loaded.
(4). Reduce Image Quality
For example, with JPG format images, there’s usually no noticeable difference between 100% quality and 90% quality, especially when used as background images. When slicing background images in Photoshop, I often export them as JPG compressed to 60% quality, and basically can’t see any difference.
There are two compression methods: one is through the webpack plugin image-webpack-loader, and the other is through online compression websites.
Here’s how to use the webpack plugin image-webpack-loader:
npm i -D image-webpack-loader
webpack configuration
{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use: [
    {
      loader: 'url-loader',
      options: {
        limit: 10000, // Images smaller than 10000 bytes are inlined as base64
        name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
    // Compress images
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true
      }
    }
  ]
}
(5). Use CSS3 Effects Instead of Images When Possible
Many images can be drawn with CSS effects (gradients, shadows, etc.), and in these cases CSS3 effects are better, because the code is usually only a fraction, often a tiny fraction, of the image’s size.
(6). Use webp Format Images
WebP’s advantage is reflected in its better image data compression algorithm, which brings smaller image volume while maintaining image quality that’s indistinguishable to the naked eye. It also has lossless and lossy compression modes, Alpha transparency, and animation features. Its conversion effects on JPEG and PNG are quite excellent, stable, and uniform.
10. Load Code on Demand Through Webpack, Extract Third-Party Libraries, Reduce Redundant Code When Converting ES6 to ES5
Lazy loading, or on-demand loading, is a great way to optimize a website or application. It splits your code at logical breakpoints, and each block is loaded only when the user reaches the part of the application that needs it. This speeds up the initial load and lightens the overall payload, because some blocks may never be loaded at all.
Generate File Names Based on File Content, and Use Dynamic import() of Components for On-Demand Loading
This requirement can be achieved by configuring the filename property of output. One of the value options in the filename property is [contenthash], which creates a unique hash based on file content. When the file content changes, [contenthash] also changes.
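A sketch of what such a configuration might look like (webpack 4+; the file names and the imported path are illustrative):

```javascript
// [contenthash] ties each bundle's name to its content, so the URL
// changes only when the content does.
module.exports = {
  output: {
    filename: '[name].[contenthash:8].js',
    chunkFilename: '[name].[contenthash:8].js'
  }
};

// Elsewhere in application code, a dynamic import() marks a split
// point, so the chunk is only fetched when it's actually needed
// ('./Detail.vue' is an illustrative path):
// const Detail = () => import('./Detail.vue');
```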
Since imported third-party libraries are generally stable and don’t change frequently, extracting them separately as long-term caches is a better choice. This requires using the cacheGroups option of webpack4’s splitChunk plugin.
optimization: {
  runtimeChunk: {
    name: 'manifest' // Split webpack's runtime code into a separate chunk.
  },
  splitChunks: {
    cacheGroups: {
      vendor: {
        name: 'chunk-vendors',
        test: /[\\/]node_modules[\\/]/,
        priority: -10,
        chunks: 'initial'
      },
      common: {
        name: 'chunk-common',
        minChunks: 2,
        priority: -20,
        chunks: 'initial',
        reuseExistingChunk: true
      }
    }
  }
}
test: Controls which modules are matched by this cache group. If omitted, all modules are selected. It can be a RegExp, a String, or a Function;
priority: Indicates extraction weight, with higher numbers indicating higher priority. Since a module might meet the conditions of multiple cacheGroups, extraction is determined by the highest weight;
reuseExistingChunk: Indicates whether to use existing chunks. If true, it means that if the current chunk contains modules that have already been extracted, new ones won’t be generated.
minChunks (default is 1): The minimum number of chunks that must share a module before it is split out (note: to ensure code block reusability, the default strategy doesn’t require multiple references before splitting)
chunks (default is async): Which chunks are considered for splitting: initial, async, or all
name (name of the packaged chunks): String or function (functions can customize names based on conditions)
Reduce Redundant Code When Converting ES6 to ES5
To achieve the same functionality as the original code after Babel conversion, some helper functions are needed, for example:
class Person {}
will be converted to:
"use strict";

function _classCallCheck(instance, Constructor) {
  if (!(instance instanceof Constructor)) {
    throw new TypeError("Cannot call a class as a function");
  }
}

var Person = function Person() {
  _classCallCheck(this, Person);
};
Here, _classCallCheck is a helper function. If classes are declared in many files, then many such helper functions will be generated.
The @babel/runtime package declares all the helper functions needed, and @babel/plugin-transform-runtime makes every file that needs a helper import it from @babel/runtime. After the plugin is applied, the helper classCallCheck is no longer inlined into each compiled file; instead, each file references helpers/classCallCheck from @babel/runtime.
Installation
npm i -D @babel/plugin-transform-runtime @babel/runtime
Usage: in the .babelrc file

{
  "plugins": [
    "@babel/plugin-transform-runtime"
  ]
}
11. Reduce Reflows and Repaints
Browser Rendering Process
Parse HTML to generate DOM tree.
Parse CSS to generate CSSOM rules tree.
Combine DOM tree and CSSOM rules tree to generate rendering tree.
Traverse the rendering tree to begin layout, calculating the position and size information of each node.
Paint each node of the rendering tree to the screen.
Reflow
When the position or size of DOM elements is changed, the browser needs to regenerate the rendering tree, a process called reflow.
Repaint
After regenerating the rendering tree, each node of the rendering tree needs to be painted to the screen, a process called repaint. Not all actions will cause reflow, for example, changing font color will only cause repaint. Remember, reflow will cause repaint, but repaint will not cause reflow.
Both reflow and repaint operations are very expensive because the JavaScript engine thread and the GUI rendering thread are mutually exclusive, and only one can work at a time.
What operations will cause reflow?
Adding or removing visible DOM elements
Element position changes
Element size changes
Content changes
Browser window size changes
How to reduce reflows and repaints?
When modifying styles with JavaScript, it’s best not to write styles directly, but to replace classes to change styles.
If you need to perform a series of operations on a DOM element, you can take the DOM element out of the document flow, make the modifications, and then bring it back into the document. It’s recommended to use hidden elements (display: none) or document fragments (DocumentFragment), both of which implement this approach well.
12. Use Event Delegation
Event delegation takes advantage of event bubbling, allowing you to specify a single event handler to manage all events of a particular type. Events that bubble, which includes most mouse and keyboard events, are suitable for the event delegation technique. Using event delegation can save memory.
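A minimal sketch of the technique (the helper name is illustrative): one listener on a parent element handles clicks for every matching child, including children added later, instead of attaching one listener per child:

```javascript
// One listener on the parent handles clicks for every element of the
// delegated tag, by walking up from the actual click target.
function delegate(parent, tagName, handler) {
  parent.addEventListener('click', function (event) {
    let node = event.target;
    while (node && node !== parent) {
      if (node.tagName === tagName) {
        handler(node); // found a delegated element under the click
        return;
      }
      node = node.parentNode;
    }
  });
}

// Usage (in a browser): delegate(document.querySelector('ul'), 'LI', li => ...)
```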
13. Pay Attention to the Locality Principle
A well-written computer program often has good locality; it tends to reference data items near recently referenced data items, or the recently referenced data items themselves. This tendency is known as the principle of locality. Programs with good locality run faster than those with poor locality.
Locality usually takes two different forms:
Temporal locality: In a program with good temporal locality, memory locations that have been referenced once are likely to be referenced multiple times in the near future.
Spatial locality: In a program with good spatial locality, if a memory location has been referenced once, the program is likely to reference a nearby memory location in the near future.
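The two spatial locality examples discussed below appeared as code in the original; here is a minimal reconstruction, summing a two-dimensional array by rows versus by columns. (Nested JavaScript arrays aren’t guaranteed to be one contiguous block of memory the way C arrays are, but sequential access still benefits from engine-level element storage and cache behavior.)

```javascript
// Good spatial locality: scan row by row (a stride-1 reference
// pattern, matching how each row is laid out).
function sumByRows(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr[i].length; j++) {
      sum += arr[i][j];
    }
  }
  return sum;
}

// Poor spatial locality: scan column by column (each access jumps
// to the same position in the next row).
function sumByCols(arr) {
  let sum = 0;
  for (let j = 0; j < arr[0].length; j++) {
    for (let i = 0; i < arr.length; i++) {
      sum += arr[i][j];
    }
  }
  return sum;
}
```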
Looking at the two spatial locality examples above, the method of accessing each element of the array sequentially starting from each row, as shown in the examples, is called a reference pattern with a stride of 1. If in an array, every k elements are accessed, it’s called a reference pattern with a stride of k. Generally, as the stride increases, spatial locality decreases.
What’s the difference between these two examples? The difference is that the first example scans the array by row, scanning one row completely before moving on to the next row; the second example scans the array by column, scanning one element in a row and immediately going to scan the same column element in the next row.
Arrays are stored in memory in row order, so scanning the array row by row yields a stride-1 reference pattern with good spatial locality, while scanning by column yields a stride equal to the row length, with extremely poor spatial locality.
Performance Testing
Running environment:
CPU: i5-7400
Browser: Chrome 70.0.3538.110
Testing spatial locality on a two-dimensional array with a length of 9000 (child array length also 9000) 10 times, taking the average time (milliseconds), the results are as follows:
The examples used are the two spatial locality examples mentioned above.
Stride 1 | Stride 9000
---------+------------
124      | 2316
From the test results above, the array with a stride of 1 executes an order of magnitude faster than the array with a stride of 9000.
Conclusion:
Programs that repeatedly reference the same variables have good temporal locality
For programs with a reference pattern with a stride of k, the smaller the stride, the better the spatial locality; while programs that jump around in memory with large strides will have very poor spatial locality
14. if-else vs switch
As the number of judgment conditions increases, it becomes more preferable to use switch instead of if-else.
if (color == 'blue') {
} else if (color == 'yellow') {
} else if (color == 'white') {
} else if (color == 'black') {
} else if (color == 'green') {
} else if (color == 'orange') {
} else if (color == 'pink') {
}

switch (color) {
  case 'blue':
    break
  case 'yellow':
    break
  case 'white':
    break
  case 'black':
    break
  case 'green':
    break
  case 'orange':
    break
  case 'pink':
    break
}
In situations like the one above, from a readability perspective, using switch is better (JavaScript’s switch statement is not based on hash implementation but on loop judgment, so from a performance perspective, if-else and switch are the same).
15. Lookup Tables
When there are many conditional statements, using switch and if-else is not the best choice. In such cases, you might want to try lookup tables. Lookup tables can be constructed using arrays and objects.
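A minimal sketch of a lookup table (the data is illustrative): the chain of comparisons is replaced by a single object property access:

```javascript
// A lookup table replaces a long chain of conditionals with one
// object property access.
const colorCodes = {
  blue: '#0000ff',
  yellow: '#ffff00',
  white: '#ffffff',
  black: '#000000',
  green: '#008000'
};

function getColorCode(color) {
  // One lookup instead of N comparisons; fall back to a default.
  return colorCodes[color] || '#cccccc';
}
```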
16. Avoid Page Stuttering
Currently, most devices have a screen refresh rate of 60 times/second. Therefore, if there’s an animation or gradient effect on the page, or if the user is scrolling the page, the browser needs to render animations or pages at a rate that matches the device’s screen refresh rate. The budget time for each frame is just over 16 milliseconds (1 second / 60 = 16.66 milliseconds). But in reality, the browser has housekeeping work to do, so all your work needs to be completed within about 10 milliseconds. If you can’t meet this budget, the frame rate will drop and content will jitter on the screen. This phenomenon is commonly known as stuttering, and it has a negative impact on user experience.
Suppose you use JavaScript to modify the DOM, trigger style changes, go through reflow and repaint, and finally paint to the screen. If any of these takes too long, it will cause the rendering time of this frame to be too long, and the average frame rate will drop. Suppose this frame took 50 ms, then the frame rate would be 1s / 50ms = 20fps, and the page would appear to stutter.
For some long-running JavaScript, we can use timers to split and delay execution.
for (let i = 0, len = arry.length; i < len; i++) {
  process(arry[i])
}
If the loop above takes too long, whether because process() is expensive, the array has too many elements, or both, you might want to try splitting the work.
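A sketch of such splitting with setTimeout (the helper name and chunk size are illustrative): process a limited number of items per tick, then yield back to the browser before continuing:

```javascript
// Process the array in small chunks so the main thread is freed
// between chunks and the page stays responsive.
function chunk(arry, process, count = 100) {
  const items = arry.slice(); // don't mutate the caller's array
  function run() {
    // Process up to `count` items in this tick.
    for (let i = 0; i < count && items.length; i++) {
      process(items.shift());
    }
    // If anything is left, yield and continue in a later tick.
    if (items.length) {
      setTimeout(run, 0);
    }
  }
  run();
}
```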
17. Use requestAnimationFrame to Implement Visual Changes
From point 16, we know that most devices have a screen refresh rate of 60 times/second, which means the average time per frame is 16.66 milliseconds. When using JavaScript to implement animation effects, the best case is that the code starts executing at the beginning of each frame. The only way to ensure JavaScript runs at the beginning of a frame is to use requestAnimationFrame.
/**
 * If run as a requestAnimationFrame callback, this
 * will be run at the start of the frame.
 */
function updateScreen(time) {
  // Make visual updates here.
}

requestAnimationFrame(updateScreen);
If you use setTimeout or setInterval to implement animations, the callback function will run at some point in the frame, possibly right at the end, which can often cause us to miss frames, leading to stuttering.
18. Use Web Workers
Web Workers use other worker threads to operate independently of the main thread. They can perform tasks without interfering with the user interface. A worker can send messages to the JavaScript code that created it by sending messages to the event handler specified by that code (and vice versa).
Web Workers are suitable for processing pure data or long-running scripts unrelated to the browser UI.
Creating a new worker is simple, just specify a script URI to execute the worker thread (main.js):
var myWorker = new Worker('worker.js');

// You can send messages to the worker through the postMessage() method and onmessage event
first.onchange = function() {
  myWorker.postMessage([first.value, second.value]);
  console.log('Message posted to worker');
}

second.onchange = function() {
  myWorker.postMessage([first.value, second.value]);
  console.log('Message posted to worker');
}
In the worker, after receiving the message, we can write an event handler function code as a response (worker.js):
onmessage = function(e) {
  console.log('Message received from main script');
  var workerResult = 'Result: ' + (e.data[0] * e.data[1]);
  console.log('Posting message back to main script');
  postMessage(workerResult);
}
The onmessage handler function executes immediately after receiving the message, and the message itself is used as the data property of the event. Here we simply multiply the two numbers and use the postMessage() method again to send the result back to the main thread.
Back in the main thread, we use onmessage again to respond to the message sent back from the worker:
myWorker.onmessage = function(e) {
  result.textContent = e.data;
  console.log('Message received from worker');
}
Here we get the data from the message event and set it as the textContent of result, so the user can directly see the result of the calculation.
Note that inside the worker, you cannot directly manipulate DOM nodes, nor can you use the default methods and properties of the window object. However, you can use many things under the window object, including data storage mechanisms such as WebSockets, IndexedDB, and Firefox OS-specific Data Store API.
19. Use Bitwise Operations
Numbers in JavaScript are stored in 64-bit format using the IEEE-754 standard. But in bitwise operations, numbers are converted to 32-bit signed format. Even with the conversion, bitwise operations are much faster than other mathematical and boolean operations.
Modulo
Since the lowest bit of even numbers is 0 and odd numbers is 1, modulo operations can be replaced with bitwise operations.
if (value % 2) {
  // Odd number
} else {
  // Even number
}

// Bitwise operation
if (value & 1) {
  // Odd number
} else {
  // Even number
}
By defining each option as a distinct power of two (a bit flag), you can use the bitwise AND operation to determine whether a, b, or c is present in the options.
// Is option b in the options?
if (b & options) {
  ...
}
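The options referred to above would be defined as distinct powers of two, so that each occupies its own bit (the names here are illustrative):

```javascript
// Each option gets its own bit.
const OPTION_A = 1; // 0b001
const OPTION_B = 2; // 0b010
const OPTION_C = 4; // 0b100

// Combine options with bitwise OR.
const options = OPTION_A | OPTION_C; // 0b101

// Test membership with bitwise AND: a non-zero result means present.
const hasB = (OPTION_B & options) !== 0; // B was not combined in
const hasC = (OPTION_C & options) !== 0; // C was combined in
```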
20. Don’t Override Native Methods
No matter how optimized your JavaScript code is, it can’t match native methods. This is because native methods are written in low-level languages (C/C++) and compiled into machine code, becoming part of the browser. When native methods are available, try to use them, especially for mathematical operations and DOM manipulations.
21. Reduce the Complexity of CSS Selectors
(1). When browsers read selectors, they follow the principle of reading from right to left.
Let’s look at an example
#block .text p {
  color: red;
}
Find all p elements.
Check whether the elements found in step 1 have ancestor elements with the class name “text”.
Check whether the elements found in step 2 have an ancestor element with the ID “block”.
(2). CSS selector priority
Inline > ID selector > Class selector > Tag selector
Based on the above two pieces of information, we can draw conclusions.
The shorter the selector, the better.
Try to use high-priority selectors, such as ID and class selectors.
Avoid using the universal selector *.
Finally, I should say that according to the materials I’ve found, there’s no need to optimize CSS selectors because the performance difference between the slowest and fastest selectors is very small.
22. Use Flexbox Instead of Earlier Layout Models
In early CSS layout methods, we could position elements absolutely, relatively, or using floats. Now, we have a new layout method flexbox, which has an advantage over earlier layout methods: better performance.
The screenshot below shows the layout cost of using floats on 1300 boxes:
Then we recreate this example using flexbox:
Now, for the same number of elements and the same visual appearance, the layout time is much less (3.5 milliseconds versus 14 milliseconds in this example).
However, flexbox compatibility is still an issue, not all browsers support it, so use it with caution.
Browser compatibility:
Chrome 29+
Firefox 28+
Internet Explorer 11
Opera 17+
Safari 6.1+ (prefixed with -webkit-)
Android 4.4+
iOS 7.1+ (prefixed with -webkit-)
23. Use Transform and Opacity Properties to Implement Animations
In CSS, transforms and opacity property changes don’t trigger reflow and repaint, they are properties that can be processed by the compositor alone.
24. Use Rules Reasonably, Avoid Over-Optimization
Performance optimization is mainly divided into two categories:
Load-time optimization
Runtime optimization
Of the 23 suggestions above, the first 10 belong to load-time optimization, and the last 13 belong to runtime optimization. Usually, there’s no need to apply all 23 performance optimization rules. It’s best to make targeted adjustments based on the website’s user group, saving effort and time.
Before solving a problem, you need to identify the problem first, otherwise you won’t know where to start. So before doing performance optimization, it’s best to investigate the website’s loading and running performance.
Check Loading Performance
A website’s loading performance is mainly judged by its white screen time and first screen time.
White screen time: The time from entering the URL to when the page starts displaying content.
First screen time: The time from entering the URL to when the page is completely rendered.
You can get the white screen time by placing the following script just before the closing </head> tag.
new Date() - performance.timing.navigationStart

// You can also use domLoading and navigationStart
performance.timing.domLoading - performance.timing.navigationStart
You can get the first screen time by executing new Date() - performance.timing.navigationStart in the window.onload event.
Check Runtime Performance
With Chrome’s developer tools, we can check the website’s performance during runtime.
Open the website, press F12, and select the Performance panel. Click the gray dot in the upper left corner; it turns red to indicate recording has started. Now simulate a user using the website, and when you’re done, click stop. You’ll then see a report of the website’s runtime performance. Red blocks indicate dropped frames; green means the FPS is good. For detailed usage of the Performance panel, please consult a search engine, as it’s beyond the scope of this article.
By checking the loading and runtime performance, I believe you already have a general understanding of the website’s performance. So what you need to do now is to use the 23 suggestions above to optimize your website. Go for it!