
The case for WooCommerce database optimisation

Many websites rely on a database to manage and store their content. The database is an entirely separate application from the software that displays the website. Depending on how it is used and how its data is structured (its schema), it may well perform quite adequately for the task given to it. It is important, therefore, to give database optimisation proper consideration.

For database-driven sites with only a few dozen pages, the typical CMS (content management system), e.g. WordPress, probably doesn’t tax the database too much. There is often a hefty cache system that prevents the queries required to deliver data to the web pages from being re-run.

However, a CMS may also perform services that require frequent new queries, or at least repeated queries over new data: an e-commerce site, for example, processes product information, related products, promotions, search queries and online orders. These cannot benefit from a cache.

Here we look at the options for improving the responsiveness of a site that has begun to slow down, focusing on WooCommerce, the most common e-commerce platform for WordPress.

When WooCommerce performance decreases

As a WooCommerce catalogue grows, with supporting data for colour, size, use cases and so on, the pressure on the database application increases, resulting in higher CPU and memory usage. At this point a business might find their e-commerce site takes noticeably longer to serve data to the customer, leading to a decrease in customer satisfaction. This is often after several years’ growth, or a sizeable catalogue increase.

Page and post content will also squeeze performance, as many of the same core tables are shared between WooCommerce and regular WordPress content. This is a good reason for keeping shop and ‘blog’ functionality separate, either as two separate WordPress sites or as a WordPress multisite.

What is “sizeable”? We’ve seen WooCommerce sites show signs of strain when the WordPress wp_postmeta table exceeds four or five million rows (its database export file might be around 1 GB). The postmeta table is intrinsic to WordPress, and historically, to WooCommerce also – it stores additional information for posts, or products in WooCommerce’s case. The number of products is not necessarily the best indicator of performance issues. Sometimes a product catalogue has a lot of related data – what WooCommerce calls ‘product attributes’ (that is, metadata about the product: dimensions, weight, colours, usage, etc.).

Another key factor is the number and type of plugins installed. Plugins that add functionality around the order and checkout process create more data and can also introduce more complex queries, e.g. pulling in multiple address details.

An unoptimised database can produce acute problems during the order creation process, leaving customers waiting tens of seconds to check out.

Frustrated customers, assuming their order has stalled or stopped, might restart the process and create a duplicate order, only to be charged twice for their efforts. Not surprisingly, they may go to a rival store in the future.

This scenario is often due to database degradation. Sometimes the problem is third party integrations. What can be done?

Ideally, the CMS or e-commerce database schema would have been prepared in advance for the expansion of the data and user base. Legacy schemas often weren’t. In any case, demands on the database change frequently and in unanticipated ways, for example:

  • Third-party services are hooked into the site (using its API).
  • A new plugin creates queries aggregating user purchases.
  • An ad hoc internal reporting project is attached to the database.
  • An app is developed that connects the user to their customer account, requiring push notifications.

Could the database schema have anticipated all this? Not really. In any case, WordPress and WooCommerce do not apply many schema optimisations ‘out of the box’.

Possible solutions for database optimisation

Schema optimisation

The database application (MySQL or a derivative) will likely have a logging system and probably a slow query log, where many of the worst performing queries will be registered. The serious detective work begins here!
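As an illustration, on a self-managed MySQL server the slow query log can usually be switched on with a few statements. The threshold below is only an example value, and on managed hosting these variables may be read-only:

    -- Enable the slow query log and record anything slower than one second
    SET GLOBAL slow_query_log = 'ON';
    SET GLOBAL long_query_time = 1;
    -- Optionally also log queries that scan tables without using an index
    SET GLOBAL log_queries_not_using_indexes = 'ON';
    -- Find out where the log file is being written
    SHOW VARIABLES LIKE 'slow_query_log_file';

The bundled mysqldumpslow utility can then summarise the log, grouping similar queries together so the worst repeat offenders stand out.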

Often the queries in the slow log address the same, or similar, relationships between tables in the database: e.g. between a product and its properties, or an order and the user’s details. Noting these recurring patterns, the database detective can employ a number of effective and thorough solutions:

  1. Archive or delete unused data – the less the database has to search, the quicker it will perform. 
  2. Add an index or composite index to one or more of the tables, targeted on particular fields. This is like using the index in a book – the reader can jump straight to the pages where a term appears, making ‘retrieval’ vastly quicker. (A sketch follows this list.)
  3. If archiving or deletion is not an option, most databases will allow partitioning of tables. This means breaking up a table into several smaller sub-tables, which are quicker to search. The database application is smart enough to keep a record of what is kept in which table. So, instead of searching through millions of rows of data, it looks to (say) sub-table #3 and searches only a few thousand rows.
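To make the indexing idea concrete, here is a minimal sketch against the stock WordPress wp_postmeta table. The column choices are illustrative only – in practice they should come from what the slow query log shows, and WordPress already ships some indexes on this table:

    -- A single-column prefix index (meta_value is a LONGTEXT column, so a prefix length is required)
    ALTER TABLE wp_postmeta ADD INDEX idx_meta_value (meta_value(32));
    -- A composite index covering a common lookup pattern: metadata of a given key for a given post
    ALTER TABLE wp_postmeta ADD INDEX idx_post_key (post_id, meta_key(191));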

Some hosting companies may not be able to supply access to the slow query logs (especially on shared hosting), or prefer not to. In this situation a rule of thumb needs to be applied. The default ‘usual suspects’ tables should be tested to see if typical queries are improved by indexing techniques.
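Without the log, a reasonable first step is simply to list what indexes those ‘usual suspects’ already carry, for example:

    -- Inspect existing indexes on the core WordPress/WooCommerce tables
    SHOW INDEX FROM wp_postmeta;
    SHOW INDEX FROM wp_posts;
    SHOW INDEX FROM wp_options;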

Hardware

The techniques above address deficiencies in the database schema itself. A more powerful or dedicated database server, or replicated databases, might also deliver performance improvements. However, without addressing the core problems (the slow queries) with a schema tuning, this could prove an expensive and unnecessary proposition.

Schema optimisation – an example

Fellowship recently noted, during database optimisation work, some slow queries in a WordPress e-commerce site where one table, the wp_postmeta table, had grown to around 6.5 million rows. This table stored much of the ancillary data for products (and much else). It was, therefore, used heavily whenever a product was referenced: on the product page, in a user’s order, or in the checkout process. As a result, some queries involving this table were taking seconds to resolve during busy site activity. With multiple products in an order, the checkout process compounded these inefficiencies and, in the worst cases, took 30 seconds or more. The customers were unimpressed.

First we identified two or three columns that were frequently used in WooCommerce queries of the product table and the meta table, and that were also often identified as part of some ‘slow queries’. By adding a single index and a composite index to these table columns we reduced the time involved from several seconds to a hundredth or even a thousandth of a second. This brought the checkout experience down to just a few seconds and improved the overall process by over 60%.
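The real queries and column choices are specific to each site, but the before-and-after check follows a standard pattern: run EXPLAIN on a representative slow query and confirm that the new index is being used. A hypothetical example:

    -- Hypothetical representative query: find products that are out of stock
    EXPLAIN SELECT post_id
    FROM wp_postmeta
    WHERE meta_key = '_stock_status' AND meta_value = 'outofstock';
    -- The 'key' and 'rows' columns of the output show which index is chosen and
    -- roughly how many rows must be examined; after adding a suitable composite
    -- index the 'rows' figure should drop dramatically.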

Because the indexes affected all queries using those table columns, and those tables were core to WordPress and WooCommerce, the user experience browsing the e-commerce store improved considerably too. Pages loaded faster, even those unrelated to products. CPU usage was reduced, which meant the site could cope with more customers.

Secondly, we are considering partitioning the meta table with 6.5 million rows into a dozen or so sub-tables. This should boost performance again, perhaps as effectively as the indexing.
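As a sketch of what that could look like (illustrative only – MySQL requires the partitioning column to appear in every unique key, so wp_postmeta’s primary key would have to be widened first, and the change would be rehearsed on a development copy):

    -- Widen the primary key so it includes the proposed partitioning column
    ALTER TABLE wp_postmeta DROP PRIMARY KEY, ADD PRIMARY KEY (meta_id, post_id);
    -- Split the table into a dozen hash partitions keyed on post_id
    ALTER TABLE wp_postmeta PARTITION BY HASH (post_id) PARTITIONS 12;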

How long does this analysis take?

The most time consuming part of the analysis is identifying which queries are causing the slow responses and what indexing technique might improve them. 

Given a WordPress site with WooCommerce installed, aren’t these likely to be known already? Yes, there are ‘out of the box’ solutions that should improve most WordPress sites, but it would be unwise to expect the same results in every circumstance. WordPress and WooCommerce have so many plugins, and these will introduce quite different queries, which may or may not scale well. Some queries are very difficult to improve, but remain necessary to a particular plugin or process.

Having said that, the analysis might not take long at all. If the slow query criteria are simple, a solution may be found in less than an hour. Initial testing would be done on a local development instance of the site, or perhaps a dedicated development server.

Effecting an index (or partition), even on a large multi-million row table, usually only takes a few seconds. Even so, we would recommend altering the table only after a back-up process, then a short period where the database/e-commerce store is offline while the solution is applied. The changes are reversible too, should there be unexpected consequences.
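Reversal is equally short. For example, an index added earlier, or a partitioning scheme, can be removed with a single statement (the index name here is the hypothetical one from the earlier sketch):

    -- Remove an index that did not pay its way
    ALTER TABLE wp_postmeta DROP INDEX idx_post_key;
    -- Undo table partitioning while keeping the data intact
    ALTER TABLE wp_postmeta REMOVE PARTITIONING;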

It’s not simply a question of ‘fire & forget’ with this database optimisation. New plugins and functionality will add new queries and these may appear in the slow query log. Therefore regular monitoring is important. 

Are there any downsides to this database optimisation?

There is of course no free lunch. Adding indexes means using more server disk space – a composite index might even double the disk space used by a table – and inserts, updates and deletions may become fractionally slower, as the index needs rewriting too. The write penalty is usually so slight as to be unnoticeable, and is often more than offset by the read performance improvement, though it may be more apparent with batch update processes.
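The cost is easy to measure: MySQL’s information_schema reports data and index sizes per table, so the overhead of a new index can be checked directly (the schema name below is a placeholder):

    -- Compare data size against index size for each table, in megabytes
    SELECT table_name,
           ROUND(data_length  / 1024 / 1024, 1) AS data_mb,
           ROUND(index_length / 1024 / 1024, 1) AS index_mb
    FROM information_schema.TABLES
    WHERE table_schema = 'shop_db'
    ORDER BY index_length DESC;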

Partitioning might only be beneficial with very large amounts of data in a table. It has several different methods of application, and some may slow write processes more than others. With the wp_postmeta table discussed above, there are really only one or two methods that are suitable (due to the nature of the WordPress schema) and these have minimal detrimental effects on write performance.

Is it necessary to repeat this database optimisation process?

Possibly! It really comes down to how the site and its data evolve over time. There may come a point where only a hardware upgrade (a new server, a separate database machine) will increase performance.

What does an audit involve?

Step one is access to the website database. A quick examination of the schema should highlight the low-hanging fruit: tables using an older engine like MyISAM should be updated to InnoDB, and key tables not using any indexes should be noted.
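Both checks can be run in a couple of statements. The schema name is a placeholder, wp_comments is used purely as an example table, and any conversion should follow a backup:

    -- Find tables still using the older MyISAM engine
    SELECT table_name
    FROM information_schema.TABLES
    WHERE table_schema = 'shop_db' AND engine = 'MyISAM';

    -- Convert an affected table to InnoDB (repeat for each table found)
    ALTER TABLE wp_comments ENGINE = InnoDB;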

Step two, if possible, would be access to MySQL’s slow query log. Depending on the hosting situation, this may not be possible. Having access to this makes it very clear which queries and which tables are problematic.

Step three would be to apply any database optimisation techniques to a local or development copy of the database, and test that the benefits are effective.

Finally, the destination site would be put into ‘maintenance’ mode (there are plugins that can present a temporary holding page to prevent normal access), a backup of the current database made, the SQL alterations executed and the site tested for hiccups; then, all being well, the holding page is removed and public access restored. Downtime could be as little as a quarter of an hour, and the performance benefits can be significant.

Want us to optimise your WooCommerce website, quickly and cost effectively? Call Darren on 01284 830888, or email [email protected]
