Nginx FastCGI Reverse Proxy Cache for PHP (Symfony)

Stefan Pöltl · Jun 23

If you have ever faced the situation that your PHP app cannot keep up with the incoming traffic, Nginx can help you.

Actually, there is an easy caching fix for PHP backends that reduces the load and serves responses faster for requests that arrive multiple times with the same parameters.

Current application state

In my case I had to fix a legacy API route that generated a JSON-based response and took around 400 ms.

The app already uses Memcached to cache serialized objects, but the response time of an API endpoint should still be much faster.

The code itself was rather legacy, and a quick win to speed up the application was requested.

Solution

The application lives behind a load balancer and runs on multiple nodes with the following components:

- PHP-FPM as PHP process manager
- NGINX as web server

The simplest solution that came to my mind was to cache the API responses, which are triggered by a POST request, even though the HTTP RFC says this is not recommended:

“Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.”

Implementation

The best thing is that you don’t need to touch the application at all: you just change your nginx configuration file and that’s it.

I will show a configuration file that you can easily mount into an NGINX Docker container and run in front of a PHP-FPM container.

docker-compose.yml:

version: '2'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./:/opt/code
      - ./vhost:/etc/nginx/conf.d/default.conf
    depends_on:
      - fpm
  fpm:
    image: php:7.3-fpm
    volumes:
      - ./:/opt/code

Nginx vhost config:

fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=fpmcache:100m max_size=3g inactive=60m use_temp_path=off;
fastcgi_cache_key "$request_uri|$request_body";
fastcgi_cache_methods POST;

server {
    server_tokens off;
    listen 80 default_server;
    server_name _;
    access_log /dev/stdout;
    error_log /dev/stderr;
    root /opt/code;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    set $disable_cache 1;
    if ($request_uri ~ "^/api") {
        set $disable_cache 0;
    }

    location ~ [^/]\.php(/|$) {
        fastcgi_cache_bypass $disable_cache;
        fastcgi_no_cache $disable_cache;
        fastcgi_cache fpmcache;
        fastcgi_cache_valid 200 20m;
        fastcgi_cache_min_uses 1;
        fastcgi_cache_lock on;
        add_header X-Cache $upstream_cache_status;
        fastcgi_pass fpm:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_read_timeout 35;
        include fastcgi_params;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}

index.php:

<?php
echo date('Y-m-d H:i:s');

If you spin up the containers with docker-compose up and run the following curl statement multiple times, you should see the same response output for 20 minutes, as configured in the NGINX vhost config.

curl -X POST localhost:8080/api

(Screenshot: cached response output for the curl request.)

As you can see, for the /api POST request we always get the same time served from the cache after the result of the first PHP script call has been saved to disk in the NGINX container.

For all other calls you get the current time forwarded from the backend without caching.

(Screenshot: non-cached response output.)

How did we configure NGINX?

fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

This line is necessary because frameworks like Symfony set the Cache-Control: no-cache header by default, which would otherwise make nginx skip the configured caching.
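You can verify this with a quick header check. The plain index.php demo does not send a Cache-Control header, but a real Symfony backend behind this vhost typically would; the header is still forwarded to the client, it just no longer switches the cache off:

# Show only the cache-related response headers; with a Symfony backend you
# would typically also see something like "Cache-Control: no-cache, private".
curl -s -D - -o /dev/null -X POST localhost:8080/api | grep -iE 'cache-control|x-cache'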

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=fpmcache:100m max_size=3g inactive=60m use_temp_path=off;

Here the arguments configure the path where the cached backend results are stored and set the directory structure/depth.

This means the cache folder structure looks like this: /var/cache/nginx/5/dc/4275fe6d0b92cebe0f8d1461c5fe2dc5 for the key hash 4275fe6d0b92cebe0f8d1461c5fe2dc5.

So the directory structure is built like: last character of the key hash / next two characters / key hash. The reason for this is that nginx keeps fewer cache files per folder instead of storing all files in one directory, which speeds up the lookup by key.
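A small sketch of that mapping (bash; the hash is the one from the example above, the variable names are just for illustration):

# nginx hashes the cache key with md5 and, for levels=1:2, builds the path from
# the last character, the two characters before it, and the full hash.
hash=4275fe6d0b92cebe0f8d1461c5fe2dc5
level1=${hash: -1}      # "5"
level2=${hash: -3:2}    # "dc"
echo "/var/cache/nginx/$level1/$level2/$hash"
# -> /var/cache/nginx/5/dc/4275fe6d0b92cebe0f8d1461c5fe2dc5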

The content of the file named after that key hash is:

KEY: /api|
X-Powered-By: PHP/7.3.6
Content-type: text/html; charset=UTF-8

2019-06-13 20:46:28

In general, NGINX stores the keys in memory (RAM), 100 MB in our case, and keeps up to 3 GB of storage on disk for the cache files.
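If you want to look at this yourself, you can peek into the running nginx container (service name web from the docker-compose file above; the exact hash depends on your cache key):

# List the cache tree and dump one entry from inside the nginx container.
docker-compose exec web ls -R /var/cache/nginx
docker-compose exec web cat /var/cache/nginx/5/dc/4275fe6d0b92cebe0f8d1461c5fe2dc5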

If a key is not requested for 60 minutes (inactive=60m), the corresponding cache file gets removed.

Furthermore, with use_temp_path=off we skip the initial write of the generated cache file to a temporary folder, for performance reasons.

The next step is to decide which values are used to compute the cache key and which HTTP methods are allowed to trigger caching:

fastcgi_cache_key "$request_uri|$request_body";
fastcgi_cache_methods POST;

Because the request body is part of the key, requests with different POST payloads get their own cache entries, as the sketch below shows.
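A minimal check of that behaviour against the demo setup above (the payloads are arbitrary examples):

# Same URI and same body -> same cache key -> the second call is served from cache.
curl -s -X POST -d 'a=1' localhost:8080/api
curl -s -X POST -d 'a=1' localhost:8080/api
# A different body produces a different cache key and therefore a fresh entry.
curl -s -X POST -d 'a=2' localhost:8080/api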

In the next step, we set a variable that enables the caching only for the /api route and use it in the PHP location block:

set $disable_cache 1;
if ($request_uri ~ "^/api") {
    set $disable_cache 0;
}

location ~ [^/]\.php(/|$) {
    fastcgi_cache_bypass $disable_cache;
    fastcgi_no_cache $disable_cache;
}

Finally, we configure which of the defined cache zones is used and which HTTP return codes get cached for how long (fastcgi_cache_valid).

The fastcgi_cache_min_uses 1 setting makes nginx cache the backend result after the very first request has been processed.

The fastcgi_cache_lock option lets only the first request for a given key hit the backend server; all the others wait until the result is in the cache and then fetch it from there.

This prevents duplicate load on the backend.
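A rough way to observe this with the demo setup (assuming xargs is available; the first request should report a MISS, the waiting ones a HIT):

# Fire five identical requests in parallel; with fastcgi_cache_lock only one
# should reach PHP-FPM, the others wait for the cache entry to be written.
seq 5 | xargs -P5 -I{} curl -s -o /dev/null -D - -X POST localhost:8080/api | grep -i x-cache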

To debug the current caching status we add the X-Cache header, which reports MISS, HIT or EXPIRED ($upstream_cache_status).
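A quick way to watch that header while testing (the first call should be a MISS, repeated calls a HIT until the entry expires):

# Print only the X-Cache header for two consecutive identical requests.
curl -s -D - -o /dev/null -X POST localhost:8080/api | grep -i x-cache
curl -s -D - -o /dev/null -X POST localhost:8080/api | grep -i x-cache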

fastcgi_cache fpmcache;
fastcgi_cache_valid 200 20m;
fastcgi_cache_min_uses 1;
fastcgi_cache_lock on;
add_header X-Cache $upstream_cache_status;

At the end, don’t forget to test your host’s configuration with nginx -t and serve your users/clients with a fast API.
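Inside the Docker setup from above, that check could look like this (service name web as defined in docker-compose.yml):

# Validate the configuration and, if it passes, reload nginx without downtime.
docker-compose exec web nginx -t && docker-compose exec web nginx -s reload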
