Difference between revisions of "User:TerryE/Traffic Server Configuration"

From Apache OpenOffice Wiki

Revision as of 01:34, 22 August 2011

Why Traffic Server?

Apache Traffic Server is a lightweight yet high-performance web proxy cache that improves network efficiency and performance[1]. Like Squid and Varnish, Traffic Server can be configured as a reverse proxy[2]. In this mode it acts as a full surrogate for the back-end wiki, with port 80 on the advertised hostname for the wiki resolving to Traffic Server. This offloads the processing of most web requests from the PHP- and database-intensive MediaWiki application.

Traffic Server can be configured to store high-frequency cached content in memory, and where content is flushed to disk, access still involves significantly less physical I/O than the MediaWiki application. This permits a significantly higher throughput for a given CPU and I/O resource constraint. MediaWiki has been designed to integrate closely with such web cache packages and will notify Traffic Server when a page should be purged from the cache in order to be regenerated. From MediaWiki's point of view, a correctly-configured Traffic Server installation is interchangeable with Squid or Varnish.

The architecture

An example setup of Traffic Server, Apache and MediaWiki on a single server is outlined below. A more complex caching strategy may use multiple web servers behind the same Traffic Server caches (all of which can be made to appear to be a single host) or use independent servers to deliver wiki or image content.

Outside world <---> Traffic Server accelerator <---> Apache webserver

To the outside world, Traffic Server appears to act as the web server. In reality it passes requests on to the Apache web server, but only when necessary. An Apache running on the same server listens only for requests from localhost, while Traffic Server listens only for requests on the server's external IP address. Both services run on port 80 without conflict, as each is bound to a different IP address.
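On the Apache side, this split can be achieved by binding the listener explicitly to the loopback address (a minimal sketch; on Debian/Ubuntu systems this directive typically lives in ports.conf):

```apache
# Bind Apache to loopback only; Traffic Server owns port 80
# on the external address.
Listen 127.0.0.1:80
```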

Configuring Traffic Server 2.x

/etc/sysconfig/Traffic Server

This is the first configuration file loaded by Traffic Server on startup. It specifies the amount of memory to be allocated to the Traffic Server cache, the location of the main (*.vcl) configurations and the specific IP addresses to which Traffic Server must respond.

The remainder of the configuration data, including the address of the backend server(s), is listed in the main *.vcl file - not here.

# Needs to go in here

A sample XXXX

The address(es) of the backend server(s) must be specified here. In a simple installation, one server (localhost) is sufficient. Larger sites may operate multiple wiki or image servers behind a single Traffic Server cache[3]:

# set default backend if no server cluster specified
backend default {
        .host = "localhost";
        .port = "80"; }
# create a round-robin director: "apaches" uses roberto and sophia as backend servers.
director apaches round-robin {
  { .backend = { .host = "wiki1"; .port = "80"; } }
  { .backend = { .host = "wiki2"; .port = "80"; } } }
# access control list for "purge": open to only localhost and other local nodes
acl purge {
        "localhost";
}

If more than one backend webserver is available, a list of servers to be used may be selected here on a per-domain basis. This allows multiple, relatively powerful servers to respond to wiki page text requests while requests for static images are handled by a local web server. A simple one-server installation would simply pass all unhandled requests to the default web server.
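The per-domain selection plus round-robin rotation can be sketched in Python (hostnames are illustrative, taken from the director example above; this is not how the cache implements it internally):

```python
import itertools

# Hypothetical pools mirroring the director config above: image requests
# stay on localhost, everything else rotates across wiki1/wiki2.
BACKEND_POOLS = {
    "images.example.org": ["localhost:80"],
}
DEFAULT_POOL = ["wiki1:80", "wiki2:80"]

# One round-robin cycle per pool, so successive requests alternate backends.
_cycles = {}

def pick_backend(host):
    """Return the next backend for a request with the given Host header."""
    pool = tuple(BACKEND_POOLS.get(host, DEFAULT_POOL))
    if pool not in _cycles:
        _cycles[pool] = itertools.cycle(pool)
    return next(_cycles[pool])
```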

Any requests other than a simple GET will be passed directly through to the web server, along with all requests from logged-in users.

Most common browsers support compression (gzip or deflate) of returned pages. While Traffic Server itself performs no compression, it is configured here to store separate copies of a page depending on whether the user's browser supports compression.[4] If a browser accepts both gzip and deflate, the gzip version of the page is served as it is smaller and therefore slightly quicker to deliver. The browser's reported capabilities are checked here and the gzip'ped version of pages is served wherever possible.

# vcl_recv is called whenever a request is received 
sub vcl_recv {
        # Serve objects up to 2 minutes past their expiry if the backend
        # is slow to respond.
        set req.grace = 120s;
        # Use our round-robin "apaches" cluster for the backend.
        if (req.http.host ~ "^images.example.org$")
           {set req.backend = default;}
        else
           {set req.backend = apaches;}
        # This uses the ACL action called "purge". Basically if a request to
        # PURGE the cache comes from anywhere other than localhost, ignore it.
        if (req.request == "PURGE") 
            {if (!client.ip ~ purge)
                {error 405 "Not allowed.";}
        # Pass any requests that Traffic Server does not understand straight to the backend.
        if (req.request != "GET" && req.request != "HEAD" &&
            req.request != "PUT" && req.request != "POST" &&
            req.request != "TRACE" && req.request != "OPTIONS" &&
            req.request != "DELETE") 
            {pipe;}     /* Non-RFC2616 or CONNECT which is weird. */
        # Pass anything other than GET and HEAD directly.
        if (req.request != "GET" && req.request != "HEAD")
           {pass;}      /* We only deal with GET and HEAD by default */
        # Pass requests from logged-in users directly.
        if (req.http.Authorization || req.http.Cookie)
           {pass;}      /* Not cacheable by default */
        # Pass any requests with the "If-None-Match" header directly.
        if (req.http.If-None-Match)
           {pass;}
        # Force lookup if the request is a no-cache request from the client.
        if (req.http.Cache-Control ~ "no-cache")
           {purge_url(req.url);}
        # normalize Accept-Encoding to reduce vary
        if (req.http.Accept-Encoding) {
          if (req.http.User-Agent ~ "MSIE 6") {
            unset req.http.Accept-Encoding;
          } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
          } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
          } else {
            unset req.http.Accept-Encoding;
          }
        }
        lookup;
}

Traffic Server must pass the user's IP address as part of the 'X-Forwarded-For' header, so that MediaWiki may be configured to display the user's address in Special:RecentChanges instead of Traffic Server's local IP address.

sub vcl_pipe {
        # Note that only the first request to the backend will have
        # X-Forwarded-For set.  If you use X-Forwarded-For and want to
        # have it set for all requests, make sure to have:
        # set req.http.connection = "close";
        # This is otherwise not necessary if you do not do any request rewriting.
        set req.http.connection = "close";
}

Traffic Server must be configured to allow a PURGE request from MediaWiki, instructing the cache to discard stored copies of pages which have been modified by user edits. These requests normally originate only from wiki servers within the local site.

If the page is not in the cache, a 200 (success) code is still returned as the objective is to remove the outdated page from the cache.

# Called if the cache has a copy of the page.
sub vcl_hit {
        if (req.request == "PURGE") 
            error 200 "Purged";}
        if (!obj.cacheable)
# Called if the cache does not have a copy of the page.
sub vcl_miss {
        if (req.request == "PURGE") 
           {error 200 "Not in cache";}

The web server may set default expiry times for various objects. As MediaWiki will indicate (via a PURGE request) when a page has been edited and therefore needs to be discarded from cache, the Apache-reported defaults for expiry time are best ignored or replaced with a significantly-longer expiry time.

Pages served to logged-in users (identified by MediaWiki setting browser cookies) or which require passwords to access are never cached.

In this example, the 'no-cache' flag is being ignored on pages served to anonymous-IP users. Such measures normally are only needed if a wiki is making extensive use of extensions which add this flag indiscriminately (such as a wiki packed with random <choose>/<option> Algorithm tags on the main page and various often-used templates).

# Called after a document has been successfully retrieved from the backend.
sub vcl_fetch {
        # set minimum timeouts to auto-discard stored objects
#       set beresp.prefetch = -30s;
        set beresp.grace = 120s;
        if (beresp.ttl < 48h) {
          set beresp.ttl = 48h;}
        if (!beresp.cacheable)
            {pass;}
        if (beresp.http.Set-Cookie)
            {pass;}
#       if (beresp.http.Cache-Control ~ "(private|no-cache|no-store)")
#           {pass;}
        if (req.http.Authorization && !beresp.http.Cache-Control ~ "public")
            {pass;}
        deliver;
}

Configuring MediaWiki

Since Traffic Server captures the end-user browser requests and forwards those which require processing by Apache through the localhost loopback connector, Apache will always see the loopback address as the direct remote address. However, as Traffic Server forwards requests to Apache, it is configured to add the "X-Forwarded-For" header so that the remote address from the outside world is preserved. MediaWiki must be configured to use the "X-Forwarded-For" header in order to correctly display user addresses in Special:RecentChanges.
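The address resolution MediaWiki performs can be sketched as follows (a simplified Python illustration, not MediaWiki's actual code; the trusted-proxy set plays the role of $wgSquidServers):

```python
# When the direct peer is a trusted cache, the real client address is
# taken from the X-Forwarded-For header instead of the socket address.
TRUSTED_PROXIES = {"127.0.0.1"}  # illustrative: a cache on localhost

def effective_client_ip(remote_addr, x_forwarded_for=None):
    """Return the address to log/display for this request."""
    if remote_addr in TRUSTED_PROXIES and x_forwarded_for:
        # The header may hold a chain "client, proxy1, proxy2";
        # the leftmost entry is the originating browser.
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr
```

Requests arriving from an untrusted address keep their socket address, so a spoofed X-Forwarded-For header from an arbitrary client is ignored.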

The required configuration for Traffic Server is essentially the same as for Squid, with the following config assignments in LocalSettings.php:

$wgUseSquid = true;
$wgSquidServers = array('');
// $wgInternalServer = '';           // Internal server name as known to Squid. NOT SET.
// $wgMaxSquidPurgeTitles = 0        // Maximum no of pages to purge in one client operation. NOT SET.
// $wgSquidMaxage =                  // Cache timeout for the Squid. NOT SET.
$wgUseXVO = true;                    // Send X-Vary-Options header for better caching.
$wgDisableCounters = true;           // Disable collection of Page counters
$wgShowIPinHeader = false;           // Disable display of IP for guests as this frustrates caching

These settings serve two main purposes:

  • If a request is received via the Traffic Server cache, the MediaWiki logs need to display the IP address of the user, not that of Traffic Server. A Special:RecentChanges in which every edit is reported from the cache's own address isn't meaningful. Listing this address in $wgSquidServers lets the application know that the user IP address should be obtained from the 'X-Forwarded-For' header.
  • Whenever a page or file is modified on the wiki, MediaWiki must be configured to send a PURGE notification to any caches which serve its content. $wgSquidServers contains the list of such servers. (The name is misleading: Squid was simply the first cache supported by MediaWiki.)
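The notification is an ordinary HTTP request with the PURGE method, sent to each cache in $wgSquidServers. A hedged sketch of how such a request could be constructed (the /wiki/<title> path layout is an assumption about the site's URL scheme, and build_purge_request is an illustrative helper, not a MediaWiki function):

```python
from urllib.parse import quote

def build_purge_request(host, title):
    """Build a raw HTTP PURGE request for one cache server.

    Assumes the common /wiki/<title> article URL layout; spaces in
    titles become underscores, as MediaWiki does in page URLs.
    """
    path = "/wiki/" + quote(title.replace(" ", "_"))
    return ("PURGE %s HTTP/1.0\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n\r\n" % (path, host))
```

The vcl_recv/vcl_hit/vcl_miss handlers shown earlier accept exactly this method, check the sender against the purge ACL, and return 200 whether or not the page was cached.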

Note that the configuration is already tuned to support PHP APC acceleration for both MediaWiki code and metadata caching.

Outstanding issues

  • Logging and Page Stats. Most inbound requests will be handled by the Traffic Server cache, so the internal stats collected by MediaWiki will only reflect cache misses. We need to think about how we handle logfile analysis and stats in general. I have turned off page counters as these will only reflect cache misses in future.
  • Decision to retain a MediaWiki 1.15 baseline. For MediaWiki v1.16.x and later, internationalisation can add a material D/B load. For this and other schema changes, we've decided to stick with the last stable MW 1.15.x version (1.15.6) as the S/W baseline.

Apache configuration

The Apache server is configured to listen on the standard port at the localhost IP, and accepts all requests from Traffic Server:


The Apache web server's default logging format would only list the loopback address as the connecting address. Hence an extra "cached" logging option is enabled[5], which captures the originating browser's address by using the "X-Forwarded-For" header passed by Traffic Server.

LogFormat "%{X-Forwarded-for}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" cached
CustomLog /var/log/apache2/access.log cached
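For log analysis, a line in this "cached" format can be parsed with a short Python sketch (the sample line and field names are illustrative; only the log format itself comes from the configuration above):

```python
import re

# One line of the "cached" format: the leading field is the
# X-Forwarded-For value rather than the connecting (loopback) address.
LOG_RE = re.compile(
    r'^(?P<client>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"$')

def parse_cached_log_line(line):
    """Return a dict of named fields, or None if the line does not match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None
```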

See also


  1. http://trafficserver.apache.org/
  2. http://trafficserver.apache.org/docs/v2/admin/reverse.htm
  3. http://wikia.googlecode.com/svn/utils/varnishhtcpd/mediawiki.vcl
  4. http://varnish.projects.linpro.no/wiki/FAQ/Compression
  5. http://httpd.apache.org/docs-2.2/mod/mod_log_config.html