Log Files on Pantheon

Use log files to identify errors, track response times, analyze visitors, and more on your WordPress or Drupal site.


Log files track and record your site's activity to help you find, debug, and isolate current or potential problems on your site. Each environment (Multidev, Dev, Test, and Live) has its own log files, which can be obtained via SFTP. Application-level logs can be accessed through Drupal directly. In addition to logs, New Relic® Performance Monitoring is a great way to help diagnose and fix errors and performance bottlenecks.

The server timezone and all log timestamps are in UTC (Coordinated Universal Time).
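
Because log timestamps are in UTC, you may want to convert them to your local timezone when correlating with other events. As a sketch, GNU date can do the conversion (the timezone name here is just an example):

```shell
# Convert a UTC log timestamp to another timezone (requires GNU date).
TZ="America/New_York" date -d "2016-02-19 02:00:00 UTC" +"%Y-%m-%d %H:%M:%S %Z"
# → 2016-02-18 21:00:00 EST
```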

Available Logs

| Log | Retention Policy | Comments |
|---|---|---|
| newrelic.log | | New Relic log; check this log if an environment is not logging. |
| nginx-access.log | Up to 60 days of logs | Web server access log. Do not consider this log canonical, as it is wiped if the application container is reset or rebuilt. See Parsing nginx Access Logs with GoAccess. |
| nginx-error.log | 1MB of log data | Web server error log. |
| php-error.log | 1MB of log data | PHP fatal error log; will not contain stack overflows. Fatal errors from this log are also shown in the Dashboard. |
| php-fpm-error.log | 1MB of log data | PHP-FPM-generated collection of stack traces of slow executions, similar to MySQL's slow query log. See PHP Slow Log. |
| mysqld-slow-query.log | 10MB of log data | Log of MySQL queries that took more than 120 seconds to execute. Located in the database's logs/ directory. |
| mysqld.log | 1MB of log data | Log of established MySQL client connections and statements received from clients. Also located in the database's logs/ directory. |
| mysql-bin.0001 | | MySQL binary logs. Located in the database's data/ directory. |

Rotated log files are archived within the /logs directory on application containers and database servers.

You may find that this directory contains sub-directories for services like Nginx and PHP (e.g. /logs/nginx/nginx-access.log-20160617.gz or /logs/php/php-error.log-20160617.gz) or log files directly in logs (e.g. /logs/mysqld-slow-query.log-20160606).
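
Once you have downloaded a logs directory locally (see Access Logs Via SFTP), you can read the current log and its rotated .gz archives together in one stream. A minimal sketch, assuming the directory layout shown above:

```shell
# Count requests across the current nginx access log and any
# rotated .gz archives. gzip -cdf decompresses .gz files and
# passes plain-text files through unchanged.
gzip -cdf logs/nginx/nginx-access.log* | wc -l
```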

 Note

When appservers are migrated as a regular part of platform maintenance, log files are destroyed as they are appserver-specific. Consider automating the collection of logs regularly to maintain historical log data.

Access Logs Via SFTP

Logs are stored within application containers that house your site's codebase and files. Add an SSH key within your User Dashboard to enable passwordless access and avoid authentication prompts. Otherwise, provide your Pantheon Dashboard credentials when prompted.

In the Connection Information section of the dashboard, the hostnames follow a pattern:

<env>.<site-uuid>@<type>.<env>.<site-uuid>.drush.in
| Type | Env | Site UUID |
|---|---|---|
| appserver | dev, test, live, <multidev-env> | e.g. c5c75825-5cd4-418e-8cb0-fb9aa1a7f671, as found in https://dashboard.pantheon.io/sites/<site-uuid> |
| dbserver | Same as above | Same as above |

Downloading Logs

Application Log Files

  1. Access the Site Dashboard and desired environment (Multidev, Dev, Test, or Live).

  2. Click Connection Info and copy the SFTP Command Line command.

  3. Open a terminal window and paste the SFTP connection command.

  4. Run the following SFTP command in terminal:

    get -r logs

You now have a local copy of the logs directory.

The directory structure will resemble:

logs
├── php
│   ├── newrelic.log
│   ├── php-error.log
│   ├── php-fpm-error.log
│   └── php-slow.log
└── nginx
    ├── nginx-access.log
    └── nginx-error.log
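
With a local copy in place, you can search the logs using standard command-line tools. For example, to surface recent fatal errors from the PHP error log (paths per the structure above):

```shell
# Show the last ten lines mentioning "fatal" (case-insensitive)
# in the downloaded PHP error log.
grep -i "fatal" logs/php/php-error.log | tail -n 10
```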

Database Log Files

  1. Access the Site Dashboard and desired environment (Multidev, Dev, Test, or Live).

  2. Click Connection Info and copy the SFTP Command Line command.

  3. Edit and execute the command by replacing appserver with dbserver:

    From:

    sftp -o Port=2222 dev.de305d54-75b4-431b-adb2-eb6b9e546014@appserver.dev.de305d54-75b4-431b-adb2-eb6b9e546014.drush.in

    To:

    sftp -o Port=2222 dev.de305d54-75b4-431b-adb2-eb6b9e546014@dbserver.dev.de305d54-75b4-431b-adb2-eb6b9e546014.drush.in
  4. Run the following SFTP command in terminal:

    get -r logs

You now have a local copy of the logs directory, which contains the following:

logs
├── mysqld-slow-query.log
└── mysqld.log

Automate Downloading Logs

Automate the process of accessing and maintaining these logs with a script.

Create a Script

Open your local terminal to create and access a new local directory:

mkdir $HOME/site-logs
cd $HOME/site-logs

Choose your preferred method from the tabs below and download the corresponding script. Move it to the site-logs directory you created, then use your favorite text editor to edit collect-logs.sh, replacing the xxxxxxx placeholders with the appropriate site UUID and environment.

The resulting log files might be large.

The script provides several modifiable variables described in its comments:

collect-logs-rsync.sh
#!/bin/bash
# Site UUID is REQUIRED: Site UUID from Dashboard URL, e.g. 12345678-1234-1234-abcd-0123456789ab
SITE_UUID=xxxxxxx
# Environment is REQUIRED: dev/test/live/or a Multidev
ENV=xxxxxxx

########### Additional settings you don't have to change unless you want to ###########
# OPTIONAL: Set AGGREGATE_NGINX to true if you want to aggregate nginx logs.
#  WARNING: If set to true, this will potentially create a large file
AGGREGATE_NGINX=false
# if you just want to aggregate the files already collected, set COLLECT_LOGS to FALSE
COLLECT_LOGS=true
# CLEANUP_AGGREGATE_DIR removes all logs except combined.logs from aggregate-logs directory.
CLEANUP_AGGREGATE_DIR=false


if [ $COLLECT_LOGS == true ]; then
echo "COLLECT_LOGS set to $COLLECT_LOGS. Beginning the process..."
for app_server in $(dig +short -4 appserver.$ENV.$SITE_UUID.drush.in);
do
    rsync -rlvz --size-only --ipv4 --progress -e "ssh -p 2222" "$ENV.$SITE_UUID@$app_server:logs" "app_server_$app_server"
done

# Include MySQL logs
for db_server in $(dig +short -4 dbserver.$ENV.$SITE_UUID.drush.in);
do
    rsync -rlvz --size-only --ipv4 --progress -e "ssh -p 2222" "$ENV.$SITE_UUID@$db_server:logs" "db_server_$db_server"
done
else
echo "Skipping the collection of logs..."
fi

if [ $AGGREGATE_NGINX == true ]; then
echo "AGGREGATE_NGINX set to $AGGREGATE_NGINX. Starting the process of combining nginx-access logs..."
mkdir aggregate-logs

for d in $(ls -d app*/logs/nginx); do
    for f in $(ls -f "$d"); do
    if [[ $f == "nginx-access.log" ]]; then
        cat "$d/$f" >> aggregate-logs/nginx-access.log
        echo "" >> aggregate-logs/nginx-access.log
    fi
    if [[ $f =~ \.gz ]]; then
        cp -v "$d/$f" aggregate-logs/
    fi
    done
done

echo "unzipping nginx-access logs in aggregate-logs directory..."
for f in $(ls -f aggregate-logs); do
    if [[ $f =~ \.gz ]]; then
    gunzip aggregate-logs/"$f"
    fi
done

echo "combining all nginx access logs..."
for f in $(ls -f aggregate-logs); do
    cat aggregate-logs/"$f" >> aggregate-logs/combined.logs
done
echo 'the combined logs file can be found in aggregate-logs/combined.logs'
else
echo "AGGREGATE_NGINX set to $AGGREGATE_NGINX. So we're done."
fi

if [ $CLEANUP_AGGREGATE_DIR == true ]; then
echo "CLEANUP_AGGREGATE_DIR set to $CLEANUP_AGGREGATE_DIR. Cleaning up the aggregate-logs directory"
find ./aggregate-logs/ -name 'nginx-access*' -print -exec rm {} \;
fi


collect-logs-sftp.sh
#!/bin/bash
# Site UUID is REQUIRED: Site UUID from Dashboard URL, e.g. 12345678-1234-1234-abcd-0123456789ab
SITE_UUID=xxxxxxx
# Environment is REQUIRED: dev/test/live/or a Multidev
ENV=xxxxxxx

########### Additional settings you don't have to change unless you want to ###########
# OPTIONAL: Set AGGREGATE_NGINX to true if you want to aggregate nginx logs.
#  WARNING: If set to true, this will potentially create a large file
AGGREGATE_NGINX=false
# if you just want to aggregate the files already collected, set COLLECT_LOGS to FALSE
COLLECT_LOGS=true
# CLEANUP_AGGREGATE_DIR removes all logs except combined.logs from aggregate-logs directory.
CLEANUP_AGGREGATE_DIR=false


if [ $COLLECT_LOGS == true ]; then
echo "COLLECT_LOGS set to $COLLECT_LOGS. Beginning the process..."
for app_server in $(dig +short -4 appserver.$ENV.$SITE_UUID.drush.in);
do
    echo "get -R logs \"app_server_$app_server\"" | sftp -o Port=2222 "$ENV.$SITE_UUID@$app_server"
done

# Include MySQL logs
for db_server in $(dig +short -4 dbserver.$ENV.$SITE_UUID.drush.in);
do
    echo "get -R logs \"db_server_$db_server\"" | sftp -o Port=2222 "$ENV.$SITE_UUID@$db_server"
done
else
echo "Skipping the collection of logs..."
fi

if [ $AGGREGATE_NGINX == true ]; then
echo "AGGREGATE_NGINX set to $AGGREGATE_NGINX. Starting the process of combining nginx-access logs..."
mkdir aggregate-logs

for d in $(ls -d app*/logs/nginx); do
    for f in $(ls -f "$d"); do
    if [[ $f == "nginx-access.log" ]]; then
        cat "$d/$f" >> aggregate-logs/nginx-access.log
        echo "" >> aggregate-logs/nginx-access.log
    fi
    if [[ $f =~ \.gz ]]; then
        cp -v "$d/$f" aggregate-logs/
    fi
    done
done

echo "unzipping nginx-access logs in aggregate-logs directory..."
for f in $(ls -f aggregate-logs); do
    if [[ $f =~ \.gz ]]; then
    gunzip aggregate-logs/"$f"
    fi
done

echo "combining all nginx access logs..."
for f in $(ls -f aggregate-logs); do
    cat aggregate-logs/"$f" >> aggregate-logs/combined.logs
done
echo 'the combined logs file can be found in aggregate-logs/combined.logs'
else
echo "AGGREGATE_NGINX set to $AGGREGATE_NGINX. So we're done."
fi

if [ $CLEANUP_AGGREGATE_DIR == true ]; then
echo "CLEANUP_AGGREGATE_DIR set to $CLEANUP_AGGREGATE_DIR. Cleaning up the aggregate-logs directory"
find ./aggregate-logs/ -name 'nginx-access*' -print -exec rm {} \;
fi


Collect Logs

Download logs by executing the script from within the site-logs directory:

bash collect-logs.sh

You can now access the logs from within the site-logs directory. More than one directory is generated for sites that use multiple application containers.

Frequently Asked Questions

How can I parse my Nginx access logs?

See Parsing nginx Access Logs with GoAccess for details.

What is the first line in nginx-access.log?

The first entry reflects an internal IP address of Pantheon's routing layer. The last entry provides a list of IPs used to serve the request, starting with the client IP and ending with internal IPs from the routing layer. For environments with HTTPS enabled, the load balancer IP address will be listed second, after the client IP.

The client IP for the following example is 122.248.101.126:

203.0.113.56 - - [19/Feb/2016:02:00:00 +0000]  "GET /edu HTTP/1.1" 200 13142 "https://pantheon.io/agencies/pantheon-for-agencies" "Mozilla/5.0 (Windows NT 6.3; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0" 0.399 "122.248.101.126, 50.57.202.75, 10.x.x.x, 10.x.x.x"
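
Because the client IP is the first entry in that final quoted list, you can extract it with awk by splitting each line on double quotes. This is a sketch that assumes the standard field layout shown above (the IP list is the eighth quote-delimited field):

```shell
# Extract the client IP from each line of nginx-access.log.
# With -F'"', the trailing quoted IP list is field 8; the client
# IP is the first comma-separated entry in that list.
awk -F'"' '{ split($8, ips, ", "); print ips[1] }' nginx-access.log
```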

Can I log to the system logger and access syslog?

No, syslog is not available. Technically, you can log Drupal events using the syslog module, but you won't be able to read or access them. You can use the error_log function to log to the php-error.log, which is accessible in the logs directory.

Can I access Apache Solr logs?

No, access to Apache Solr logs is not available. For more information on debugging Solr, refer to the documentation on Pantheon Search.

Can I download Varnish logs?

No, Varnish logs are not available for download.

How do I enable error logging for WordPress?

 Warning

The steps in this section enable debug logging. Debug logging increases resource overhead and presents a security risk. It is not recommended for production environments.

To minimize risk exposure, especially in a Live environment, disable debug logging when you are done.

Enable the WP_DEBUG and WP_DEBUG_LOG constants on Development environments (Dev and Multidevs) to write errors to wp-content/uploads/debug.log and show all PHP errors, notices, and warnings on the page. We suggest setting the WordPress debugging constants per environment in wp-config.php:

wp-config.php
// All Pantheon Environments.
if (defined('PANTHEON_ENVIRONMENT')) {
  // Turns on WordPress debug settings in development and multidev environments, and disables in test and live.
  if (!in_array(PANTHEON_ENVIRONMENT, array('test', 'live'))) {
    // Debugging enabled.
    if (!defined('WP_DEBUG')) {
      define( 'WP_DEBUG', true );
    }
    if (!defined('WP_DISABLE_FATAL_ERROR_HANDLER')) {
      define( 'WP_DISABLE_FATAL_ERROR_HANDLER', true ); // 5.2 and later
    }
    if (!defined('WP_DEBUG_DISPLAY')) {
      define( 'WP_DEBUG_DISPLAY', true ); // Requires WP_DISABLE_FATAL_ERROR_HANDLER set to true.
    }
    define( 'WP_DEBUG_LOG', __DIR__ . '/wp-content/uploads/debug.log' ); // Moves the log file to a location writable while in Git mode. Requires WP 5.1 or later.
  }
  // WordPress debug settings in Test and Live environments.
  else {
    // Debugging disabled.
    ini_set( 'log_errors','Off');
    ini_set( 'display_errors','Off');
    ini_set( 'error_reporting', E_ALL );
    define( 'WP_DEBUG', false);
    define( 'WP_DEBUG_LOG', false);
    define( 'WP_DISABLE_FATAL_ERROR_HANDLER', false );
    define( 'WP_DEBUG_DISPLAY', false);
  }
}

By default, the WordPress debug log is written to /wp-content/, which is not writable on Test or Live environments. The path can be overridden to the /wp-content/uploads/ folder, as shown above.

How can I access the Drupal event log?

By default, Drupal logs events using the Database Logging module (dblog). PHP fatal errors can sometimes be found in these logs, depending on how far Drupal bootstrapped before failing. You can access the event logs in a couple of ways:

  • Visit /admin/reports/dblog once you've logged in as administrator.

  • Using Terminus:

    terminus drush <site>.<env> -- watchdog-show
  • Terminus can invoke Drush commands to "watch" events in real-time; --tail can be used to continuously show new watchdog messages until interrupted (Control+C).

    terminus drush <site>.<env> -- watchdog-show --tail

My Drupal database logs are huge. Should I disable dblog?

We do not recommend disabling dblog. Best practice is to find and resolve the problems. PHP notices, warnings, and errors mean more work for PHP, the database, and your site. If your logs are filling up with PHP messages, find and eliminate the root cause of the problems. The end result will be a faster site.

How do I access logs in environments with multiple containers?

Live environments for Basic and Performance sites on paid plans have one main and one failover container that can contain logs. Performance Medium plans and above have more than one container in the Live and Test environments. In order to download the logs from each application container, use the shell script above.

Can I tail server logs?

Not directly. You can download your logs locally using SFTP, then review them with any tool on your workstation.

You can also create the logwatcher.sh script below, which uses Terminus and the Terminus Rsync Plugin to download log files and display the last several lines.

  1. If you're working on multiple projects locally, create a logs directory in the local Git repository for each one you want to watch logs for.

  2. Add logs/* to the project's .gitignore file.

  3. In your project's logs directory, create logwatcher.sh:

    logwatcher.sh
    #!/bin/bash
    export TERMINUS_HIDE_UPDATE_MESSAGE=1
    
    LOGPATH=~/projects/mysite/logs/
    LOGFILE=php-error.log
    SITE=sitename
    ENV=environment
    
    touch $LOGPATH/$LOGFILE
    terminus rsync $SITE.$ENV:logs/php/$LOGFILE $LOGPATH
    
    tail $LOGPATH/$LOGFILE
  4. Update the variables:

    • LOGPATH points to the logs directory in your project,
    • SITE should match your site name,
    • ENV is the environment you want to watch logs from
  5. Make the script executable:

    chmod +x ~/projects/mysite/logs/logwatcher.sh
  6. Now you can use watch (available on macOS via Homebrew) to keep an updated view of the logs:

    watch -n2 ~/projects/mysite/logs/logwatcher.sh

    Stop the process with CTRL-C.

See Also