You probably know that Varnish is an excellent caching solution, but it has one major limitation: it cannot handle SSL connections. Fortunately, there is a solution called "SSL offloading", where a frontend decrypts the SSL traffic, passes plain HTTP to the cache, and re-encrypts the responses on the way back. The following diagram should help you understand how it works:
How does it work?
Let's start with the simplest case, non-SSL traffic:
The client requests data from the Varnish server:
If Varnish has the data in cache, it replies directly to the client
If Varnish does not have the data in cache:
It forwards the request to the Nginx backend, which replies to Varnish so the response can be cached
Varnish then sends the result back to the client
Now for SSL traffic:
The client requests data from the Nginx frontend over SSL
The Nginx frontend decrypts the SSL traffic and forwards the plain HTTP traffic to Varnish
Varnish checks its cache and forwards the request to the Nginx backend if the data is not cached
The Nginx backend replies to Varnish with the requested data
Varnish sends the data back to the Nginx frontend for SSL re-encryption
The Nginx frontend sends the result to the client
Of course, you don't need multiple machines to make this work. Here I'm using a single machine with a single Nginx instance listening on two different ports.
Installation
Of course, you need to have Nginx, PHP-FPM, and Varnish installed:
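On a Debian or Ubuntu system, for example, the installation could look something like this (the package names, especially the PHP one, are an assumption and may differ on your distribution):

# Debian/Ubuntu example: web server, PHP FastCGI manager and the cache
apt-get install nginx php5-fpm varnish

Varnish also has to listen on port 80 instead of its default port. On Debian-like systems this is usually configured in /etc/default/varnish; the options below are only a sketch and the cache size is an arbitrary example:

# /etc/default/varnish -- make Varnish answer on port 80
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"

Then comes the Varnish configuration itself (written in Varnish 3.x VCL syntax):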
# This is a basic VCL configuration file for varnish. See the vcl(7)
# man page for details on VCL syntax and semantics.
#
# Default backend definition. Set this to point to your content
# server.
#
# Redirect to Nginx Backend if not in cache
backend default {
    .host = "127.0.0.1";
    .port = "8000";
}

acl purge {
    "127.0.0.1";
}

# vcl_recv is called whenever a request is received
sub vcl_recv {
    if (req.restarts == 0) {
        if (req.http.x-forwarded-for) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
    if (req.http.X-Real-IP) {
        set req.http.X-Forwarded-For = req.http.X-Real-IP;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    # Serve objects up to 2 minutes past their expiry if the backend
    # is slow to respond.
    set req.grace = 120s;
    set req.backend = default;
    if (!req.http.X-Forwarded-Proto) {
        set req.http.X-Forwarded-Proto = "http";
        set req.http.X-Forwarded-Port = "80";
        set req.http.X-Forwarded-Host = req.http.host;
    }
    # This uses the ACL action called "purge". Basically if a request to
    # PURGE the cache comes from anywhere other than localhost, ignore it.
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return (lookup);
    }
    # Pass any requests that Varnish does not understand straight to the backend.
    if (req.request != "GET" && req.request != "HEAD" &&
        req.request != "PUT" && req.request != "POST" &&
        req.request != "TRACE" && req.request != "OPTIONS" &&
        req.request != "DELETE") {
        return (pipe);
    } /* Non-RFC2616 or CONNECT which is weird. */
    # Pass anything other than GET and HEAD directly.
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    } /* We only deal with GET and HEAD by default */
    # Pass requests from logged-in users directly.
    if (req.http.Authorization || req.http.Cookie) {
        return (pass);
    } /* Not cacheable by default */
    # Pass any requests with the "If-None-Match" header directly.
    if (req.http.If-None-Match) {
        return (pass);
    }
    # Force lookup if the request is a no-cache request from the client.
    if (req.http.Cache-Control ~ "no-cache") {
        ban_url(req.url);
    }
    return (lookup);
}

sub vcl_pipe {
    # This is otherwise not necessary if you do not do any request rewriting.
    set req.http.connection = "close";
}

# Called if the cache has a copy of the page.
sub vcl_hit {
    if (req.request == "PURGE") {
        ban_url(req.url);
        error 200 "Purged";
    }
    if (!obj.ttl > 0s) {
        return (pass);
    }
}

# Called if the cache does not have a copy of the page.
sub vcl_miss {
    if (req.request == "PURGE") {
        error 200 "Not in cache";
    }
}

# Called after a document has been successfully retrieved from the backend.
sub vcl_fetch {
    set beresp.grace = 120s;
    if (beresp.ttl < 48h) {
        set beresp.ttl = 48h;
    }
    if (!beresp.ttl > 0s) {
        return (hit_for_pass);
    }
    if (beresp.http.Set-Cookie) {
        return (hit_for_pass);
    }
    if (req.http.Authorization && !beresp.http.Cache-Control ~ "public") {
        return (hit_for_pass);
    }
}

sub vcl_pass {
    return (pass);
}

sub vcl_hash {
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    return (hash);
}

sub vcl_deliver {
    # Debug
    remove resp.http.Via;
    remove resp.http.X-Varnish;
    # Add a header to indicate a cache HIT/MISS
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
        set resp.http.X-Age = resp.http.Age;
        remove resp.http.Age;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    return (deliver);
}

sub vcl_error {
    set obj.http.Content-Type = "text/html; charset=utf-8";
    set obj.http.Retry-After = "5";
    synthetic {"
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
 "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
  <head>
    <title>"} + obj.status + " " + obj.response + {"</title>
  </head>
  <body>
    <h1>Error "} + obj.status + " " + obj.response + {"</h1>
    <p>"} + obj.response + {"</p>
    <h3>Guru Meditation:</h3>
    <p>XID: "} + req.xid + {"</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
"};
    return (deliver);
}

sub vcl_init {
    return (ok);
}

sub vcl_fini {
    return (ok);
}
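The VCL above expects the Nginx backend on 127.0.0.1:8000, while the frontend described earlier terminates SSL on port 443 and forwards the decrypted traffic to Varnish on port 80. Below is a minimal sketch of what the single Nginx instance could look like; the server name, certificate paths, document root and PHP-FPM socket are placeholders you will have to adapt:

# --- SSL frontend: terminates SSL and hands plain HTTP to Varnish on port 80 ---
server {
    listen 443 ssl;
    server_name example.com;                            # placeholder
    ssl_certificate     /etc/nginx/ssl/example.com.crt; # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.com.key; # placeholder

    location / {
        proxy_pass http://127.0.0.1:80;                 # decrypted traffic goes to Varnish
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;       # tell the backend the original scheme
        proxy_set_header X-Forwarded-Port 443;
    }
}

# --- Backend: the real content server that Varnish caches, on port 8000 ---
server {
    listen 8000;
    server_name example.com;                            # placeholder
    root /var/www/example.com;                          # placeholder
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;       # placeholder, depends on your PHP-FPM setup
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Note that the frontend sets X-Forwarded-Proto itself; the vcl_recv block above only fills in that header when it is missing, so plain HTTP requests hitting Varnish directly end up with X-Forwarded-Proto: http.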
Testing
The tests should be performed in the right order. Personally, I wasted a lot of time because I hadn't made the proper checks beforehand. Check things in this order (a few example commands follow the list):
Check the Nginx backend on port 8000
Check Varnish access on port 80
Check the Nginx SSL frontend on port 443
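For example, the checks could be done with curl; example.com below is a placeholder for your own domain, and -k is only needed if you use a self-signed certificate:

# 1. Nginx backend answers directly on port 8000
curl -I http://127.0.0.1:8000/

# 2. Varnish answers on port 80; run it twice and look at the X-Cache header
#    set in vcl_deliver (MISS on the first request, HIT on the second)
curl -I http://127.0.0.1/
curl -I http://127.0.0.1/

# 3. Nginx SSL frontend on port 443, i.e. the whole chain
curl -kI https://example.com/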
Add this index to your vhost to display the header information directly on the page: