Squid is a proxy server and web cache daemon. It has a wide variety of uses, from speeding up a web server by caching repeated requests, to caching web, DNS and other computer network lookups for a group of people sharing network resources, to aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols including TLS, SSL, Internet Gopher and HTTPS. The development version of Squid (3.1) includes IPv6 and ICAP support.

Squid has been developed for many years. Early work on the program was completed at the University of California, San Diego and funded via two grants from the National Science Foundation. Squid is now developed almost exclusively through volunteer efforts.

Squid is primarily designed to run on Unix-like systems but it also runs on Windows-based systems. Released under the GNU General Public License, Squid is free software.

Web Proxy

Caching is a way to store requested Internet objects (e.g. data such as web pages) available via the HTTP, FTP, and Gopher protocols on a system closer to the requesting site. Web browsers can then use the local Squid cache as a proxy HTTP server, reducing access time as well as bandwidth consumption. This is often useful for Internet service providers wanting to increase speed for their customers, and for LANs that share an Internet connection. Because it is also a proxy (i.e. it behaves like a client on behalf of the real client), it can provide some anonymity and security. However, it can also introduce significant privacy concerns, as it can log a lot of data, including requested URLs, the exact date and time of each request, the name and version of the requester’s web browser and operating system, and the Referer header.
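In practice, a basic caching proxy needs only a handful of directives in squid.conf. The lines below are a minimal sketch rather than a complete configuration; the listening port, the 192.168.0.0/16 client range, the cache path and the cache sizes are assumptions to be adapted to the local network (directive names follow the Squid 3.x syntax):

    # Listen on the conventional forward-proxy port
    http_port 3128

    # Allow only clients from the local network to use the proxy
    acl localnet src 192.168.0.0/16
    http_access allow localnet
    http_access deny all

    # Keep up to 1000 MB of objects on disk and 256 MB in memory
    cache_dir ufs /var/spool/squid 1000 16 256
    cache_mem 256 MB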

A client program (e.g. a browser) either has to explicitly specify the proxy server it wants to use (typical for ISP customers), or it can use a proxy without any extra configuration: “transparent caching”, in which case all outgoing HTTP requests are intercepted by Squid and all responses are cached. The latter is typically a corporate set-up (all clients are on the same LAN) and often introduces the privacy concerns mentioned above.
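Transparent caching needs two pieces: a Squid port marked for interception and a firewall rule on the gateway that redirects port-80 traffic to it. The following is a sketch for a Linux gateway; the port number and interface name are assumptions, and the option keyword is version-dependent (Squid 3.1 uses “intercept”, older 2.x releases use “transparent”):

    # Squid side: a dedicated interception port
    http_port 3129 intercept

    # Gateway side (not part of squid.conf): redirect outbound HTTP to Squid, e.g.
    #   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129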

Squid has some features that can help anonymize connections, such as disabling or changing specific header fields in a client’s HTTP requests. Whether these are set, and what they are set to do, is up to the person who controls the computer running Squid. People requesting pages through a network that transparently uses Squid will usually have no idea whether this information is being logged.
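In Squid 3.x this is done with the request_header_access directive (older 2.x releases call it header_access); which headers to suppress is a local policy decision, so the lines below are only a sketch of one possible policy:

    # Strip headers that identify the client or the referring page
    request_header_access Referer deny all
    request_header_access User-Agent deny all
    request_header_access From deny all

    # Do not announce the proxy in the Via header, and do not pass the
    # client's real address upstream (Squid sends "unknown" instead)
    via off
    forwarded_for off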

Reverse Proxy

The above set-up—caching the contents of an unlimited number of webservers for a limited number of clients—is the classical one. Another set-up is “reverse proxy” or “webserver acceleration” (using http_port 80 accel vhost). In this set-up, the cache serves an unlimited number of clients for a limited number of—or just one—web servers.

As an example, if slow.example.com is a “real” web server, and www.example.com is the Squid cache server that “accelerates” it, the first time any page is requested from www.example.com, the cache server would fetch the actual page from slow.example.com, but later requests would get the stored copy directly from the accelerator (for a configurable period, after which the stored copy would be discarded). The end result, without any action by the clients, is less traffic to the source server, meaning less CPU and memory usage, and less need for bandwidth. This does, however, mean that the source server cannot accurately report on its traffic numbers without additional configuration, as all requests would seem to have come from the reverse proxy. A way to adapt the reporting on the source server is to use the X-Forwarded-For HTTP header added by the reverse proxy to get the real client’s IP address.
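A reverse-proxy configuration for this example might look like the sketch below, again using the Squid 3.x accelerator syntax; the hostnames come from the example above, and everything else is an assumption:

    # Accept requests for the accelerated site on port 80
    http_port 80 accel vhost

    # Forward cache misses to the real (origin) web server
    cache_peer slow.example.com parent 80 0 no-query originserver name=origin

    # Only accelerate the published hostname
    acl accel_sites dstdomain www.example.com
    cache_peer_access origin allow accel_sites
    http_access allow accel_sites

On the origin server, log analysis would then read the client address from the X-Forwarded-For header instead of the connecting address, since every connection arrives from the accelerator.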

It is possible for a single Squid server to serve both as a normal and a reverse proxy simultaneously.
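Since Squid can listen on several ports with different modes, this amounts to combining the earlier sketches in one squid.conf, with one listener per role:

    http_port 3128            # explicit forward proxy for LAN clients
    http_port 80 accel vhost  # reverse proxy ("accelerator") for the published site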

Source: Wikipedia
