
The Evolution of the Web Gateway


For the duration of my career in IT security (16+ years now), I have worked with and tested a large number of security products that attempt to protect users and businesses from threats on the Web. Back in 1998, the Web was just emerging as a platform for doing business. The biggest risk organizations faced was filling up their pipe with traffic, and caching proxy solutions such as the open-source Squid had emerged to optimise bandwidth usage. However, caching was just the tip of the iceberg in terms of new risks.

The types of issues that needed to be addressed then were seen across the board: users going to adult, gambling and non-work-related sites, downloading viruses, large music files, keygen tools and licence-cracking software, and many other things that clogged up PCs and expensive Internet connections. As far back as 1999, when working as a Security Consultant, I was deploying emerging Web gateway solutions such as Websweeper (Content Technologies), WebWasher (Secure Computing, now Intel Security) and Finjan to mitigate the growing number of risks on the Web. Whether it was a badly written Java app or an AV engine that would die after an update, the problems of detection, categorisation, performance and scale were immense. Getting all of these products to play nicely in a chain was akin to walking a tightrope!

[Image: xkcd comic on malware]

Since then, major innovations have been made in Web security to reduce this multi-vendor bloat, as vendors saw an opportunity to simplify and consolidate for customers. These innovations included:

  • purpose-built hardware for scalable performance
  • a move into the cloud to support mobile workers spread across the world
  • reputation filtering to block access to high-risk sites even when URL category and AV engine verdicts said their content was safe (a rough sketch of this layered decision follows the list)
  • sandboxes in the cloud that stream web traffic to identify malicious websites serving exploits
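
To make the reputation-filtering idea concrete, here is a minimal, hypothetical sketch (in Python) of how a gateway policy might combine a URL category verdict, an AV scan result and a reputation score, so that a low reputation can still block a site the other layers consider safe. The names, thresholds and scoring scale are invented for illustration and do not reflect any particular product.

```python
# Hypothetical illustration only: how layered web-gateway verdicts might combine.
from dataclasses import dataclass

BLOCKED_CATEGORIES = {"adult", "gambling", "malware"}   # assumed policy categories
REPUTATION_THRESHOLD = 40                               # assumed: below this = high risk

@dataclass
class Verdict:
    category: str     # e.g. "news", "gambling" (from URL categorisation)
    av_clean: bool    # result of the AV engine scan
    reputation: int   # 0 (worst) to 100 (best), invented scale

def allow_request(verdict: Verdict) -> bool:
    """Allow only if every layer agrees the site is acceptable."""
    if verdict.category in BLOCKED_CATEGORIES:
        return False  # blocked by category policy
    if not verdict.av_clean:
        return False  # blocked by AV verdict
    if verdict.reputation < REPUTATION_THRESHOLD:
        return False  # low reputation overrides an otherwise 'safe' verdict
    return True

# A site whose category and AV results look fine, but whose reputation is poor:
print(allow_request(Verdict(category="news", av_clean=True, reputation=25)))  # False
```

The point of the sketch is simply that reputation acts as an extra, independent veto on top of the other checks, rather than a replacement for them.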

All of these have incrementally improved detection for organizations but, crucially, have not solved the problem entirely. Detecting ‘known’ threats on the Web is proving very hard for the industry, let alone ‘unknown’ threats. Popular, well-known malware families and exploit kits such as Dridex and Angler continue to evade detection despite the industry’s best efforts to reduce the risks they pose to users and the Internet. Furthermore, strategic threat actors are increasingly hiding in the open. Attacks from APT threat actors, such as watering-hole attacks (e.g. the Council on Foreign Relations, December 2012) and third-party malvertising attacks like the one that recently impacted Yahoo!, mean that every website poses a potential threat to users and their devices today. These websites would typically be considered safe, posing no risk to users; yet endpoints visiting them can be compromised and then need validating and remediating at significant cost in time and effort.

Businesses face the difficult task of securing users on the Web while having to allow them to access sites, knowing that the security measures in place cannot detect and stop all the threats. Put simply, we need a new way to protect and secure our users on the primary resource every person in the workplace uses every hour of the day, without relying on detection of ‘bad’ stuff.

 

Tags: malware, web, gateway, proxy
