What Needs to be Heeded When Checking Web Applications?
In most commercial and non-commercial areas the Internet has developed into an indispensable medium that offers users a huge number of interesting and important applications. Information procurement of any kind, buying services or products, but also bank transactions and virtual official errands can be conducted easily and comfortably from the screen. Waiting times are a thing of the past, and while we used to have to search laboriously for information, we now have search engines that deliver results in a matter of seconds. Browsers and the web thus dominate the majority of daily procedures in both our private and working lives. To facilitate all of these processes, a broad range of applications is required that are provided more or less publicly. Their range extends from simple applications for searching for product information or forms, up to complex systems for auctions, product orders, Internet banking or processing quotations. They even control access to the company’s own intranet.
A major reason for these rapid developments is the almost unlimited possibilities to simplify, accelerate and make business processes more productive. Most enterprises and public authorities also see the web as an opportunity to make enormous cost savings, benefit from additional competitive advantages and open up new business opportunities. This requires a growing number of - and more powerful - applications that provide the Internet user with the required functions as fast and simply as possible.
Developers of such software programs are under enormous cost and time pressure. An increasing number of companies want to use the functionality of these so-called web applications for their business processes and offer their products, services and information as quickly, simply and diversely as possible. As a result, guidelines for secure programming and release processes are often either unavailable or not heeded. In the end, this leads to programming errors, because major security aspects are deliberately disregarded or simply forgotten. Productive use usually follows soon after development, without the developers having sufficiently checked the security status of the web applications.
As a result, critical business processes that seemed secure within the corporate perimeter are suddenly freely accessible on the web. Conventional security strategies such as network firewalls or Intrusion Prevention Systems are no longer expedient here. Particularly on the web, the security requirements for applications have a different focus and are much higher than for traditional network security. The requirements on service providers who conduct security checks on business-critical systems with penetration tests should be correspondingly higher.
While most companies have in the meantime protected their networks to a relatively high standard, hackers have long since moved on to a different playing field: they now exploit security loopholes in web applications. There are several reasons for this. Compared with the network level, little skill is needed to operate on the web. This not only makes legitimate use easier, it also encourages the malicious misuse of web applications. In addition, the Internet offers many possibilities for concealment and anonymous action. The risk for attackers therefore remains relatively low, and so does the inhibition threshold for hackers.
Many web applications that are still active today were developed at a time when awareness of application security on the Internet had not yet been raised. There were hardly any threat scenarios, because attackers focused on the internal IT structure of companies. In the first years of web usage in particular, professional software engineering was not necessarily at the top of the agenda, so web applications usually went into productive operation without any clear security standards. Their security level depended solely on how highly the individual developers rated this aspect and how much they knew about it.
Some assume that an unsecured web application cannot cause any damage as long as it does not perform any security-relevant functions or provide any sensitive data. This is completely wrong; the opposite is the case. One single unsecured web application endangers the security of downstream systems such as application or database servers. Equally wrong is the common misconception that the telecom providers’ security services would protect the data. Providers are not responsible for the secure operation of web applications, regardless of where they are hosted. It is the suppliers and operators of web applications who bear the great responsibility here towards all those who use their applications, and it is one they often do not fulfill.
Web Applications Under Fire
Web applications are exposed to a broad range of attack methods, including:
- All injection attacks (such as SQL Injection, Command Injection, LDAP Injection, Script Injection, XPath Injection)
- Cross Site Scripting (XSS)
- Hidden Field Tampering
- Parameter Tampering
- Cookie Poisoning
- Buffer Overflow
- Forceful Browsing
- Unauthorized access to web servers
- Search Engine Poisoning
- Social Engineering
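The first item on the list above, SQL Injection, can be illustrated with a minimal sketch. The table, the user data and the payload below are invented for illustration; the point is only the contrast between concatenating user input into a query and passing it as a bound parameter.

```python
import sqlite3

# Illustrative in-memory database; names and data are not from the article.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # Vulnerable: user input is concatenated directly into the SQL statement.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Safe: a parameterized query treats the input purely as data.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"               # classic injection payload
print(find_user_vulnerable(payload))  # returns every row despite the bogus name
print(find_user_safe(payload))        # returns nothing: no user has that literal name
```

The vulnerable variant turns the payload into a condition that is always true and leaks every row; the parameterized variant returns an empty result, because the payload is compared literally against the stored names.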
A more recent trend: attackers have started to combine these methods more often in order to achieve even higher success rates. And it is no longer just the large corporations who are targeted, since they usually guard and conceal their systems better. Instead, an increasing number of smaller companies are now in the crossfire.
One example: attackers know that a certain commercial software program is widely used for shopping carts in online shops, and that smaller companies rarely patch its weak points. They launch automated attacks in order to identify, with high efficiency, as many worthwhile targets on the web as possible. In this step they already gather the required data about the underlying software, the operating system or the database from web applications that give away such information freely. The attackers then only have to evaluate this information, which gives them an extensive basis for later targeted attacks.
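The information gathering described above often needs nothing more than reading the version banners that servers volunteer in their HTTP response headers. The following sketch parses an invented sample response; the header values are illustrative, not real reconnaissance output.

```python
# Hypothetical raw HTTP response headers, as a scanner might receive them.
raw_response_headers = (
    "HTTP/1.1 200 OK\r\n"
    "Server: Apache/2.4.41 (Ubuntu)\r\n"
    "X-Powered-By: PHP/7.4.3\r\n"
    "\r\n"
)

def extract_banners(raw):
    # Collect headers that commonly reveal the software stack.
    banners = {}
    for line in raw.split("\r\n"):
        if ":" in line:
            key, value = line.split(":", 1)
            if key.strip().lower() in ("server", "x-powered-by"):
                banners[key.strip()] = value.strip()
    return banners

print(extract_banners(raw_response_headers))
# {'Server': 'Apache/2.4.41 (Ubuntu)', 'X-Powered-By': 'PHP/7.4.3'}
```

Run automatically against thousands of hosts, such trivially obtained version strings are exactly the "extensive basis for later targeted attacks" the paragraph describes; suppressing these headers is a common first hardening step.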
How to Make a Web Application Secure
An obvious approach is to retrofit security functions into an existing application. This intention, however, is generally doomed to failure from the outset, because the later integration of security functions into an existing application is in most cases not only difficult but, above all, expensive. One example: a program that until now has not processed its inputs and outputs via centralized interfaces is to be enhanced so that the data can be checked. It is then not sufficient to just add new functions. The developers must start by precisely analyzing the program and then make deep inroads into its basic structures. This is not only tedious, but also harbors the danger of introducing mistakes. Another example is programs that use the session for more than just authentication: in such cases it is not straightforward to renew the session ID after login, which makes the application susceptible to Session Fixation.
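The Session Fixation defense mentioned above is the renewal of the session ID at login: the server must discard whatever ID the client arrived with and bind the authenticated state to a freshly generated one. The in-memory session store below is a minimal sketch, not a real framework's session handling.

```python
import secrets

sessions = {}  # session_id -> user (None = anonymous); illustrative store

def new_session():
    sid = secrets.token_hex(16)
    sessions[sid] = None
    return sid

def login(old_sid, user):
    # Discard the ID the client arrived with (possibly planted by an attacker)
    # and bind the authenticated state to a newly generated ID.
    sessions.pop(old_sid, None)
    fresh_sid = secrets.token_hex(16)
    sessions[fresh_sid] = user
    return fresh_sid

attacker_planted = new_session()       # an ID an attacker could know in advance
victim_sid = login(attacker_planted, "alice")
assert victim_sid != attacker_planted  # the planted ID is now worthless
assert attacker_planted not in sessions
```

If the application uses the session ID for anything beyond authentication state, every one of those uses must survive the ID swap, which is precisely why retrofitting this into grown code is so laborious.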
If existing web applications display weak spots, and the probability is relatively high, then it should be clarified whether it makes business sense to correct them. It should not be forgotten here that other systems are put at risk by the unsecured application. A risk analysis can clarify whether and to what extent the problems must be resolved, or whether further measures should be taken at the same time. Often, however, the original developers are no longer available, and training new developers as well as analyzing the web application results in additional costs.
The situation is not much better with web applications that are developed from scratch. No software program has ever gone into productive operation free of errors or weak spots; the shortcomings are frequently uncovered over time, and by then correcting them is once again time-consuming and expensive. In addition, the application cannot be deactivated during this period if it works as a sales driver or supports an important business process. Despite this, well-written code that sensibly combines effectiveness, functionality and security remains the top priority. The more securely a web application is written, the less rework is needed and the less complex the external security measures that have to be adopted.
The second approach in addition to “secure programming” is the general safeguarding of web applications with a special security system from the time it goes into operation. Such security systems are called Web Application Firewalls (WAF) and safeguard the operation of web applications.
A WAF should protect web applications against attacks via the Hypertext Transfer Protocol (HTTP). As such it represents a special case of application-level firewalls (ALF) or application-level gateways (ALG). In contrast to classic firewalls and Intrusion Detection Systems (IDS), a WAF inspects communication at the application level. Normally, the web application to be protected does not have to be changed.
Secure programming and WAFs are not contradictory, but actually complement each other. By analogy with air traffic: it is without doubt important that the airplane (the application itself) is well serviced and safe. But even the perfect airplane can never replace the security gate at the airport (the Web Application Firewall), which, as the first security layer, considerably reduces the risk of attacks on any weak spots.
After introducing a WAF, it is still advisable to have the security functions checked by penetration testers. This might reveal, for example, that the system can be abused via SQL Injection by entering quotation marks. Correcting this error in the web application itself would be costly; if a WAF is deployed as a protective system, it can instead be configured to filter the quotation marks out of the data traffic. This simple example also shows that it is not sufficient simply to position a WAF in front of the web application without analysis. That would lead to misjudging the security status actually achieved: filtering out special characters does not always prevent an attack based on the SQL Injection principle. System performance would also suffer, because the security rules would have to be set as restrictively as possible in order to exclude all conceivable threats. In this context, too, penetration tests make an important contribution to increasing web application security.
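The caveat above, that stripping special characters does not always stop SQL Injection, can be demonstrated with a numeric-context injection, which needs no quotation marks at all. Schema, data and filter below are invented for illustration.

```python
import sqlite3

# Illustrative database with one public and one internal record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "public"), (2, "internal")])

def strip_quotes(value):
    # Naive WAF-style filter: remove single and double quotes.
    return value.replace("'", "").replace('"', "")

def lookup(item_id):
    # Still vulnerable after quote filtering: the id is inserted unquoted,
    # so a payload like "1 OR 1=1" contains nothing for the filter to remove.
    query = "SELECT name FROM items WHERE id = " + strip_quotes(item_id)
    return [row[0] for row in conn.execute(query)]

print(lookup("1"))         # ['public']
print(lookup("1 OR 1=1"))  # ['public', 'internal'] -- injection despite the filter
```

This is why a WAF rule set has to be derived from an analysis of the application rather than from a generic character blacklist.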
A major advantage of WAFs is that one single system can close the security loopholes for several web applications. If they are run in redundant mode they can also perform load-balancing functions in order to distribute data traffic better and increase the performance of the web applications. With content caching functions they reduce the load on the backend web servers, and via automated compression procedures they reduce the bandwidth requirements of the client browser.
In order to protect the web applications, WAFs filter the data flow between the browser and the web application. If an input pattern emerges that is defined as invalid, the WAF interrupts the data transfer or reacts in another way predefined in its configuration. If, for example, two parameters have been defined for a monitored entry form, the WAF can block all requests that contain three or more parameters. The length and contents of parameters can be checked in the same way. Many attacks can be prevented, or at least made more difficult, simply by specifying general rules about parameter quality, such as maximum length, valid characters and permitted value range.
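The general parameter rules just described, maximum length, permitted characters, value range and an upper bound on the parameter count, might look like the following sketch. The concrete rule values are invented; a real WAF would read them from its configuration.

```python
import re

# Hypothetical per-parameter rules for a monitored entry form.
RULES = {
    "quantity": {"max_len": 3,  "pattern": r"^[0-9]+$", "range": (1, 100)},
    "name":     {"max_len": 32, "pattern": r"^[A-Za-z ]+$"},
}

def check_request(params):
    if len(params) > len(RULES):     # e.g. block a third, unexpected parameter
        return False
    for key, value in params.items():
        rule = RULES.get(key)
        if rule is None:             # unknown parameter name
            return False
        if len(value) > rule["max_len"]:
            return False
        if not re.match(rule["pattern"], value):
            return False
        if "range" in rule:
            lo, hi = rule["range"]
            if not lo <= int(value) <= hi:
                return False
    return True

print(check_request({"quantity": "5", "name": "Alice"}))         # True
print(check_request({"quantity": "5 OR 1=1", "name": "Alice"}))  # False (pattern/length)
print(check_request({"quantity": "999", "name": "Alice"}))       # False (value range)
```

Even these few generic checks already reject typical injection payloads for this form, without the WAF knowing anything about the application's internals.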
Several WAFs have the option of monitoring the data sent by the web server to the browser in such a way that they can “learn” its nature. These filters can then, to a certain extent, automatically prevent malicious code from reaching the browser, for example when a web application does not sufficiently check the original data. Learning Mode is a profiling mode that indexes every URL and parameter in a stream of traffic in order to build a whitelist of acceptable URLs and parameters. In practice, however, a whitelist-only approach is quite cumbersome, requiring constant re-learning whenever the application changes. As a result, whitelist-only approaches quickly become outdated due to the constant tuning required to maintain the whitelist profiles. The opposite blacklist-only approach, on the other hand, offers attackers too many loopholes. Consequently the ideal solution relies on a combination of whitelisting and blacklisting. This can be made easy to use with templated negative security profiles (e.g. for standard usages such as Outlook Web Access, SharePoint or Oracle applications), augmented by a whitelist for high-value sub-sections such as an order entry page.
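The combined approach just described can be sketched as follows: a learned whitelist governs a high-value page, while a generic blacklist of known attack signatures covers everything else. All URLs, parameter names and patterns below are invented for illustration.

```python
import re

# Hypothetical learned whitelist: acceptable parameter names per protected URL.
WHITELIST = {"/order": {"item_id", "quantity"}}

# Hypothetical blacklist of known attack signatures.
BLACKLIST = [re.compile(p, re.IGNORECASE)
             for p in (r"<script", r"union\s+select", r"\.\./")]

def allow(url, params):
    if url in WHITELIST:
        # Whitelist mode: only the learned parameter names may appear.
        return set(params) <= WHITELIST[url]
    # Blacklist mode: reject requests containing known attack signatures.
    return not any(p.search(v) for v in params.values() for p in BLACKLIST)

print(allow("/order", {"item_id": "7", "quantity": "2"}))  # True
print(allow("/order", {"item_id": "7", "debug": "1"}))     # False (unlearned name)
print(allow("/search", {"q": "union select password"}))    # False (signature match)
print(allow("/search", {"q": "harmless words"}))           # True
```

The whitelist only needs re-learning when the order page itself changes, while the rest of the site stays protected by the low-maintenance blacklist, which is the division of labor the paragraph argues for.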
Demands on Penetration Testers
When assessing a WAF deployment, penetration testers should clarify questions such as the following:
- Does the system have a Web Application Firewall?
- Does the web traffic occur via a WAF Proxy function?
- Are the web servers shielded against direct access by attackers?
- Is the data traffic SSL-encrypted, even if the application or the server itself does not support this?
- Are all known and unknown threats blocked?
A further point is protection against data theft. This involves checking whether the protection mechanism inspects the outgoing data traffic for the possible exfiltration of sensitive data and then stops it.
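Such an outbound check might, for example, scan server responses for patterns resembling sensitive data before they leave the perimeter. The credit-card heuristic below is deliberately simplistic and purely illustrative; real systems use far more robust detection and may block, log or alert instead of masking.

```python
import re

# Naive illustrative pattern: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def filter_response(body):
    # Mask anything resembling a card number before the response leaves the server.
    return CARD_PATTERN.sub("[REDACTED]", body)

leaky = "Order confirmed. Card on file: 4111 1111 1111 1111."
print(filter_response(leaky))  # the card number is replaced by [REDACTED]
```

A penetration test of this mechanism would deliberately provoke such leaks, for example via the SQL Injection techniques discussed earlier, and verify that the sensitive data never reaches the client.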