The Decorator Pattern

2014-01-06

One design pattern that I don’t see being used very often is Decorator.

I’m not sure why this pattern isn’t more popular, as it’s quite handy.

The Decorator pattern allows one to add functionality to an object in a controlled manner. This works at runtime, even with statically typed languages!

The decorator pattern is an alternative to subclassing. Subclassing adds behavior at compile time, and the change affects all instances of the original class; decorating can provide new behavior at run-time for individual objects.

The Decorator pattern is a good tool for adhering to the open/closed principle.

A couple of examples will show the value of this pattern.

Example 1: HTTP Authentication

Imagine an HTTP client, for example one that talks to a RESTful service.

Some parts of the service are publicly accessible, but some require the user to log in. The RESTful service responds with a 401 Unauthorized status code when the client tries to access a protected resource.

Changing the client to handle the 401 leads to duplication, since every call could potentially require authentication. So we should extract the authentication code into one place. Where would that place be, though?

Here’s where the Decorator pattern comes in:

public class AuthenticatingHttpClient
    implements HttpClient {

  private final HttpClient wrapped;

  public AuthenticatingHttpClient(HttpClient wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public Response execute(Request request) {
    Response response = wrapped.execute(request);
    if (response.getStatusCode() == 401) {
      authenticate();
      response = wrapped.execute(request);
    }
    return response;
  }

  protected void authenticate() {
    // ...
  }

}

A REST client now never has to worry about authentication, since the AuthenticatingHttpClient handles that.
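
Using the decorator is simply a matter of wrapping the object you already have. Here’s a minimal sketch, where DefaultHttpClient stands in for whatever basic HttpClient implementation you use:

// Wrap the basic client; callers only see the HttpClient interface
HttpClient client = new AuthenticatingHttpClient(new DefaultHttpClient());
Response response = client.execute(request); // re-authenticates on 401 as needed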

Example 2: Caching Authorization Decisions

OK, so the user has logged in, and the REST server knows her identity. It may decide to allow access to a certain resource to one person, but not to another.

In other words, it may implement authorization, perhaps using XACML. In that case, a Policy Decision Point (PDP) is responsible for deciding on access requests.

Checking permissions is often expensive, especially when the permissions become more fine-grained and the access policies more complex. Since access policies usually don’t change very often, this is a perfect candidate for caching.

This is another instance where the Decorator pattern may come in handy:

public class CachingPdp implements Pdp {

  private final Pdp wrapped;

  public CachingPdp(Pdp wrapped) {
    this.wrapped = wrapped;
  }

  @Override
  public ResponseContext decide(
      RequestContext request) {
    ResponseContext response = getCached(request);
    if (response == null) {
      response = wrapped.decide(request);
      cache(request, response);
    }
    return response;
  }

  protected ResponseContext getCached(
      RequestContext request) {
    // ...
  }

  protected void cache(RequestContext request, 
      ResponseContext response) {
    // ...
  }

}

As you can see, the code is very similar to the first example, which is why we call this a pattern.
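
Note that decorators also compose: since each decorator implements the interface it wraps, you can stack them. A sketch, assuming some base XacmlPdp implementation:

// Code that depends on the Pdp interface never knows caching was added
Pdp pdp = new CachingPdp(new XacmlPdp());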

As you may have guessed from these two examples, the Decorator pattern is really useful for implementing cross-cutting concerns, like the security features of authentication, authorization, and auditing, but that’s certainly not the only place where it shines.

If you look carefully, I’m sure you’ll be able to spot many more opportunities for putting this pattern to work.


REST 101 For Developers

2013-09-16

Local Code Execution

Functions in high-level languages like C are compiled into procedures in assembly. They add a level of indirection that frees us from having to think about memory addresses.

Methods and polymorphism in object-oriented languages like Java add another level of indirection that frees us from having to think about the specific variant of a set of similar functions.

Despite these indirections, methods are basically still procedure calls, telling the computer to switch execution flow from one memory location to another. All of this happens in the same process running on the same computer.

Remote Code Execution

This is fundamentally different from switching execution to another process or another computer. Especially the latter is very different, as the other computer may not even run the same operating system through which programs access memory.

It is therefore no surprise that mechanisms of remote code execution that try to hide this difference as much as possible, like RMI or SOAP, have largely failed. Such technologies employ what is known as Remote Procedure Calls (RPCs).

One reason we must distinguish between local and remote procedure calls is that RPCs are a lot slower.

For most practical applications, this changes the nature of the calls you make: you’ll want to make fewer remote calls that are more coarse-grained.

Another reason is more organizational than technical in nature.

When the code you’re calling lives in another process on another computer, chances are that the other process is written and deployed by someone else. For the two pieces of code to cooperate well, some form of coordination is required. That’s the price we pay for coupling.

Coordinating Change With Interfaces

We can also see this problem in a single process, for instance when code is deployed in different jar files. If you upgrade a third party jar file that your code depends on, you may need to change your code to keep everything working.

Such coordination is annoying. It would be much nicer if we could simply deploy the latest security patch of that jar without having to worry about breaking our code. Fortunately, we can if we’re careful.

Interfaces in languages like Java separate the public and private parts of code.

The public part is what clients depend on, so you must evolve interfaces in careful ways to avoid breaking clients.

The private part, in contrast, can be changed at will.

From Interfaces to Services

In OSGi, interfaces are the basis for what are called micro-services. By publishing services in a registry, we can remove the need for clients to know what object implements a given interface. In other words, clients can discover the identity of the object that provides the service. The service registry becomes our entry point for accessing functionality.
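
For illustration, here’s roughly what publication and discovery through the OSGi service registry look like. OrderService and OrderServiceImpl are hypothetical names, and context is an org.osgi.framework.BundleContext:

// Publishing: the implementing bundle registers its object under the interface
context.registerService(OrderService.class, new OrderServiceImpl(), null);

// Discovery: the client only ever names the interface, never the implementation
ServiceReference<OrderService> ref =
    context.getServiceReference(OrderService.class);
OrderService service = context.getService(ref);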

There is a reason these interfaces are referred to as micro-services: they are miniature versions of the services that make up a Service Oriented Architecture (SOA).

A straightforward extrapolation of micro-services to “SOA services” leads to RPC-style implementations, for instance with SOAP. However, we’ve established earlier that RPCs are not the best way to invoke remote code.

Enter REST.

RESTful Services

Representational State Transfer (REST) is an architectural style that brings the advantages of the Web to the world of programs.

There is no denying the scalability of the Web, so this is an interesting angle.

Instead of explaining REST as it’s usually done by exploring its architectural constraints, let’s compare it to micro-services.

A well-designed RESTful service has a single entry point, like the micro-services registry. This entry point may take the form of a home resource.

We access the home resource like any other resource: through a representation. A representation is a series of bytes that we need to interpret. The rules for this interpretation are given by the media type.

Most RESTful services these days serve representations based on JSON or XML. The media type of a resource compares to the interface of an object.

Some interfaces contain methods that give us access to other interfaces. Similarly, a representation of a resource may contain hyperlinks to other resources.

Code-Based vs Data-Based Services

The difference between REST and SOAP is now becoming apparent.

In SOAP, like in micro-services, the interface is made up of methods. In other words, it’s code based.

In REST, on the other hand, the interface is made up of code and data. We’ve already seen the data: the representation described by the media type. The code is the uniform interface, which means that it’s the same (uniform) for all resources.

In practice, the uniform interface consists of the HTTP methods GET, POST, PUT, and DELETE.

Since the uniform interface is fixed for all resources, the real juice in any RESTful service is not in the code, but in the data: the media type.

Just as there are rules for evolving a Java interface, there are rules for evolving a media type, for example for XML-based media types. (From this it follows that you can’t use XML Schema validation for XML-based media types.)

Uniform Resource Identifiers

So far I haven’t mentioned Uniform Resource Identifiers (URIs). The documentation of many so-called RESTful services may give you the impression that they are important.

However, since URIs identify resources, their equivalent in micro-services are the identities of the objects implementing the interfaces.

Hopefully this shows that clients shouldn’t care about URIs. Only the URI of the home resource is important.

The representation of the home resource contains links to other resources. The meaning of those links is indicated by link relations.

Through its understanding of link relations, a client can decide which links it wants to follow and discover their URIs from the representation.
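
For example, a home resource representation in a hypothetical JSON-based media type might look like this, with link relations telling the client what each link means:

{
  "links": [
    { "rel": "orders",    "href": "https://api.example.com/orders" },
    { "rel": "customers", "href": "https://api.example.com/customers" }
  ]
}

The client hard-codes only the home resource URI and its understanding of the orders and customers link relations; it discovers everything else.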

Versions of Services

As much as possible, we should follow the rules for evolving media types and not introduce any breaking changes.

However, sometimes that might be unavoidable. We should then create a new version of the service.

Since URIs are not part of the public interface of a RESTful API, they are not the right vehicle for relaying version information. The correct way to indicate major (i.e. non-compatible) versions of an API can be derived by comparison with micro-services.

Whenever a service introduces a breaking change, it should change its interface. In a RESTful API, this means changing the media type. The client can then use content negotiation to request a media type it understands.
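
For instance, a client that understands version 2 of some hypothetical vendor media type could send

Accept: application/vnd.example.order+json; version=2

while an older client keeps sending version=1. The same server can serve both, each in the syntax that client understands.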

What Do You Think?

Literature explaining how to design and document code-based interfaces is readily available.

This is not the case for data-based interfaces like media types.

With RESTful services becoming ever more popular, that is a gap that needs filling. I’ll get back to this topic in the future.

How do you design your services? How do you document them? Please share your ideas in the comments.


How To Implement Input Validation For REST resources

2013-08-19

The SaaS platform I’m working on has a RESTful interface that accepts XML payloads.

Implementing REST Resources

For a Java shop like us, it makes sense to use JAX-B to generate JavaBean classes from an XML Schema.

Working with XML (and JSON) payloads using JAX-B is very easy in a JAX-RS environment like Jersey:

@Path("orders")
public class OrdersResource {
  @POST
  @Consumes({ "application/xml", "application/json" })
  public void place(Order order) {
    // Jersey unmarshals the XML payload into the Order 
    // JavaBean, allowing us to write type-safe code 
    // using Order's getters and setters.
    int quantity = order.getQuantity();
    // ...
  }
}

(Note that you shouldn’t use these generic media types, but that’s a discussion for another day.)

The remainder of this post assumes JAX-B, but its main point is valid for other technologies as well. Whatever you do, please don’t use XMLDecoder, since that is open to a host of vulnerabilities.

Securing REST Resources

Let’s suppose the order’s quantity is used for billing, and we want to prevent people from stealing our money by entering a negative amount.

We can do that with input validation, one of the most important tools in the AppSec toolkit. Let’s look at some ways to implement it.

Input Validation With XML Schema

We could rely on XML Schema for validation, but XML Schema can only validate so much.

Validating individual properties will probably work fine, but things get hairy when we want to validate relations between properties. For maximum flexibility, we’d like to use Java to express constraints.

More importantly, schema validation is generally not a good idea in a REST service.

A major goal of REST is to decouple client and server so that they can evolve separately.

If we validate against a schema, then a new client that sends a new property would break against an old server that doesn’t understand the new property. It’s usually better to silently ignore properties you don’t understand.

JAX-B does this right, and also the other way around: properties that are not sent by an old client end up as null. Consequently, the new server must be careful to handle null values properly.
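
A quick sketch of what that means in practice; discount stands for a hypothetical property that only newer clients send:

// Hypothetical property added in a newer version of the media type;
// older clients don't send it, so it ends up as null after unmarshalling
Integer discount = order.getDiscount();
int effectiveDiscount = discount == null ? 0 : discount.intValue();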

Input Validation With Bean Validation

If we can’t use schema validation, then what about using JSR 303 Bean Validation?

Jersey supports Bean Validation by adding the jersey-bean-validation jar to your classpath.

There is an unofficial Maven plugin to add Bean Validation annotations to the JAX-B generated classes, but I’d rather use something better supported that also works with Gradle.

So let’s turn things around. We’ll handcraft our JavaBean and generate the XML Schema from the bean for documentation:

@XmlRootElement(name = "order")
public class Order {
  @XmlElement
  @Min(1)
  public int quantity;
}
@Path("orders")
public class OrdersResource {
  @POST
  @Consumes({ "application/xml", "application/json" })
  public void place(@Valid Order order) {
    // Jersey recognizes the @Valid annotation and
    // returns 400 when the JavaBean is not valid
  }
}

Any attempt to POST an order with a non-positive quantity will now give a 400 Bad Request status.

Now suppose we want to allow clients to change their pending orders. We’d use PATCH or PUT to update individual order properties, like quantity:

@Path("orders")
public class OrdersResource {
  @Path("{id}")
  @PUT
  @Consumes("application/x-www-form-urlencoded")
  public Order update(@PathParam("id") String id, 
      @Min(1) @FormParam("quantity") int quantity) {
    // ...
  }
}

We need to add the @Min annotation here too, which is duplication. To make this DRY, we can turn quantity into a class that is responsible for validation:

@Path("orders")
public class OrdersResource {
  @Path("{id}")
  @PUT
  @Consumes("application/x-www-form-urlencoded")
  public Order update(@PathParam("id") String id, 
      @FormParam("quantity")
      Quantity quantity) {
    // ...
  }
}
@XmlRootElement(name = "order")
public class Order {
  @XmlElement
  public Quantity quantity;
}
public class Quantity {
  private int value;

  public Quantity() { }

  public Quantity(String value) {
    try {
      setValue(Integer.parseInt(value));
    } catch (ValidationException e) {
      throw new IllegalArgumentException(e);
    }
  }

  public int getValue() {
    return value;
  }

  @XmlValue
  public void setValue(int value) 
      throws ValidationException {
    if (value < 1) {
      throw new ValidationException(
          "Quantity value must be positive, but is: " 
          + value);
    }
    this.value = value;
  }
}

We need a public no-arg constructor for JAX-B to be able to unmarshal the payload into a JavaBean and another constructor that takes a String for the @FormParam to work.

setValue() throws javax.xml.bind.ValidationException so that JAX-B will stop unmarshalling. However, Jersey returns a 500 Internal Server Error when it sees an exception.

We can fix that by mapping validation exceptions onto 400 status codes using an exception mapper. While we’re at it, let’s do the same for IllegalArgumentException:

@Provider
public class DefaultExceptionMapper 
    implements ExceptionMapper<Throwable> {

  @Override
  public Response toResponse(Throwable exception) {
    Throwable badRequestException 
        = getBadRequestException(exception);
    if (badRequestException != null) {
      return Response.status(Status.BAD_REQUEST)
          .entity(badRequestException.getMessage())
          .build();
    }
    if (exception instanceof WebApplicationException) {
      return ((WebApplicationException)exception)
          .getResponse();
    }
    return Response.serverError()
        .entity(exception.getMessage())
        .build();
  }

  private Throwable getBadRequestException(
      Throwable exception) {
    if (exception instanceof ValidationException) {
      return exception;
    }
    Throwable cause = exception.getCause();
    if (cause != null && cause != exception) {
      Throwable result = getBadRequestException(cause);
      if (result != null) {
        return result;
      }
    }
    if (exception instanceof IllegalArgumentException) {
      return exception;
    }
    if (exception instanceof BadRequestException) {
      return exception;
    }
    return null;
  }

}

Input Validation By Domain Objects

Even though the approach outlined above will work quite well for many applications, it is fundamentally flawed.

At first sight, proponents of Domain-Driven Design (DDD) might like the idea of creating the Quantity class.

But the Order and Quantity classes do not model domain concepts; they model REST representations. This distinction may be subtle, but it is important.

DDD deals with domain concepts, while REST deals with representations of those concepts. Domain concepts are discovered, but representations are designed and are subject to all kinds of trade-offs.

For instance, a collection REST resource may use paging to prevent sending too much data over the wire. Another REST resource may combine several domain concepts to make the client-server protocol less chatty.

A REST resource may even have no corresponding domain concept at all. For example, a POST may return 202 Accepted and point to a REST resource that represents the progress of an asynchronous transaction.

Domain objects need to capture the ubiquitous language as closely as possible, and must be free from trade-offs: their only concern is making the functionality work.

When designing REST resources, on the other hand, one needs to make trade-offs to meet non-functional requirements like performance, scalability, and evolvability.

That’s why I don’t think an approach like RESTful Objects will work. (For similar reasons, I don’t believe in Naked Objects for the UI.)

Adding validation to the JavaBeans that are our resource representations means that those beans now have two reasons to change, which is a clear violation of the Single Responsibility Principle.

We get a much cleaner architecture when we use JAX-B JavaBeans only for our REST representations and create separate domain objects that handle validation.

Putting validation in domain objects is what Dan Bergh Johnsson refers to as Domain-Driven Security.

In this approach, primitive types are replaced with value objects. (Some people even argue against using any Strings at all.)

At first it may seem overkill to create a whole new class to hold a single integer, but I urge you to give it a try. You may find that getting rid of primitive obsession provides value even beyond validation.

What do you think?

How do you handle input validation in your RESTful services? What do you think of Domain-Driven Security? Please leave a comment.


Securing HTTP-based APIs With Signatures

2013-07-29

I work at EMC on a platform on top of which SaaS solutions can be built.

This platform has a RESTful HTTP-based API, just like a growing number of other applications.

With development frameworks like JAX-RS, it’s relatively easy to build such APIs.

It is not, however, easy to build them right.

Issues With Building HTTP-based APIs

The problem isn’t so much in getting the functionality out there. We know how to develop software and the available REST/HTTP frameworks and libraries make it easy to expose the functionality.

That’s only half the story, however. There are many more -ilities to consider.

The REST architectural style addresses some of those, like scalability and evolvability.

Many HTTP-based APIs today claim to be RESTful, but in fact are not. This means that they are not reaping all of the benefits that REST can bring.

I’ll be talking more about how to help developers meet all the constraints of the REST architectural style in future posts.

Today I want to focus on another non-functional aspect of APIs: security.

Security of HTTP-based APIs

In security, we care about the CIA triad: Confidentiality, Integrity, and Availability.

Availability of web services is not dramatically different from that of web applications, which is relatively well understood. We have our clusters, load balancers, and what not, and usually we are in good shape.

Confidentiality and integrity, on the other hand, both require proper authentication, and here matters get more interesting.

Authentication of HTTP-based APIs

For authentication in an HTTP world, it makes sense to look at HTTP Authentication.

This RFC describes Basic and Digest authentication. Both have their weaknesses, which is why you see many APIs use alternatives.

Luckily, these alternatives can use the same basic machinery defined in the RFC. This machinery includes status code 401 Unauthorized, and the WWW-Authenticate, Authentication-Info, and Authorization headers. Note that the Authorization header is unfortunately misnamed, since it’s used for authentication, not authorization.

The final piece of the puzzle is the custom authentication scheme. For example, Amazon S3 authentication uses the AWS custom scheme.
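
A hypothetical exchange with such a custom scheme (here simply called Example) puts all of that machinery together:

GET /protected/resource HTTP/1.1

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Example realm="api"

GET /protected/resource HTTP/1.1
Authorization: Example keyId="alice", signature="base64..."

HTTP/1.1 200 OK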

Authentication of HTTP-based APIs Using Signatures

The AWS scheme relies on signatures. Other services, like EMC Atmos, use the same approach.

It is therefore good to see that a new IETF draft has been proposed to standardize the use of signatures in HTTP-based APIs.

Standardization enables the construction of frameworks and libraries, which will drive down the cost of implementing authentication and will make it easier to build more secure APIs.
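
To give a feel for what such a library does under the hood, here’s a minimal sketch of the signing step. The string to sign is made up for illustration; real schemes like AWS define its contents precisely:

import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import javax.xml.bind.DatatypeConverter;

public class RequestSigner {

  public String sign(String stringToSign, byte[] secretKey)
      throws Exception {
    // Sign selected parts of the request, e.g. method, path, and date
    Mac mac = Mac.getInstance("HmacSHA256");
    mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
    byte[] signature = mac.doFinal(
        stringToSign.getBytes(StandardCharsets.UTF_8));
    return DatatypeConverter.printBase64Binary(signature);
  }

}

The resulting value goes into the Authorization header; the server recomputes the signature with its copy of the key and compares.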

What do you think?

If you’re in the business of building and/or consuming HTTP APIs (and who isn’t these days?), then please go ahead and read the draft and provide feedback.

I’m also interested in your experiences with building or consuming secure HTTP APIs. Please leave a comment on this post.


Is XACML Dead?

2013-05-08

XACML is dead. Or so writes Forrester’s Andras Cser.

Before I take a critical look at the reasons underlying this claim, let me disclose that I’m a member of the OASIS committee that defines the XACML specification. So I may be a little biased.

Lack of broad adoption

The first reason for claiming XACML dead is the lack of adoption. Being a techie, I don’t see a lot of customers, so I have to assume Forrester knows better than me.

At last year’s XACML Seminar in the Netherlands, there were indeed not many people who actually used XACML, but the room was filled with people who were at least interested enough to pay to hear about practical experiences with XACML.

I also know that XACML is in use at large enterprises like Bank of America, Bell Helicopter, and Boeing, to name just some Bs. And the supplier side is certainly not the problem.

So there is some adoption, but I grant that it’s not broad.

Inability to serve the federated, extended enterprise

XACML was designed to meet the authorization needs of the monolithic enterprise where all users are managed centrally in AD.

I don’t understand this statement at all, as there is nothing in the XACML spec that depends on centrally managed users.

Especially in combination with SAML, XACML can handle federated scenarios perfectly fine.

In my current project, we’re using XACML in a multi-tenant environment where each tenant uses their own identity provider. No problem.

PDP does a lot of complex things that it does not inform the PEP about

The PDP is apparently supposed to tell the PEP why access is denied. I don’t get that: I’ve never seen an application that greyed out a button and included the text “You need the admin role to perform this operation”.

Maybe this is about testing access control policies. Or maybe I just don’t understand the problem. I’d love to learn more about this.

Not suitable for cloud and distributed deployment

I guess what they mean is that fine-grained access control doesn’t work well in high latency environments. If so, sure.

XACML doesn’t prescribe how fine-grained your policies have to be, however, so I can’t see how this could be XACML’s fault. That’s like blaming my keyboard for allowing me to type more characters than fit in a tweet.

Actually, I’d say that XACML works very well in the cloud. And with the recently approved REST profile and the upcoming JSON profile, XACML will be even better suited for cloud solutions.

Commercial support is non-existent

This is lack of adoption again.

BTW, absolute claims like “there is no software library with PEP support” turn you into an easy target. All it takes is one counter example to prove you wrong.

Refactoring and rebuilding existing in-house applications is not an option

This, I think, is the main reason for slow adoption: legacy applications create inertia. We see the same thing with SSO. Even today, there are EMC internal applications that require me to maintain separate credentials.

The problem is worse for authorization. Authentication is a one-time thing at the start of a session, but authorization happens all the time. There are simply more places in an application that require modification.

There may be some light at the end of the tunnel, however.

History shows that inertia can be overcome by a large enough force.

That force might be the changing threat landscape. We’ll see.

OAuth supports the mobile application endpoint in a lightweight manner

OAuth does well in the mobile space. One reason is that mobile apps usually provide focused functionality that doesn’t require fine-grained access control decisions. It remains to be seen whether that continues to be true as mobile apps get more advanced.

Of course, if all your access control needs can be implemented with one yes/no question, then using XACML is overkill. That doesn’t, however, mean there is no place for XACML in the many, many places where life is not that simple.

What do you think?

All in all, I’m certainly not convinced by Forrester’s claim that XACML is dead. Are you? If XACML were buried, what would you use instead?

Update: Others have joined in the discussion and confirmed that XACML is not dead:

  • Gary from XACML vendor Axiomatics
  • Danny from XACML vendor Dell
  • Anil from open source XACML implementation JBoss PicketBox
  • Ian from analyst Gartner

Update 2: More people joined the discussion. One is confused, one is confusing, and Forrester’s Eve Maler (of SGML and UMA fame) backs her colleague.

Update 3: Another analyst joins the discussion: KuppingerCole doesn’t think XACML is dead either.

Update 4: CA keeps supporting XACML in their SiteMinder product.


Securing Mobile Java Code

2012-10-01

Mobile code is code sourced from remote, possibly untrusted, systems and executed on your local system. Mobile code is an optional constraint in the REST architectural style.

This post investigates our options for securely running mobile code in general, and for Java in particular.

Mobile Code

Examples of mobile code range from JavaScript fragments found in web pages to plug-ins for applications like Firefox and Eclipse.

Plug-ins turn a simple application into an extensible platform, which is one reason they are so popular. If you are going to support plug-ins in your application, then you should understand the security implications of doing so.

Types of Mobile Code

Mobile code comes in different forms. Some mobile code is source code, like JavaScript.

Mobile code in source form requires an interpreter to execute, like JägerMonkey in Firefox.

Mobile code can also be found in the form of executable code.

This can either be intermediate code, like Java applets, or native binary code, like Adobe’s Flash Player.

Active Content Delivers Mobile Code

A concept that is related to mobile code is active content, which is defined by NIST as

Electronic documents that can carry out or trigger actions automatically on a computer platform without the intervention of a user.

Examples of active content are HTML pages or PDF documents containing scripts and Office documents containing macros.

Active content is a vehicle for delivering mobile code, which makes it a popular technology for use in phishing attacks.

Security Issues With Mobile Code

There are two classes of security problems associated with mobile code.

The first deals with getting the code safely from the remote to the local system. We need to control who may initiate the code transfer, for example, and we must ensure the confidentiality and integrity of the transferred code.

From the point of view of this class of issues, mobile code is just data, and we can rely on the usual solutions for securing the transfer. For instance, XACML may be used to control who may initiate the transfer, and SSL/TLS may be used to protect the actual transfer.

It gets more interesting with the second class of issues, where we deal with executing the mobile code. Since the remote source is potentially untrusted, we’d like to limit what the code can do. For instance, we probably don’t want to allow mobile code to send credit card data to its developer.

However, it’s not just malicious code we want to protect ourselves from.

A simple bug that causes the mobile code to go into an infinite loop will threaten your application’s availability.

The bottom line is that if you want your application to maintain a certain level of security, then you must make sure that any third-party code meets that same standard. This includes mobile code and embedded libraries and components.

That’s why third-party code should get a prominent place in a Security Development Lifecycle (SDL).

Safely Executing Mobile Code

In general, we have four types of safeguards at our disposal to ensure the safe execution of mobile code:

  • Proofs
  • Signatures
  • Filters
  • Cages (sandboxes)

We will look at each of those in the context of mobile Java code.

Proofs

It’s theoretically possible to present a formal proof that some piece of code possesses certain safety properties. This proof could be tied to the code, and the combination is then known as proof carrying code.

After download, the proof could be checked against the code by a verifier. Only code that passes the verification check would be allowed to execute.

Updated for Bas’ comment:
Since Java 6, the StackMapTable attribute implements a limited form of proof carrying code where the type safety of the Java code is verified. However, this is certainly not enough to guarantee that the code is secure, and other approaches remain necessary.

Signatures

One of those approaches is to verify that the mobile code is made by a trusted source and that it has not been tampered with.

For Java code, this means wrapping the code in a jar file and signing and verifying the jar.
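
With the standard JDK tools that is a matter of (keystore and alias names are placeholders):

jarsigner -keystore my.keystore mobile-code.jar mykey
jarsigner -verify mobile-code.jar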

Filters

We can limit what mobile content can be downloaded. Since we want to use signatures, we should only accept jar files. Other media types, including individual .class files, can simply be filtered out.

Next, we can filter out downloaded jar files that are not signed, or signed with a certificate that we don’t trust.

We can also use anti-virus software to scan the verified jars for known malware.

Finally, we can use a firewall to filter out any outbound requests using protocols/ports/hosts that we know our code will never need. That limits what any code can do, including the mobile code.

Cages/Sandboxes

After restricting what mobile code may run at all, we should take the next step: prevent the running code from doing harm by restricting what it can do.

We can intercept calls at run-time and block any that would violate our security policy. In other words, we put the mobile code in a cage or sandbox.

In Java, cages can be implemented using the Security Manager. In a future post, we’ll take a closer look at how to do this.
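
As a tiny preview, here’s a sketch of a custom security manager that denies all outbound network connections. A real setup would use policy files and distinguish trusted code from mobile code:

public class MobileCodeSecurityManager extends SecurityManager {

  @Override
  public void checkConnect(String host, int port) {
    // Deny all outgoing connections, no matter which code makes them
    throw new SecurityException(
        "Network access denied: " + host + ":" + port);
  }

}

Installing it with System.setSecurityManager() activates the cage.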


Supporting Multiple XACML Representations

2012-09-17

We’re in the process of registering an XML media type for the eXtensible Access Control Markup Language (XACML). Simultaneously, the XACML Technical Committee is working on a JSON format.

Both media types are useful in the context of another committee effort, the REST profile. This post explains what benefit these profiles will bring once approved, and how to support them in clients and servers.

Media Types Support Content Negotiation

With the REST profile, any application can communicate with a Policy Decision Point (PDP) in a RESTful manner. The media types make it possible to communicate with such a PDP in a manner that is most convenient for the client, using a process called content negotiation.

For instance, a web application that is mainly implemented in JavaScript may prefer to use JSON for communication with the PDP, to avoid having to bring in infrastructure to deal with XML.

Content negotiation is not just a convenience feature, however. It also facilitates evolution.

A server with many clients that understand 2.0 may also start serving 3.0, for instance. The older clients stay functional using 2.0, whereas newer clients can communicate in 3.0 syntax with the same server.

This avoids having to upgrade all the clients at the same time as the server.

So how does a server that supports multiple versions and/or formats know which one to serve to a particular client? The answer is the Accept HTTP header. For instance, a client can send Accept: application/xacml+xml; version=2.0 to get an XACML 2.0 XML format, or Accept: application/xacml+json; version=3.0 to get an XACML 3.0 JSON answer.

The value for the Accept header is a list of media types that are acceptable to the client, in decreasing order of precedence. For instance, a new client could prefer 3.0, but still work with older servers that only support 2.0 by sending Accept: application/xacml+xml; version=3.0, application/xacml+xml; version=2.0.

Supporting Multiple Versions and Formats

So there is value for both servers and clients to support multiple versions and/or formats. Now how does one go about implementing this? The short answer is: using indirection.

The longer answer is to make an abstraction for the version/format combination. We’ll dub this abstraction a representation.

For instance, an XACML request is really not much more than a collection of categorized attributes, while a response is basically a collection of results.

Instead of working with, say, the XACML 3.0 XML form of a request, the client or server code should work with the abstract representation. For each version/format combination, you then add a parser and a builder.

The parser reads the concrete syntax and creates the abstract representation from it. Conversely, the builder takes the abstract representation and converts it to the desired concrete syntax.
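
In Java, the indirection could look like this (the names are mine, not from any XACML specification):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Abstract representation: a collection of categorized attributes
public interface RequestRepresentation {
  // ...
}

// One parser and one builder per version/format combination
public interface RequestParser {
  RequestRepresentation parse(InputStream concreteSyntax) throws IOException;
}

public interface RequestBuilder {
  void build(RequestRepresentation request, OutputStream concreteSyntax)
      throws IOException;
}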

In many cases, you can re-use parts of the parsers and builders between representations. For instance, all the XML formats of XACML have in common that they require XML parsing/serialization.

In a design like this, no code ever needs to be modified when a new version of the specification or a new serialization format comes out. All you have to do is add a parser and a builder, and all the other code can stay the way it is.

The only exception is when a new version introduces new capabilities and your code wants to use those. In that case, you probably must also change the abstract representation to accommodate the new functionality.


XACML Vendor: Axiomatics

2012-07-30

This is the second in a series of posts where I interview XACML vendors. This time it’s Axiomatics’ turn. Their CTO Erik Rissanen is editor of the XACML 3.0 specification.

Why does the world need XACML? What benefits do your customers realize?

The world needs a standardized way to externalize authorization processing from the rest of the application logic – this is where the XACML standard comes in. Customers have different requirements for implementing externalized authorization and, therefore, can derive different benefits.

Here are some of the key benefits we have seen for customers:

  • The ability to share sensitive data with customers, partners and supply chain members
  • Implement fine grained authorization at every level of the application – presentation, application, middleware and data tiers
  • Deploy applications with clearly audit-able access control
  • Build and deploy applications and services faster than the competition
  • Move workloads more easily to the most efficient compute, storage or data capacity
  • Protect access to applications and resources regardless of where they are hosted
  • Implement access control consistently across all layers of an application as well as across application environments deployed on different platforms
  • Exploit dynamic access controls that are much more flexible than roles

What products do you have in the XACML space?

Axiomatics has three core products today:

  • The Axiomatics Policy Server which is a modular XACML-driven authorization server. It fully implements XACML 2.0 and XACML 3.0 and respects the XACML architecture.
  • The Axiomatics Policy Auditor which is a web-based product administrators and business users alike can use to analyze XACML policies to identify security gaps or create a list of entitlements. Generally, the auditor helps answer the question “How can an access be granted?”
  • The Axiomatics Reverse Query takes on a novel approach to authorization. Where one typically creates binary requests (Can Alice do this?) and the Axiomatics Policy Server would reply with a Yes or No, the Axiomatics Reverse Query helps invert the process to tackle the list question. We have noticed that our customers sometimes want to know the list of users that have access to an application or the list of resources a given user can access. This is what we call the list question or reverse querying.
    The Axiomatics Reverse Query is an SDK that requires integration with a given application. With this in mind, Axiomatics engineering have developed extra glue / integration layers to plug into target environments and products. For instance, Axiomatics will release shortly the Axiomatics Reverse Query for Oracle Virtual Private Database. Axiomatics also uses the SDK to drive authorization inside Windows Server 2012. And there are many more integration options we have yet to explore.

In addition, Axiomatics has now released a free tool and a new language called ALFA, the Axiomatics Language for Authorization. ALFA is a lightweight version of XACML with shorthand notations. It borrows much of its syntax from programming languages developers are most familiar with e.g. Java and C#. The tool is a free plugin for the Eclipse IDE which lets developers author ALFA using the usual Eclipse features such as syntax checking and auto-complete. The plugin eventually generates XACML 3.0 conformant policies on the fly from the ALFA the developers write. Axiomatics published a video on its YouTube channel showing how to use the tool.

What versions of the spec do you support? What optional parts? What profiles?

Axiomatics fully supports XACML 2.0 and XACML 3.0 including all optional profiles as specified in our attestation email.

What sets your product(s) apart from the competition?

Axiomatics has historically been what we could call a pure play XACML vendor. This reflects our dedication to the standard and the fact that Axiomatics implements the XACML core and all profiles – no other vendor has adopted such a comprehensive strategy. Furthermore, Axiomatics only uses the XACML policy language, rather than attempting to convert between XACML and one or more proprietary, legacy policy language formats. The comprehensiveness of the XACML policy language gives customers the most flexibility – as well as interoperability – across a multitude of applications and usage scenarios.

This also made Axiomatics a very generic solution for all things fine-grained authorization. This means the Axiomatics solution can be applied to any type of application, in particular .NET or J2SE/J2EE applications but also increasingly COTS such as SharePoint and databases such as Oracle VPD.

Axiomatics also leverages the key benefits of the XACML architecture to provide a very modular set of products. This means our core engine can be plugged into a various set of frameworks extremely easily: the authorization engine can be embedded or exposed as a web service (SOAP, REST, Thrift…). It also means our products scale extremely well and allow for a single point of management with literally hundreds of decision points and as many enforcement points. This makes our product the fastest, most elegant approach to enterprise authorization.

Axiomatics’ auditing capabilities are quite unique too: with the Policy Auditor, it is possible to know what could possibly happen, rather than a simple audit of what did actually happen. This means it is easier than ever to produce reports that will keep auditors satisfied the enterprise is correctly protected.

Lastly, Axiomatics has over 6 years experience in the area and is always listening to its customers. As a result, new products have been designed to better address customer needs. One such example is our Axiomatics Reverse Query which reverses the authorization process to be able to tackle a new series of authorization requirements our customers in the financial sector had. Instead of getting yes/no answers, these customers wanted a list of resources a user can access (e.g. a list of bank accounts) or a list of employees who can view a given piece of information. By actively listening to our customers we are able to deliver new innovative products that best match their needs.

What customers use your product(s)? What is your biggest deployment?

Axiomatics has several Fortune 50 customers. Some of the world’s largest banks and enterprises are Axiomatics customers. Axiomatics customers are based in the US and Europe mainly. One famous customer where Axiomatics is used intensively is PayPal. It is probably Axiomatics’ current biggest deployment in terms of transactions.

A US-based bank has also deployed Axiomatics products across three continents in order to protect trading applications.

What programming languages do you support? Will you support the upcoming REST and JSON profiles?

Axiomatics supports Java and C#. Axiomatics has been used in customer deployments with other languages such as Python.

Axiomatics is active in defining the new REST profile of the XACML TC and will try to align with it as much as possible. Axiomatics is also leading the design of a JSON-based PEP-PDP interaction. JSON as well as Thrift are likely to be the next communication protocols supported.

Do you support OpenAz? Spring Security? Other open source efforts?

Axiomatics does not currently support OpenAZ but has been watching the specification in order to eventually take part. Axiomatics already supports Spring Security. In addition, there is a new open source initiative aimed at defining a standard PEP API which Axiomatics and other vendors are taking part in.

How easy is it to write a PEP for your product(s)? And a PIP? How long does an implementation of your product(s) usually take?

Should customers decide to write a custom PEP rather than use an off-the-shelf PEP, they can use a Java or C# SDK to quickly write PEPs. Axiomatics has published a video explaining how to write a PEP in 5 minutes and 20 lines of code.

An implementation of our product can take from 1 week to 3 months or more depending on the customer requirements, the complexity of the desired architecture, and the number of integration points.

Can your product(s) be embedded (i.e. run in-process)?

The Axiomatics PDP can be embedded. Customers sometimes choose this approach to achieve even greater levels of performance.

What optimizations have you made? Can you share performance numbers?

There are many factors such as number of policies, complexity of policies, number of PIP look-ups and others that have an effect on performance. One of our customers shared the result of their internal product evaluation where they reached 30,000 requests per second.

The Axiomatics PDP is also used to secure transactions for several hundred million users and protect the medical records of all 9 million Swedish citizens.


Behavior-Driven Development (BDD) with JBehave, Gradle, and Jenkins

2012-07-02

Behavior-Driven Development (BDD) is a collaborative process where the Product Owner, developers, and testers cooperate to deliver software that brings value to the business.

BDD is the logical next step up from Test-Driven Development (TDD).

Behavior-Driven Development

In essence, BDD is a way to deliver requirements. But not just any requirements, executable ones! With BDD, you write scenarios in a format that can be run against the software to ascertain whether the software behaves as desired.

Scenarios

Scenarios are written in Given, When, Then format, also known as Gherkin:

Given the ATM has $250
And my balance is $200
When I withdraw $150
Then the ATM has $100
And my balance is $50

Given indicates the initial context, When indicates the occurrence of an interesting event, and Then asserts an expected outcome. And may be used in place of a repeated keyword, to make the scenario more readable.

Given/When/Then is a very powerful idiom that allows virtually any requirement to be described. Scenarios in this format are also easily parsed, so that we can automatically run them.

BDD scenarios are great for developers, since they provide quick and unequivocal feedback about whether the story is done. Not only the main success scenario, but also alternate and exception scenarios can be provided, as can abuse cases. The latter requires that the Product Owner not only collaborates with testers and developers, but also with security specialists. The payoff is that it becomes easier to manage security requirements.

Even though BDD is really about the collaborative process and not about tools, I’m going to focus on tools for the remainder of this post. Please keep in mind that tools can never save you, while communication and collaboration can. With that caveat out of the way, let’s get started on implementing BDD with some open source tools.

JBehave

JBehave is a BDD tool for Java. It parses the scenarios from story files, maps them to Java code, runs them via JUnit tests, and generates reports.

Eclipse

JBehave has a plug-in for Eclipse that makes writing stories easier with features such as syntax highlighting/checking, step completion, and navigation to the step implementation.

JUnit

Here’s how we run our stories using JUnit:

@RunWith(AnnotatedEmbedderRunner.class)
@UsingEmbedder(embedder = Embedder.class, generateViewAfterStories = true,
    ignoreFailureInStories = true, ignoreFailureInView = false, 
    verboseFailures = true)
@UsingSteps(instances = { NgisRestSteps.class })
public class StoriesTest extends JUnitStories {

  // Set with -Dbdd.stories=<pattern> to run a sub-set of the stories
  private final String storyPaths = System.getProperty("bdd.stories");

  @Override
  protected List<String> storyPaths() {
    return new StoryFinder().findPaths(
        CodeLocations.codeLocationFromClass(getClass()).getFile(),
        Arrays.asList(getStoryFilter(storyPaths)), null);
  }

  private String getStoryFilter(String storyPaths) {
    if (storyPaths == null) {
      return "*.story";
    }
    if (storyPaths.endsWith(".story")) {
      return storyPaths;
    }
    return storyPaths + ".story";
  }

  private List<String> specifiedStoryPaths(String storyPaths) {
    List<String> result = new ArrayList<String>();
    URI cwd = new File("src/test/resources").toURI();
    for (String storyPath : storyPaths.split(File.pathSeparator)) {
      File storyFile = new File(storyPath);
      if (!storyFile.exists()) {
        throw new IllegalArgumentException("Story file not found: " 
          + storyPath);
      }
      result.add(cwd.relativize(storyFile.toURI()).toString());
    }
    return result;
  }

  @Override
  public Configuration configuration() {
    return super.configuration()
        .useStoryReporterBuilder(new StoryReporterBuilder()
            .withFormats(Format.XML, Format.STATS, Format.CONSOLE)
            .withRelativeDirectory("../build/jbehave")
        )
        .usePendingStepStrategy(new FailingUponPendingStep())
        .useFailureStrategy(new SilentlyAbsorbingFailure());
  }

}

This uses JUnit 4's @RunWith annotation to indicate the class that will run the test. The AnnotatedEmbedderRunner is a JUnit Runner that JBehave provides. It looks for the @UsingEmbedder annotation to determine how to run the stories:

  • generateViewAfterStories instructs JBehave to create a test report after running the stories
  • ignoreFailureInStories prevents JBehave from throwing an exception when a story fails. This is essential for the integration with Jenkins, as we’ll see below

The @UsingSteps annotation links the steps in the scenarios to Java code. More on that below. You can list more than one class.

Our test class re-uses the JUnitStories class from JBehave that makes it easy to run multiple stories. We only have to implement two methods: storyPaths() and configuration().

The storyPaths() method tells JBehave where to find the stories to run. Our version is a little bit complicated because we want to be able to run tests from both our IDE and from the command line and because we want to be able to run either all stories or a specific sub-set.

We use the system property bdd.stories to indicate which stories to run. This includes support for wildcards. Our naming convention requires that the story file names start with the persona, so we can easily run all stories for a single persona using something like -Dbdd.stories=wanda_*.

The configuration() method tells JBehave how to run stories and report on them. We need output in XML for further processing in Jenkins, as we’ll see below.

One thing of interest is the location of the reports. JBehave supports Maven, which is fine, but it assumes that everybody follows Maven conventions, which is not the case for us. The output goes into a directory called target by default, but we can override that by specifying a path relative to the target directory. We use Gradle instead of Maven, and Gradle’s temporary files go into the build directory, not target. More on Gradle below.

Steps

Now we can run our stories, but they will fail. We need to tell JBehave how to map the Given/When/Then steps in the scenarios to Java code. The Steps classes determine what the vocabulary is that can be used in the scenarios. As such, they define a Domain Specific Language (DSL) for acceptance testing our application.

Our application has a RESTful interface, so we wrote a generic REST DSL. However, due to the HATEOAS constraint in REST, a client needs a lot of calls to discover the URIs that it should use. Writing scenarios gets pretty boring and repetitive that way, so we added an application-specific DSL on top of the REST DSL. This allows us to write scenarios in terms the Product Owner understands.
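
For illustration, here’s a minimal sketch of an application-specific Steps class that delegates to the REST DSL. RestDsl and the step texts are made up; only the JBehave annotations are real:

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class OrderSteps {

  private final RestDsl rest = new RestDsl(); // hypothetical generic REST DSL

  @Given("an empty shopping cart")
  public void givenAnEmptyShoppingCart() {
    rest.post("carts");
  }

  @When("I add $quantity books")
  public void whenIAddBooks(int quantity) {
    rest.post("carts/current/items", "quantity=" + quantity);
  }

  @Then("the cart contains $quantity items")
  public void thenTheCartContainsItems(int quantity) {
    rest.assertResponseContains("\"quantity\": " + quantity);
  }

}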

Layering the application-specific steps on top of generic REST steps has some advantages:

  • It’s easy to implement new application-specific DSL, since they only need to call the REST-specific DSL
  • The REST-specific DSL can be shared with other projects

Gradle

With the Steps in place, we can run our stories from our favorite IDE. That works great for developers, but can’t be used for Continuous Integration (CI).

Our CI server runs a headless build, so we need to be able to run the BDD scenarios from the command line. We automate our build with Gradle and Gradle can already run JUnit tests. However, our build is a multi-project build. We don’t want to run our BDD scenarios until all projects are built, a distribution is created, and the application is started.

So first off, we disable running tests on the project that contains the BDD stories:

test {
  onlyIf { false } // We need a running server
}

Next, we create another task that can be run after we start our application:

task acceptStories(type: Test) {
  ignoreFailures = true
  doFirst {
    // Need 'target' directory on *nix systems to get any output
    file('target').mkdirs()

    def filter = System.getProperty('bdd.stories') 
    if (filter == null) {
      filter = '*'
    }
    def stories = sourceSets.test.resources.matching { 
      it.include filter
    }.asPath
    systemProperty('bdd.stories', stories)
  }
}

Here we see the power of Gradle. We define a new task of type Test, so that it already can run JUnit tests. Next, we configure that task using a little Groovy script.

First, we must make sure the target directory exists. We don’t need or even want it, but without it, JBehave doesn’t work properly on *nix systems. I guess that’s a little Maven-ism :(

Next, we add support for running a sub-set of the stories, again using the bdd.stories system property. Our story files are located in src/test/resources, so that we can easily get access to them using the standard Gradle test source set. We then set the system property bdd.stories for the JVM that runs the tests.

Jenkins

So now we can run our BDD scenarios from both our IDE and the command line. The next step is to integrate them into our CI build.

We could just archive the JBehave reports as artifacts, but, to be honest, the reports that JBehave generates aren’t all that great. Fortunately, the JBehave team also maintains a plug-in for the Jenkins CI server. This plug-in requires prior installation of the xUnit plug-in.

After installation of the xUnit and JBehave plug-ins into Jenkins, we can configure our Jenkins job to use the JBehave plug-in. First, add an xUnit post-build action. Then, select the JBehave test report.

With this configuration, the output from running JBehave on our BDD stories looks just like that for regular unit tests.

Note that the yellow part in the graph indicates pending steps. Those are used in the BDD scenarios, but have no counterpart in the Java Steps classes. Pending steps are shown in the Skip column in the test results.

Notice how the JBehave Jenkins plug-in translates stories to tests and scenarios to test methods. This makes it easy to spot which scenarios require more work.

Although the JBehave plug-in works quite well, there are two things that could be improved:

  • The output from the tests is not shown. This makes it hard to figure out why a scenario failed. We therefore also archive the JUnit test report
  • If you configure ignoreFailureInStories to be false, JBehave throws an exception on a failure, which truncates the XML output. The JBehave Jenkins plug-in can then no longer parse the XML (since it’s not well formed), and fails entirely, leaving you without test results

All in all, these are minor inconveniences, and we’re very happy with our automated BDD scenarios.


LinkedIn Incident Shows Need for SecaaS

2012-06-11

Security is a negative feature.

What I mean by that is that you will never get kudos for implementing a secure system, but you certainly will get a lot of flak for an insecure system, as the recent LinkedIn incident shows. Therefore, security is a distraction for most developers; they’d rather focus on their core business of implementing features.

Where have we heard that before?

Cloud Computing to the Rescue: SecaaS

Cloud computing promises to free businesses from having to buy, install, and maintain their own software and hardware, so they can focus on their core business.

We can apply this idea to security as well. The Cloud Security Alliance (CSA) calls this Security as a Service, or SecaaS.

For instance, instead of learning how to store passwords securely, we could just use an authentication service and not store passwords ourselves at all.

The alternative of using libraries, while infinitely better than rolling our own, is less attractive than the utility model. We’d still have to update the libraries ourselves. Also, with libraries, we depend on a language level API, which segments the market, making it less efficient.

There are some hurdles to tackle before SecaaS will go mainstream. Let’s take a look at one issue close to my heart.

Security as a Service Requires Standards

The utility model of cloud computing requires standards. This is also true for SecaaS.

Each type of security service should have one or at most a few standards, to level the playing field for vendors and to allow for easy switching between offerings of different vendors.

For authorization, I think we already have such a standard: XACML.

We also need a more general standard that applies to all security services, so that we have a simple programming model for integrating different security services into our applications. I believe that standard should be REST.

Update: The market seems to agree with this. Just see the figures in this presentation.

This is one of the reasons why we’re working on a REST profile for XACML. Stay tuned for more information on that.

So what do you think about SecaaS? Let me know in the comments.

