Threat Modeling: Toward Comprehensive Computer Security

We tend to associate security with specific technologies such as encryption, VPNs, and authentication protocols, but no single technology guarantees security; attacks come from many directions and target the least suspected vulnerabilities. We need to make our best effort to ensure comprehensive protection, and to achieve that end, threat modeling is an essential first step. While threat modeling might sound rather academic, it is entirely practical and something you can and certainly should apply to your organization and all of your applications. And as we'll see in this post, there is almost certainly a threat modeling methodology available that will work for you.

To gain a broad view of threat modeling apart from specific technologies, it’s worth taking a step back and realizing threat modeling has its origins in the military dating back long before the computer age. When Sun Tzu wrote in the fifth century BC that “if you know the enemy and know yourself, you need not fear the result of a hundred battles,” he was extolling the value of threat modeling. In explaining that “first comes scoping, then measurement, then calculation, then balancing and finally victory,” Sun Tzu illustrated the importance of process and of comprehensiveness, the keys to successful threat modeling.

In the field of computer security, threat modeling achieves comprehensiveness through abstractions, beginning with broad categories of threats and system architectures rather than implementation details or concrete attacks. Such abstraction encourages us to think through a broad array of threats and prevents us from getting caught up in a small number of specific threats. It prevents us from building multiple layers of mitigation against one threat while ignoring others. Threat modeling promotes comprehensive security coverage over random, haphazard, whack-a-mole defenses.

In addition to ensuring comprehensiveness, threat modeling includes the prioritization of threats and mitigations based on probabilities, business impacts, and costs of countermeasures. In other words, threat modeling requires both technical and financial calculations.

Since the late 1990s, several methodologies for threat modeling have evolved. Let’s review a few of the more popular techniques so that you can pick out which best suits your needs. Then you can follow the links to learn more and get yourself started improving security.

Attack Trees

In a 1999 Dr. Dobb’s article, Bruce Schneier popularized the idea of attack trees, which have developed into a mainstay of threat modeling.

In an attack tree, the root node represents an attacker's objective, such as opening a safe. Second-level nodes represent how an attacker might achieve the objective, such as stealing the safe's combination or blasting the door open. Leaf nodes indicate the steps required to carry out each approach, such as purchasing dynamite.

In the tree, nodes may be joined by AND or OR logic, meaning that the attacker is either required to take each step or may choose between steps. In addition, the modeler may add attributes to nodes indicating difficulty level, attack cost, risk to the attacker, special equipment, and likelihood that any particular category of attacker might take this path. Based on the logic and attributes, the model determines the most likely attacks.
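
To make the structure concrete, here is a minimal sketch of how an attack tree might be represented and evaluated in code; the node names, costs, and the cheapest-attack calculation are illustrative assumptions, not part of any formal methodology:

var openSafe = {
  goal: "Open the safe",        // root node: the attacker's objective
  type: "OR",                   // the attacker needs only one child to succeed
  children: [
    { goal: "Learn the combination", type: "OR", children: [
      { goal: "Bribe an employee", cost: 20000 },
      { goal: "Find a written copy of the combination", cost: 5000 }
    ]},
    { goal: "Blast the door open", type: "AND", children: [
      { goal: "Purchase dynamite", cost: 1000 },
      { goal: "Defeat the alarm", cost: 30000 }
    ]}
  ]
};

// Cheapest attack: OR nodes take the minimum child cost, AND nodes the sum.
function cheapestAttack(node) {
  if (!node.children) return node.cost;
  var costs = node.children.map(cheapestAttack);
  return node.type === "AND"
    ? costs.reduce(function (a, b) { return a + b; }, 0)
    : Math.min.apply(null, costs);
}

console.log(cheapestAttack(openSafe)); // 5000: find a written copy of the combination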

Because any given application might present an attacker with several attack goals, such as stealing PII or transferring balances between accounts, a complete threat model will require multiple trees. That is, a threat model is a forest of attack trees.

Many of the trees that a modeler creates will apply to multiple applications. An organization, therefore, should develop a library of attack trees to reuse with each new analysis.

Building attack trees promotes comprehensiveness. Given its simplicity and power, the technique has been incorporated into most popular threat modeling methodologies.

OCTAVE

The Carnegie Mellon Software Engineering Institute first published the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) framework in 1999. Unlike other methodologies that focus on specific applications, OCTAVE covers threat modeling of the information assets of an entire organization. OCTAVE is a comprehensive process description complete with phases, processes, inputs, and outputs.

Phase 1: Build Enterprise-Wide Security Requirements
Work with staff from multiple levels of the organization to identify information assets and their values and to document security requirements.
Process 1: Identify Enterprise Knowledge (gather viewpoints of senior managers)
Process 2: Identify Operational Area Knowledge (gather viewpoints of operational managers)
Process 3: Identify Staff Knowledge (gather viewpoints of staff)
Process 4: Establish Security Requirements (integrate perspectives)

Phase 2: Identify Infrastructure Vulnerabilities
Associate assets to infrastructure and infrastructure to vulnerabilities.
Process 5: Map High-Priority Information Assets to Information Infrastructure (use staff knowledge to link assets to infrastructure, asset locations, and data flows)
Process 6: Perform Infrastructure Vulnerability Evaluation (associate infrastructure components with standard catalogs of intrusion scenarios)

Phase 3: Determine Security Risk Management Strategy
Create the final output document with risk analysis and mitigation plans.
Process 7: Conduct Multi-Dimensional Risk Analysis (estimate probabilities and impacts based on asset and vulnerability assessments)
Process 8: Develop Protection Strategy (select mitigation strategies based on costs and resources)

The OCTAVE process results in a comprehensive security risk management plan that covers a security strategy and continual risk management. It is appropriate for large enterprises seeking a threat model that applies across all applications and information assets.

Microsoft Security Development Lifecycle

Microsoft has defined a Security Development Lifecycle (SDL) that includes threat modeling as well as other processes such as testing and incident planning. Microsoft's threat modeling guidelines propose the following steps:

  • Identify assets. What should the system protect?
  • Create an architecture overview. Focus on trust boundaries, that is, points where data flows from components owned by one entity to components owned by another.
  • Decompose the application into subcomponents to as low a level as practical.
  • Identify the threats. Use either STRIDE or a threat tree to aid in enumeration.
    • STRIDE is an acronym for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. These broad vulnerability descriptions are not meant to be mutually exclusive, comprehensive categories, but rather heuristics for enumeration. Examine each component, focusing on trust boundaries, and consider whether it exposes any of these vulnerabilities.
    • Attack trees, as described by Schneier, can substitute for STRIDE.
  • Document the threats.
  • Rate the threats. Microsoft recommends the DREAD model, an acronym for damage potential, reproducibility, exploitability, affected users, and discoverability. Threats that rank high across these categories receive a higher priority rating.
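
As a rough illustration of DREAD prioritization (the 1-to-10 scale and simple averaging are common conventions rather than a formal part of the methodology, and the threats and scores below are made up), threats can be ranked by averaging their five scores:

// Hypothetical threats scored 1 (low) to 10 (high) in each DREAD category.
var threats = [
  { name: "SQL injection in login form",
    damage: 9, reproducibility: 9, exploitability: 7, affectedUsers: 9, discoverability: 8 },
  { name: "Verbose error pages leak stack traces",
    damage: 3, reproducibility: 10, exploitability: 5, affectedUsers: 4, discoverability: 9 }
];

threats.forEach(function (t) {
  t.rating = (t.damage + t.reproducibility + t.exploitability +
              t.affectedUsers + t.discoverability) / 5;   // simple DREAD average
});

threats.sort(function (a, b) { return b.rating - a.rating; });   // highest priority first
console.log(threats);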

The comprehensive, prioritized list of threats serves as input to a mitigation design process, which culminates in a set of bug reports to implement the mitigations. The Microsoft SDL suits software vendors as well as organizations creating their own custom applications, and startups as well as large enterprises.

Trike

After gaining experience with the Microsoft STRIDE method, the creators of Trike found it dull and repetitive. And as you might expect of software developers, they decided to formalize and automate: “The formalisms in the Trike methodology are designed to support automation to the greatest degree possible. These same formalisms also allow us to give strong guarantees which other, more ad-hoc methodologies cannot; specifically, that when we enumerate all threats against an application, we have in fact enumerated all possible threats.” To achieve this end, the methodology involves attack trees as well as state diagrams of all actions possible within a system.

While interesting in concept, Trike's objectives were perhaps too ambitious. All signs of work on the project ended in 2012. This failure to automate suggests that threat modeling requires human judgment as well as a certain tolerance for tedium.

Process for Attack Simulation and Threat Analysis (PASTA)

While the Microsoft SDL fits the needs of a software organization releasing products that require security, PASTA applies squarely to the information needs of large enterprises. Created by Tony UcedaVélez, founder of an information security consultancy, and Marco Morana, an information security strategist at Citi, PASTA focuses on risk management as a way to link information security to the financial concerns of executives.

The PASTA methodology builds upon the Microsoft SDL, likewise beginning with a list of stages:

  • Define business objectives: What are the business objectives of the application and the business risk of breaches? Who are the proposed users and what are the use cases? What are the relevant governance and compliance standards?
  • Technical scope: What technology stacks do you have? What are the components? What comprises the infrastructure and third-party services?
  • Application decomposition: How do the components work together? What are the sequence diagrams and DFDs (Data Flow Diagrams)?
  • Threat analysis: Examine the threat landscape of your industry. Study available threat intelligence. Understand which threats are relevant to this application and their probabilities.
  • Vulnerability assessment: What is weak in the application? Study standards for vulnerability enumeration.
  • Attack enumeration: List the attacks that might take advantage of application vulnerabilities. Map exploits to DFDs.
  • Countermeasure development/residual risk analysis: Manage risks by mitigating the most probable and impactful threats and understand remaining risks. Develop cost-benefit analyses of mitigations.

Each stage consists of a set of activities with defined inputs and outputs and a RACI chart mapping roles within the methodology to roles within the enterprise, all culminating in a business-centric risk management proposal designed to win executive support. The methodology suits software development within large enterprises, as it orchestrates the roles of the many stakeholders involved.

Conclusion

While the final result might be a list of bug tickets or a comprehensive risk management plan, threat modeling methodologies all share common themes: a systematic approach involving multiple stakeholders and techniques that aims at a comprehensive listing of threats, prioritizations, and mitigations.

As your organization or application might face unique threats or have special requirements, I would recommend pulling from each methodology whatever makes sense. Borrow, mix, and adapt. Most importantly, get started. There can be no comprehensive security without threat modeling, and you will improve with practice.

Posted in Security, Threat Modeling

A Twisted Tale of Web Security: Clickjacking and X-Frame-Options

I don’t use frames. So I’m safe, right?

With our focus on cranking out functionality, it's easy to overlook the security implications of frames, especially if we're building an application that doesn't use frames. However, clickjacking is a serious security threat against any application regardless of its use of frames, and it really should be top of mind in any security review.

In a clickjacking attack, also known as UI redress, an attacker creates a malicious site to which he lures victims, perhaps with promises of free prizes. The attacker uses clever CSS with overlays to frame your page within the malicious site such that the victim does not realize that your site has been included within a frame. By clicking on a button for a prize, the victim unintentionally authorizes a transaction on your site, enabling fund transfers or fraudulent purchases. If the user is logged into your site at this time, the browser will send all of the cookies associated with your site along with the fraudulent request, including any session identifier, fooling your application into believing that this is a valid user request.

Oh, so you’re talking about CSRF?

If this sounds like Cross Site Request Forgery (CSRF), that is because the attacks do share certain traits. In both CSRF and clickjacking, the attacker takes advantage of a victim visiting the attacker's malicious site while simultaneously logged into your site. The attacker exploits the browser behavior of sending cookies with every request to a particular domain, regardless of whether the requesting page belongs to that domain. However, the typical CSRF defense of adding a random token to a hidden field of an HTML form will not work against clickjacking, because in clickjacking it is your own form, including the anti-CSRF token, that has been framed.

How did we get into this mess?

Frames emerged like many aspects of the web–without standardization and without a thorough consideration of security implications. Netscape Navigator introduced frames in version 2.0 in March 1996. In a race to offer the most features, other browser vendors soon followed. At the time, Netscape proposed frames to the W3C for inclusion in the HTML 3.0 standard, but frames did not make it into HTML formally until 4.0 in December 1997.

There are two ways of using frames in HTML. Using a frameset tag, you can compose a web page entirely of frames. Or using an iframe tag, you can insert a frame anywhere within a page. Each frame can display a distinct page that can originate from any domain. A lot of ugly sites were created using framesets in the late 90s. Eventually, there were complaints that framesets hurt usability and accessibility, and in October 2014 the W3C obsoleted the frameset tag with the publication of the HTML5 standard. Of course, all major browsers still support the frameset tag for backward compatibility. And iframes, still in the HTML5 standard, are quite common on the web.

Early browser implementations of frames were not so dangerous. With big ugly borders, it was easy to distinguish one frame from another. But with the evolution of rich styling, the problem grew worse. Jesse Ruderman pointed this out in a bug report to Mozilla in 2002, but the report was ignored. It was not until 2008, when Robert Hansen and Jeremiah Grossman revealed a particularly disturbing exploit, that browser vendors took notice.

Good grief. So how did this get fixed?

Even before browser vendors made any changes, web developers attempted to fix the problem with a technique referred to as frame busting: JavaScript that forces a page to become the top-level window if it finds itself loaded within a frame. Despite the cleverness of the code, attackers found ways to neutralize the defense by selectively disabling the frame-busting code, which they could pull off because they controlled the enclosing page. (Ironically, attackers achieved this in part by taking advantage of browser anti-XSS features.)
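
For reference, a typical frame-busting snippet looked something like the sketch below; as noted, attackers found ways to keep code like this from running, so treat it only as a fallback:

// If this page is not the top-level window, force it to become the top-level window.
if (window.top !== window.self) {
  window.top.location = window.self.location;
}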

Among browser vendors, Microsoft was first to the rescue. In 2009, it introduced the X-Frame-Options HTTP header with Internet Explorer 8. Although there were already discussions under way regarding Content Security Policy, a more general solution that I’ve discussed in an earlier post, the standard had yet to be agreed upon, and the Microsoft IE team felt the problem needed an urgent fix. Other browser vendors followed, but standardization came only much later. It was not until 2013 that the IETF issued an informational standard documenting usage of the X-Frame-Options header.

OK, so how does this X-Frame-Options header work?

The X-Frame-Options (XFO) header takes one of three values: DENY, SAMEORIGIN, or ALLOW-FROM. DENY forbids display of the page within a frame in all cases, clearly the most secure option. SAMEORIGIN allows the page to be framed, but only by pages from the same origin, which is less secure because of differences in browser implementations. The ALLOW-FROM value is used together with a specific domain, meaning that pages from that domain may frame the page. ALLOW-FROM supports only a single domain, and anything following the host name, that is, after the slash, will be ignored. Obviously, this is the least secure XFO value, especially if the external domain is not completely trustworthy.
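
Setting the header takes one line in most frameworks. Here is a Node.js/Express sketch (the middleware approach and the DENY value are just one reasonable choice, not the only way to do it):

var express = require('express');
var app = express();

// Send X-Frame-Options on every response so that no other site can frame our pages.
app.use(function (req, res, next) {
  res.setHeader('X-Frame-Options', 'DENY');
  next();
});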

Unfortunately, there has been confusion and browser inconsistency around multiple levels of embedding, that is, when one page embeds another that embeds another and so on. Should the ALLOW-FROM domain match the immediate embedding page, the initial page loaded by the browser, or every embedding page along the chain?

So is this X-Frame-Options the end of the story?

Not quite. The CSP standard includes directives both for preventing a page from loading untrustworthy content and for preventing a page from being loaded within an untrusted page.

The first version of CSP included a directive named frame-src. Version two deprecated this directive in favor of child-src. Both limit the content loaded into frames to a list of specified domains. Use both directives until all browsers support the newer standard.

More relevant to this discussion is the frame-ancestors directive. It defines which domains may embed the protected page within content-embedding HTML tags: frame, iframe, object, embed, and applet. Setting the directive to 'none' is similar to setting X-Frame-Options: DENY. However, the CSP specification is clear on multiple levels of embedding. As its name implies, the frame-ancestors directive requires that every embedding page in the chain match one of the domains specified in the directive.
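
For example, the following policy (the partner domain is hypothetical) allows framing only by the page's own origin and one trusted partner:

Content-Security-Policy: frame-ancestors 'self' https://partner.example.com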

What else might I do to prevent clickjacking?

Beyond HTTP security headers, there are common-sense defenses relevant to both clickjacking and CSRF. Given that each attack works only when a victim visits an attacker's site while logged into your site, try to limit the chances of this unfortunate coincidence. Time out sessions after a reasonable interval. Make it easy for users to log off. And require users to re-authenticate prior to transactions.

Which option should I choose?

In an ideal world, the CSP frame-ancestors directive would be sufficient. It is simple and well defined. But not all browsers support CSP. So the best option is all of the above. Add the X-Frame-Options header for browsers that do not support CSP, use frame-busting JavaScript for browsers that do not support X-Frame-Options, and limit session duration just in case.

Resources

Guidance on configuring web servers https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options
IE8 Story https://blogs.msdn.microsoft.com/ie/2009/01/27/ie8-security-part-vii-clickjacking-defenses/
IETF Informational RFC on X-Frame-Options https://tools.ietf.org/html/rfc7034
Other benefits of X-Frame-Options https://frederik-braun.com/xfo-clickjacking.pdf
OWASP Clickjacking Defense Cheat Sheet https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
Posted in Content Security Policy, HTTP Headers, X-Frame-Options

HTTP Public-Key-Pins (HPKP): Cut Out the Eavesdroppers

The foundation of trust upon which all web commerce depends begins with the browser. The browser trusts certificate authorities (CAs) and certificate authorities vouch for the identities of businesses. This system of public key infrastructure (PKI), however, is only as strong as the weakest certificate authority. If a trusted CA gets compromised, an attacker can generate rogue certificates, impersonate a site, and pull off a man-in-the-middle (MitM) attack. To reduce this risk, the IETF has proposed a new standard for HTTP public key pinning, HPKP, which is already implemented in Chrome, Firefox, and Opera.

According to the standard, an HPKP header instructs browsers to link a domain name to a restricted set of public keys. Based on this pinning, the browser should block any future content if the server’s certificate chain does not contain a matching public key. This protects all returning users but not first time visitors, since first time visitors would not yet have received the HPKP header. While not perfect, this trust-on-first-use (TOFU) approach is as good as you can get without actually exchanging keys in advance.

Here’s an example of an HPKP header (with line breaks inserted to aid readability):

Public-Key-Pins:
pin-sha256="lb8Ox4AhFuA6Bg1GqqwQSgrTx6qdYXQjhUf4b+MAJPo=";
pin-sha256="7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=";
max-age=5184000;
includeSubDomains;
report-uri="https://www.mydomain.com/myreporturl"

Let’s examine this header one directive at a time.

pin-sha256

The mandatory directive pin-sha256 contains a Subject Public Key Information (SPKI) fingerprint. The standard defines this fingerprint as a base64-encoded SHA-256 hash of the SPKI section of a DER-encoded certificate. While the definition is quite a mouthful, the values are easily generated using the open-source command line tool openssl, which can be installed on Windows, Mac, and Linux.

openssl x509 -in mycert.crt -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64

This pipeline extracts the public key from mycert.crt, converts it into the binary DER format, takes the SHA-256 digest of that key, and encodes the digest in Base64. The output can be copied directly into the HPKP header.

Note that the SPKI section includes a certificate’s public key but no data unique to the certificate. This enables site owners to generate new certificates using the same public key such that the new certificate matches the pinning. This is critical since otherwise users protected by the pinning would lose access to your site if the original certificate were revoked.

The standard supports pinning the key of your own site's certificate or of any certificate in the chain between your certificate and the CA's root. Pinning your own site's key provides the greatest security, as it protects you even if a CA higher up the chain is compromised.

max-age

The mandatory max-age directive indicates in seconds how long the browser will enforce the pinning. The value represents a time-to-live that gets updated each time the browser processes a page with the HPKP header. A longer value reduces the risk of MitM attacks that could be exploited whenever the pinning expires; a shorter value reduces the risk of locking out users if something goes wrong with the certificate. The standard recommends a value of 60 days (5,184,000 seconds) as a reasonable balance.

Setting the value to zero causes the browser to remove the pinning information. The standard does not define a maximum value but leaves it up to browser vendors to do so.

includeSubDomains

The optional includeSubDomains directive applies the certificate pinning to all subdomains of the domain that issues the header. Use this directive with care. Pinning subdomains could block access to subdomains that have distinct certificates whose keys are not included in the pinning.

report-uri

The optional report-uri directive instructs the browser to post reports of pinning violations in JSON format to the specified URL. The standard also supports an alternative, report-only version of the HPKP header, Public-Key-Pins-Report-Only, which tells the browser not to block access but simply to report the violation.

Backup Pins

To reduce the risk of access denial, the HPKP standard requires at least two pin-sha256 values in a header, at least one of which must be a backup key not present in the current certificate chain. The concern is that if your private key were compromised or your certificate revoked, no user who had visited your site would be able to return until the pinning expired. Fortunately, you do not need to buy backup certificates to create backup pin-sha256 values, because openssl can create SPKI fingerprints from certificate signing requests:

openssl req -in myreq.csr -pubkey -noout | openssl rsa -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64
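
If you do not already have a spare key pair and CSR, something like the following openssl commands will generate them (the file names and subject are placeholders):

openssl genrsa -out backup.key 2048
openssl req -new -key backup.key -out backup.csr -subj "/CN=www.mydomain.com"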

Regarding backup keys, there are a few points to keep in mind:

  • Do not rely on a certificate in the same chain as a backup.
  • Do not rely on a CA's root certificate or intermediate certificate as a backup unless you can be sure of getting your backup CSR signed by the private key behind that same certificate.
  • It’s best to rely on what you can control and hence use your own CSRs as a backup.
  • Use a separate public-private key pair to create the backup.
  • Keep the keys offline and secure. Don’t lose the backup.

Intercepting Proxies and Limitations of HPKP

In addition to shipping with a set of trusted root certificates, browsers permit users to install additional root certificates. Several types of software make use of this capability to intercept TLS traffic. These include anti-virus filters, parental controls, ad-blockers, and HTTP proxy tools such as Fiddler and Charles. These tools install root certificates that they use to generate certificates on the fly for each HTTPS request, intercept that request, and proxy it to the origin.

This technique can serve useful purposes, but it does create risk when implemented poorly. In the Superfish fiasco, Lenovo made the mistake of installing the same root certificate on many user machines together with the private key, making it possible for an attacker to use the certificate to launch a MitM attack against anybody with the certificate installed. Quite recently, Dell made this same mistake. In addition to such obvious certificate errors, makers of these products have made other errors in implementing TLS. PrivDog, an ad blocker, erroneously accepted invalid certificates. Others have implemented TLS in ways that enable attackers to downgrade TLS encryption levels. (See Hanno Böck for details.)

It certainly would be nice for HPKP to protect us from such folly, but as Chris Palmer argues, it is just not feasible. Consumers have chosen to install these products, an indication that consumers want the benefits they offer. Moreover, no user-level program such as a browser can protect us if our computer has been compromised by the installation of malicious or negligent code. Therefore, the HPKP standard calls for browsers to bypass HPKP checks when presented with a certificate signed by a locally installed root certificate. So while HPKP protects against rogue certificates out on the Internet, it does not protect against those installed on our own computers.

Conclusion

Despite its unavoidable limitations, HPKP offers valuable protection for users of supported browsers and is relatively easy to implement as long as backup keys are properly managed. Why not take this simple step to close off an attack vector?

Resources

IETF HPKP Standard https://tools.ietf.org/html/rfc7469
Browser Support http://caniuse.com/#feat=publickeypinning
MDN Documentation https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning
OWASP Summary https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
Reporting Tool for HPKP and CSP https://report-uri.io/
Tool for Analysing a Site’s HPKP Policy https://report-uri.io/home/pkp_analyse
Generate pin-sha256 Value for a Web Site https://report-uri.io/home/pkp_hash
Posted in Uncategorized

Start Secure: HTTP Strict Transport Security

As an experiment, please take a moment to type your bank’s URL into a browser.

If you are anything like the vast majority of users, you wrote the URL without specifying the protocol scheme, that is you typed mybank.com rather than https://mybank.com. Your browser, without seeing the protocol scheme, sent your request over HTTP. Upon receiving that request, your bank’s web server, knowing that it is insecure to communicate in plain text over an untrusted network, responded with a redirect telling the browser to resend the request over HTTPS. Safe enough?

No, not at all. This switch-over from HTTP to HTTPS, so common in practice, forces users to swim unprotected through the shark-infested waters of the internet before reaching the safety of encryption. If an attacker intercepts that initial request, the communication might continue over an insecure channel without you or your user ever realizing it. Traffic sent over HTTP is vulnerable to man-in-the-middle (MitM) attacks, putting your users' credentials at stake. If this sounds far-fetched, go watch Moxie Marlinspike explain sslstrip and return to this post feeling very, very afraid.

How can we break the dependence of HTTPS on HTTP? One way is to change the browser’s behavior so that it communicates securely from the very beginning. This is precisely the purpose of HTTP Strict Transport Security (HSTS). This HTTP header tells the browser that it should always access your site over HTTPS. Even if a user types http://yourdomain.com, the browser will internally change this over to https://yourdomain.com for all content in your domain.

Here’s an example of an HSTS header:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Note that max-age is mandatory while includeSubDomains and preload are optional.

The max-age directive is a time-to-live in seconds that is refreshed each time the browser receives a response with the header. Setting max-age to zero forces the browser to remove your site from its list of protected sites. The HSTS standard uses one year (31,536,000 seconds) as an example, which is a reasonable and widely used value.
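
As with other security headers, adding HSTS typically takes a couple of lines of server code. Here is a hedged Node.js/Express sketch; it assumes TLS is terminated at this server, and you would add includeSubDomains and preload only once the caveats below have been addressed:

var express = require('express');
var app = express();

app.use(function (req, res, next) {
  if (!req.secure) {
    // First contact may still arrive over HTTP; redirect it to HTTPS.
    return res.redirect(301, 'https://' + req.headers.host + req.url);
  }
  // One year of protection, refreshed on every HTTPS response.
  res.setHeader('Strict-Transport-Security', 'max-age=31536000');
  next();
});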

Adding the includeSubDomains directive indicates that the browser should extend its protection to all subdomains of the current domain. While apparently straightforward, this setting requires some thought as there are a few gotchas.

Protecting subdomains could cause denial of access for any subdomains that do not support HTTPS or that depend upon resources not served over HTTPS. So you'll need to identify and test all subdomains before applying the policy.

Not protecting subdomains brings its own risks, most notably that of cookie hijacking. Depending on cookie settings, links from your protected domain to an unprotected subdomain could result in cookies being sent over HTTP, allowing an attacker to read the cookies. Given that cookies often contain session tokens, you’ve just enabled an attacker to bypass your login and access a user’s account.

Another risk of cookie hijacking occurs when an HSTS header protects a subdomain but not its parent. A trick to close this exposure is to include on the subdomain's pages a reference to the root of your domain via an image tag (hidden via CSS) and to serve the HSTS header, preferably with the includeSubDomains directive, in the response to that request. But again, make sure to test all subdomains.
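
On a page served from the subdomain, the trick might look something like the line below (the path is illustrative); the response to that image request must carry the HSTS header:

<img src="https://yourdomain.com/favicon.ico" style="display:none" alt="">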

HSTS also protects users from themselves. Say a MitM attacker presents a fake certificate, one not signed by a trusted CA. Recognizing the certificate as invalid, the browser warns the user in big red letters and forces him to jump through a few hoops in order to proceed to the site. Despite the warning, the user thinks, hmm, that’s weird, but, hey, I’ll click through anyway because I’m short of time and want to get this done. Again, HSTS to the rescue. If the site is protected by HSTS, the browser will deny our reckless user any chance of clicking through.

We’re still left with a problem. What happens when somebody visits our site for the first time? Since the browser has yet to see our HSTS header, we’re back to starting the communications over HTTP. If an attacker intervenes at this moment, they’ve got us beat. So we need a way to tell the browser to access our site securely on the user’s first request.

HSTS preload has us covered here if we follow three steps. First, register your site with Google at https://hstspreload.appspot.com. This tells Google to hardcode HSTS protection for your site into the next update of Chrome. Internet Explorer and Firefox will follow suit by copying Chrome’s list. Second, add the preload directive to your HSTS token as in the example above. This signals your agreement to the HSTS preload registration. Third, assure that your site meets the requirements for HSTS preloading:

  • Assure that your certificate is valid
  • Direct all traffic to HTTPS
  • Serve all subdomains over HTTPS
  • Include the HSTS header on your base domain
  • Set the max-age to at least eighteen weeks (10886400 seconds)
  • Add the includeSubDomains directive to the header

Expect the mechanism for preloading to evolve. Listing all of the world’s domain names within each browser sounds less than feasible. It has only worked so far because not enough web developers have taken advantage of preloading.

While the header itself is straightforward, implementing HSTS does require going through a process to assure that your entire site works in HTTPS-only mode. Indeed, applying HSTS could be thought of as the final step in migrating away from mixed content, but it is key to assuring the effectiveness of HTTPS.

Resources:

IETF Standard: https://tools.ietf.org/html/rfc6797

Browser Support: http://caniuse.com/#feat=stricttransportsecurity

Google Preload: https://hstspreload.appspot.com

Migrating from Mixed Content: https://https.cio.gov/mixed-content/

Posted in HTTP Headers, Web Security

Content Security Policy: What are We Waiting For?

The Web Application Security Working Group of the W3C has created a web security standard called Content Security Policy (CSP). Currently on version two, with version three in the works, CSP has been implemented by the majority of browsers and offers a strong defensive layer against XSS. Unfortunately, web developers seem reluctant to use it: based on recent research, only about two percent of the million most popular websites include CSP. This is a shame, as CSP not only offers easy implementation and powerful protection but also promotes best practices in web development.

Cross site scripting (XSS) is a particularly thorny problem. Any site that displays data input by users opens itself to this vulnerability, making it possible for an attacker to run malicious code within the context of the site, code that can steal session tokens, carry out transactions, and cause all sorts of damage. It is possible to filter malicious user input to avoid XSS, but attackers have repeatedly bypassed filters, leading to an arms race of ever more clever filters and bypasses. CSP provides significant protection against XSS and other threats, but it is not a panacea, at least not until all browsers implement the standard.

To mitigate XSS, CSP enables site owners to restrict browser behavior, enforcing a white list of content sources that the browser will trust, making it extremely difficult for an attacker to get their malicious scripts to execute.

Configuring CSP

Site owners configure this white list of trusted sources through a CSP policy defined in an HTTP header. The CSP HTTP header value is a plain-text, semicolon delimited list of directives. Each directive includes the directive name followed by a space-delimited list of paths. The header applies the policy to an individual page. Think of the page as a security context that encompasses all content within it. Depending on need, a developer can craft custom policies per page or use the same policy on every page.

Content-Security-Policy: script-src mycdn.org/scripts/ myserver.org/scripts/

This simple policy tells the browser to only execute scripts downloaded from mycdn.org/scripts/ and myserver.org/scripts/, but not from any other sources. Even if an attacker managed to insert a script tag pointing to evilempire.com/maliciousscript.js, the browser would refuse to execute it.

CSP also supports several useful keywords. For example, adding the keyword ‘self’ to the script-src directive enables browsers to execute scripts from the site itself.
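
For example, this policy trusts the site's own origin in addition to the CDN path from the earlier example:

Content-Security-Policy: script-src 'self' mycdn.org/scripts/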

Eliminating Inline Scripts and eval()

Most importantly, when a script-src (or default-src) directive is present, the browser will not execute inline scripts, which prevents the vast majority of XSS attacks. The notion of inline here covers both script blocks and event handlers within HTML element attributes. While this restricts developers, it really just promotes the best practice of unobtrusive JavaScript: keeping a clean separation between HTML and JavaScript by moving all JavaScript into script files. Event handlers can easily be added in code rather than through HTML attributes.
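
For instance, rather than an onclick attribute in the markup, the handler can be attached from the external script file (the element id and handler are illustrative):

// In scripts/app.js, instead of onclick="save()" in the HTML:
document.getElementById('saveButton').addEventListener('click', function () {
  console.log('saving...');   // call your real save logic here
});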

While the standard provides a way to override this restriction by adding the ‘unsafe-inline’ keyword to the script-src policy, this really should be avoided as it effectively eliminates almost all of the value of CSP.

CSP version 2 offers an additional alternative for inline scripts. It supports inline script blocks carrying a nonce attribute that matches a nonce value in the script-src directive. Several such blocks can be added to a page. Again, this is less secure than avoiding inline scripts altogether.

Content-Security-Policy: script-src 'nonce-WFNQcIGJSOCnmt00THCK'

<script nonce="WFNQcIGJSOCnmt00THCK">
/* inline script that for some reason cannot be moved to script file */
</script>

CSP eliminates yet another vector for XSS attacks by preventing browsers from executing the eval function even when called within permitted content sources. The eval function executes strings, which poses a risk if the string contains any user input. Again, CSP does allow bypassing this protection by adding the ‘unsafe-eval’ keyword to the script-src directive, but this likewise is discouraged. (The limitation on eval applies as well to other JavaScript functions that can execute strings such as setTimeout, setInterval, and Function.)

The CSP Directives

CSP supports several other directives in addition to script-src that can further lock down a site to prevent potential attacks.

style-src Allowed sources for style sheets.
img-src Allowed sources for images.
connect-src Allowed paths for XHR and WebSocket requests.
font-src Allowed sources for fonts.
object-src Allowed sources for plugins such as Flash.
media-src Allowed sources for audio and video.
frame-src Allowed sources for loading frames. Deprecated in version 2 of the standard. Use child-src.
child-src Allowed sources for loading frames. New in version 2.
frame-ancestors The sites allowed to load the page as a frame. Applies to <frame>, <iframe>, <object>, <embed>, and <applet> tags. Useful to prevent clickjacking.
default-src Default values for all directives ending in -src. The default is overridden whenever a specific src directive is specified.
base-uri Limits which URLs can be used in a page’s <base> element.
form-action Allowed paths for action attribute of <form> tags.
plugin-types The types of plugins that the browser may load for the page.
sandbox Imposes iframe-like sandbox on the page, preventing popups, plugins, scripts, and forms.
upgrade-insecure-requests Instructs the browser to upgrade HTTP requests to HTTPS.
report-uri A URL to which the browser will post client-side policy violations.
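
Pulling a few of these directives together, a policy for a typical site might look like the following (the sources and report path are placeholders):

Content-Security-Policy: default-src 'self'; script-src 'self' mycdn.org/scripts/; img-src 'self' images.mycdn.org; frame-ancestors 'none'; report-uri /csp-report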

Meta Tag

In addition to specifying CSP policies via HTTP headers, the standard supports defining policy in meta tags as in the example below. However, this approach is less secure since an attacker who manages to alter page content might find a way to alter the meta tag to enable their own exploit. Moreover, the policy will not apply to any content preceding the meta tag.

<meta http-equiv="Content-Security-Policy" content="script-src mycdn.org">

Path Matching and Wildcards

Path matching rules are quite straightforward. A domain name without a path, such as mydomain.com, matches all files served from the host. A path ending in a forward slash, such as mydomain.com/scripts/, matches all files within that path and its sub paths. Paths that do not end with a slash, such as mydomain.com/scripts/script.js, match only the one specific file.

To make policy definitions more flexible, the CSP standard allows for wildcards in paths.

* Matches all content sources other than blob:, data: and filesystem:
*.mysite.com Matches any subdomain of mysite.com
*://example.com Matches any protocol scheme
example.com:* Matches any port (without the * the path by default matches the standard port of the protocol, meaning either 80 or 443)
https: Matches any domain but only with protocol scheme HTTPS

Defense in Depth

CSP should be considered an important new layer in a defense in depth strategy rather than a single solution for web security. XSS remains a threat for users of browsers that do not implement the standard. Moreover, you can never be sure that user input saved through one web application is not resurfaced through another one of your organization’s sites. Continue to apply XSS filters to all user input both when stored and displayed.

Implementation

CSP is easy to implement for new sites. Begin with the strictest policy and ease up only as needed.

For existing sites, particularly larger, more complex sites, gaining the full benefit of CSP might prove difficult, especially if there are inline script blocks, attribute-based event handlers, and resources pulled from many sources. CSP supports an additional HTTP header, Content-Security-Policy-Report-Only, that eases the pain of finding policy violations within your site. Add this header rather than the standard CSP header, and the browser will post each violation to the URL specified in the report-uri directive, enabling you to track and resolve them. (See https://cspbuilder.info and https://report-uri.io for CSP violation tracking tools.) For guidance, read about the experiences of Twitter and Dropbox.
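
A report-only policy looks just like an enforced one but is sent under the alternate header name, for example:

Content-Security-Policy-Report-Only: script-src 'self' mycdn.org/scripts/; report-uri /csp-report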

Supporting Tools and Libraries

Adding CSP headers to pages is straightforward, as all popular web servers and server-side web frameworks provide mechanisms for adding HTTP headers to responses. Here is a simple example in Asp.Net:

HttpContext.Response.AddHeader("Content-Security-Policy", "script-src 'self'");

In addition, there are libraries available focused on HTTP security-related headers that include support for CSP.

Asp.Net NWebsec https://www.nuget.org/packages/NWebsec
Django Django-CSP https://github.com/mozilla/django-csp
Java headlines https://github.com/sourceclear/headlines (multiple security headers)
Salvation https://github.com/shapesecurity/salvation (specific to CSP)
Node.js helmet https://www.npmjs.com/package/helmet
PHP php-csp https://github.com/Martijnc/php-csp
Ruby on Rails secureheaders https://github.com/twitter/secureheaders

Making it even easier, the site http://cspisawesome.com provides an online tool for generating CSP headers. And my colleagues at Shape Security have created an online validator at https://cspvalidator.org to assure you’ve gotten your policy correct. So what are we waiting for?

Resources:

Supported Browsers http://caniuse.com/#feat=contentsecuritypolicy

Presentation by Adam Barth, https://youtu.be/pocsv39pNXA

Reference Guide http://content-security-policy.com/

Mozilla Reference Guide https://developer.mozilla.org/en-US/docs/Web/Security/CSP/CSP_policy_directives

OWASP Summary https://www.owasp.org/index.php/Content_Security_Policy

Tutorial by Mike West http://www.html5rocks.com/en/tutorials/security/content-security-policy/

Dropbox’s Experience https://blogs.dropbox.com/tech/2015/09/on-csp-reporting-and-filtering/

Twitter’s Experience https://blog.twitter.com/2011/improving-browser-security-with-csp

Posted in HTTP Headers, Web Security

The RESTFulness of Single Page Applications

The frequent pairing of Single Page Applications and RESTful APIs goes well beyond the fact that both have become popular buzzwords. While the architectural styles of REST and Single Page Applications (SPAs) do not depend on each other in principle, it would be difficult to imagine either succeeding without the other in the development of web applications.

The reliance of SPAs on REST is perhaps the most immediately apparent. To retrieve and manipulate data, SPAs need to make a great many calls to a server using means other than the traditional pattern of page refreshes. SOAP web services have long been and remain an option, and, indeed, the JavaScript XMLHttpRequest object can return data in a wide variety of formats. However, crafting large, complex SOAP requests and parsing their XML-based results in JavaScript is both complex and error-prone. Building a useful JavaScript library for SOAP generally means making it specific to a single SOAP API. As a web developer, I tend to avoid making SOAP requests from JavaScript, instead making such calls from server-side code and sending the results to the browser through some other mechanism.

In contrast, making calls to RESTful APIs and parsing the JSON-formatted results is a pleasure. jQuery makes it all trivial. And while each SOAP API must be learned anew, RESTful APIs tend to be intuitive and predictable. As a practical matter, RESTful APIs have made the notion of an SPA feasible to a great many more development teams.

Interestingly, it is also true that SPAs make it feasible to build rich web-based applications within a RESTful architecture. While we tend to think of REST as a style of API, that is as an alternative to SOAP, REST is more than that. It is an architectural style defined by a set of six constraints, constraints intended to promote scalability and maintainability. If the architecture of a specific application adheres to these constraints, we may call it RESTful.

Let’s review these constraints:

1) Client-Server. A REST architecture demands a separation of concerns between UI and data storage. This improves scalability by simplifying the server role and enables components to evolve independently.

2) Stateless. Each request from client to server must contain all of the information necessary to process the request. The server may not store any session data on behalf of the client; rather, the client must store all session data.

3) Cache. Server responses must be labelled as cacheable or not cacheable.

4) Uniform Interface. Components must communicate through uniform interfaces to decouple implementations from the services they provide.

5) Layered System. Each layer in the system may not see beyond the layer to which it is immediately connected, providing for the possibility of many intermediate proxy servers without a dramatic rise in complexity.

6) Code-on-Demand. Clients may download and execute scripts from a server.

In the article Principled Design of the Modern Web Architecture, the creator of REST, Roy Fielding, and his coauthor, Richard Taylor, emphasize the importance of statelessness as one of the key constraints, explaining that it provides the following benefits:

  • Visibility improves because monitoring solutions do not need to look beyond a single request to understand its context.
  • Reliability improves as it is easier to recover from temporary server failures.
  • Scalability improves as servers may more quickly free resources and avoid the burden of managing session state.
  • Statelessness allows intermediary devices to understand requests in isolation, providing caching devices all of the information necessary to determine the reusability of a cached response.

This stateless constraint poses a challenge for application developers. Web applications typically require some form of session state: who the user is, where they are in the process, what their preferences are, what choices they have made. Because data held by a page's JavaScript is lost with each page refresh, server-side frameworks (ASP, ASP.Net, JSP, PHP, Ruby on Rails, Express.js for Node) manage session state on the server, generally by embedding a key with each request (via a query string parameter or cookie). The framework uses the key to retrieve session data (saved either in server memory, a database, or a distributed cache) and presents it for consumption by the developer's code through the framework's API. The stateless constraint of REST rules out this common pattern, which is used by a great many applications on the web today.

If REST rules out server-side session management and the browser loses session data with each page refresh, the options for RESTful web development narrow considerably. Yes, HTML5 provides local storage capabilities, but these are not yet universally available in all browsers. The more reliable option is to avoid page refreshes, which means building SPAs. As long as a single page remains loaded in the browser, a developer may store session data in JavaScript objects. The question then becomes how to manage session data within SPAs, an issue addressed by the emerging client-side frameworks, but that's a topic for another day.
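
Whatever the framework, the core idea is that session state lives in the browser and each request carries everything the server needs. A minimal sketch (the endpoint, token scheme, and use of jQuery are illustrative assumptions):

// All session state is kept in the page, not on the server.
var session = { token: null, preferences: {} };

function loadOrders() {
  // Each request carries its own context, so the server stores nothing between calls.
  return $.ajax({
    url: '/api/orders',
    headers: { 'Authorization': 'Bearer ' + session.token }
  });
}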

In short, REST and SPAs make each other feasible.

Posted in REST

The Many Ways of Escaping: Dust.js Filters

Dust.js is a powerful JavaScript templating framework, currently maintained by LinkedIn, that can function both in the browser and server-side. At the moment, I’m working on an introductory presentation on Dust.js for the Silicon Valley Code Camp. And as I was working my way through the documentation, I found the coverage of filters rather sparse, so I spent some time playing with them and decided to write up what I found.

Dust.js templates work by binding a JavaScript object to a template in order to render output. The most basic element within a template is a key, which is a tag such as {name}. When bound to a JavaScript object that has a property called name, Dust.js rendering will replace the tag within the template with the value of the name property.

But what if that value derives from untrusted user input, or contains HTML mark-up, or needs to be included within a URL? In those cases, Dust.js provides a technique known as filters to escape the value.

Filter syntax is simple, just add the pipe symbol and a letter representing the filter to the end of the key tag. For example, {value|h}.

Let's now walk through each filter, assuming in each case that we are passing the object specified below to the Dust.js render function:

var obj = {value: "<p><script>alert('xss attack')</script></p>"};

To keep things interesting, I've included both HTML markup and JavaScript in the string property called value.
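
If you'd like to follow along, the examples can be reproduced in Node.js with something like the snippet below (a sketch assuming the dustjs-linkedin package; renderSource compiles and renders a template in one step):

var dust = require('dustjs-linkedin');
var obj = { value: "<p><script>alert('xss attack')</script></p>" };

// Swap in {value|h}, {value|j}, {value|uc}, or {value|s} to try the other filters.
dust.renderSource("{value}", obj, function (err, out) {
  console.log(out);   // the escaped text output
});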

Default

If you write a key tag without specifying a filter, Dust.js applies the default filter. The default filter replaces the angle brackets of HTML tags with HTML entities for proper display in the browser. This has the effect of thwarting cross site scripting attacks, as the browser will render rather than execute the JavaScript. The default filter also replaces the quotation marks around JavaScript strings with HTML entities, again reducing the risk of XSS attacks.

Template: {value}

Text Output:

[image: default filter output]

Browser Display:

[image: default filter browser display]

In most cases, the default will be the best choice, but it is important to understand the other filtering options for the special cases in which they make sense.

HTML Escaping |h

The HTML escape filter renders content containing HTML markup such that it appears in the browser as escaped HTML. It essentially double-escapes the HTML: the ampersands in the default output are themselves replaced with their HTML entity equivalent (&amp;).

Template: {value|h}

Text Output:

[image: |h filter output]

Browser Display:

[image: |h filter browser display]

JavaScript Escaping |j

The JavaScript escaping filter renders content so that it can safely be used within a JavaScript string. The escaping prevents the output from breaking out of quoted strings and from prematurely closing the surrounding </script> tag. The result looks like the default filtering, except that the quotation marks are escaped with backslashes.

Template: {value|j}

Text Output:

[image: |j filter output]

Browser Display:

[image: |j filter browser display]

URL Escaping |uc, |u

URL escaping renders content usable within a URL by replacing all characters not allowed by the URL specification with % and their ASCII hexadecimal equivalent code.

The |u filter passes the content through the JavaScript method encodeURI(), and the |uc filter passes content through encodeURIComponent(). The method encodeURI() expects a full URL beginning with a scheme such as http:// and will not escape the characters that make up that URL structure. In contrast, encodeURIComponent() will escape all of the content.
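
The difference is easy to see in a browser console (the URL is arbitrary):

encodeURI("http://example.com/a page?q=1");
// "http://example.com/a%20page?q=1"  (scheme, slashes, and ? left intact)
encodeURIComponent("http://example.com/a page?q=1");
// "http%3A%2F%2Fexample.com%2Fa%20page%3Fq%3D1"  (everything escaped)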

Template: {value|uc}

Text Output:

[image: |uc filter output]

Suppress Escaping |s

The simplest filter to understand is suppression (|s), which turns off escaping. This allows even JavaScript code to execute as part of the page, a risky approach that should be used only with fully trusted content. Adding the output of the following tag to a page will result in an alert window.

Template: {value|s}

Text Output: <p><script>alert('xss attack')</script></p>

Browser Display:

[image: |s filter browser display]

Posted in Dust.js, JavaScript

My Node.js Misperceptions

After recently studying and playing with Node.js, I realized that I had somehow picked up a couple of misperceptions that were wildly off base:

  • Node.js is solely a platform for web development similar to one of the other popular server-side web development platforms, perhaps PHP or Ruby on Rails.
  • Node.js is innovative only in that it brings JavaScript to the server.

To clarify misperception one, while Node.js provides powerful support for web development, Node.js can be used to build any manner of application. Node.js applications may access file systems, networks, databases, and child processes. Just as with Python and Ruby scripts, you can run Node.js scripts from a command line and can pipe streams back and forth to other applications in the Unix style.

My second misperception turned out to be doubly wrong.

First, Node.js does not break new ground in bringing JavaScript to the server. Netscape started on the Rhino project back in 1997, a JavaScript engine implemented in Java and capable of running outside of the browser. Even Microsoft’s ASP web framework supported server-side JavaScript, a fact that I had long ago forgotten. And to top it off, Node.js does not actually execute JavaScript, for that it relies on Google’s V8 JavaScript engine.

Second, Node.js is indeed very innovative, but its innovation lies not in its server-side implementation of JavaScript, but in how it handles concurrency—the situation wherein new requests arrive before earlier requests have completed.

Of course, this is an old problem, and all modern operating systems offer a built-in solution—multi-threading. A multi-threaded server makes a system call to generate a new thread for each request that it receives. That thread handles the request to completion while other threads handle other requests. The OS manages all of these threads, sharing out slices of CPU cycles to each as it is ready.

Obviously, multi-threading still works as it has for decades, but it does have its problems when it comes to massively scalable web applications. Managing threads is not free. Each thread has its own register values, program counter, and call stack. As the number of threads increases, the OS spends more and more CPU cycles managing the threads rather than running them. And with web applications, threads spend the vast majority of their time waiting for IO, whether that be calls to files, databases, or network services.

Node.js takes a radically different approach, avoiding the need for OS threads by simply refusing to wait. Rather than making blocking IO calls, wherein the thread stalls waiting for the call to return, almost all IO calls in Node.js are asynchronous, wherein the thread continues without waiting for the call to return. In order to handle the returned data, code in Node.js passes callback functions to each asynchronous IO call. An event loop implemented within Node.js keeps track of these IO requests and calls the callback when the IO becomes available. Managing the event loop costs less than managing multiple threads, as it only requires tracking events and callbacks rather than entire call stacks.
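
A minimal example of this style, reading a file without blocking the single thread (the file name is arbitrary):

var fs = require('fs');

// Non-blocking: readFile returns immediately, and the callback runs later,
// once the event loop sees that the data is ready.
fs.readFile('access.log', 'utf8', function (err, data) {
  if (err) return console.error(err);
  console.log('read %d characters', data.length);
});

console.log('this line runs before the file has been read');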

Each Node.js process is single threaded, but it can be scaled to multiple processes and multiple machines just as traditional multi-threaded servers.

One might also argue that Node.js as a platform for server-side web development is innovative in its lack of abstraction; the Node.js programmer handles http requests by forming http responses (what the http protocol is all about), rather than creating pages (PHP, JSP, Asp.Net) or writing models, views, and controllers (Ruby on Rails). Personally, I have a preference for lighter frameworks, or at least frameworks that do not force me into certain patterns, so I certainly find Node.js appealing despite my initial misperceptions.
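
To illustrate that lack of abstraction, here is the canonical hello-world server, which deals directly with the request and response objects (the port number is arbitrary):

var http = require('http');

http.createServer(function (req, res) {
  // No pages, controllers, or views: just form an HTTP response.
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js\n');
}).listen(3000);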

Posted in Node.js

JavaScript Templating as a Cross-Stack Presentation/View Layer

At last night's Front End Developers United meetup in Mountain View, Jeff Harrell of PayPal and Veena Basavaraj of LinkedIn spoke about the use of Dust.js within their organizations. LinkedIn and PayPal chose Dust.js from among the many JavaScript templating frameworks for reasons that Veena describes in her post Client-side Templating Throwdown. But the really interesting point that both speakers made was not so much about Dust.js itself as about how standardizing on a JavaScript templating framework across different stacks accelerated development cycles.

The presentation layer in web development generally relies on some sort of templating, which mixes HTML markup with coding instructions. Traditionally, this is done through a server-side technology such as PHP, JSP, or ASP.Net. More modern server-side MVC frameworks such as Ruby on Rails use templating in the view layer, for example ERB or Haml.

With the rise of single-page applications and JavaScript MV* frameworks, JavaScript templating frameworks are growing in popularity. When using AJAX without an MV* framework, one would make an AJAX call to retrieve data from the server in JSON format, insert it into a template, and display the resulting HTML. When using an MV* framework (and the observer pattern), updating a model automatically causes associated views to render using the updated model and a template. One of the most popular MV* frameworks, Backbone.js, uses Underscore.js templating by default but could work just as well with others. (Veena’s post referenced above lists many of the available JavaScript templating frameworks.)

In large organizations that have been around for some time, there are bound to be multiple platforms in use, some representing a future ideal state, others representing legacy code. PayPal runs on C++ and Java and is moving increasingly toward Node.js. And, of course, most web applications use JavaScript in the browser to provide user interactivity. What this means is that an organization could end up using several templating frameworks. Similar content gets rendered using different templates, a situation that risks inconsistency in the user experience.

One possible answer to this dilemma would be to standardize on a single stack, but this is not possible for a number of reasons. Neither PayPal nor LinkedIn can run all of their code in the browser, nor can they run none of it there. For security reasons, not all code can run as JavaScript in the browser; for user experience reasons, some code must run in the browser. Moreover, migrating a complex code base from one platform to another is a major undertaking.

Instead, PayPal and LinkedIn chose to use Dust.js as a common templating framework across all stacks. Directly using Dust.js in the browser or within Node.js poses no problem as it’s a JavaScript framework. From Java code, Dust.js can be accessed through Rhino, a Java-based implementation of JavaScript. And C++ code can access Dust.js through Google’s V8 JavaScript engine. Both Rhino and V8 can be run on the server. What this means is that the entire code base can make use of the same templates, assuring a more consistent user experience and more rapid development cycles.

Since only JavaScript runs in the browser, only a JavaScript-based templating framework could have achieved this unity.

Thanks to Veena and Jeff for the great presentations and to Seth McLaughlin of LinkedIn for organizing the event.

Posted in Dust.js, JavaScript

Will NoSQL Equal No DBA in Enterprise IT?

During the recent CouchConf in San Francisco, Frank Weigel, a Couchbase product manager, touted the benefits of schemaless databases: without a schema, developers may add fields without going through a DBA. The audience of developers seemed pleased, but I wondered what a DBA might think.

Based on my experience, the roles of DBAs in enterprise IT vary. Some engage in database development. Others focus exclusively on operations. But DBAs generally own production databases; all production schema changes go through them. This control would go away under NoSQL. In the schemaless world of NoSQL, developers have more power and more responsibility. Through their code, developers control the schema, access rights, and data integrity.

What is left for DBAs? There is always operations: monitoring performance and adjusting resources as needed. But these are general system administration tasks, not necessarily requiring experts trained in relational databases. So will NoSQL put DBAs out of work? Will the DBAs, with their cubicle walls covered in Oracle certifications, stand up against this invader?

Let’s not get ahead of ourselves. I’ve not seen any evidence to suggest that NoSQL is taking hold in enterprise IT. With the exception of large, customer-facing systems, enterprises do not need the scalability promised by NoSQL. And many key enterprise systems require transactions that span records, a feature lacking in NoSQL systems. While document-oriented and graph databases might fit well for certain use cases, the value proposition for NoSQL in the enterprise has yet to become compelling.

And despite the disadvantages of a fixed schema for developers, the fixed schemas of relational systems have a larger value within enterprises. Ideally, these schemas define in one location the rules of data integrity, which makes it easier to audit all changes.

While I could foresee conflicts between developers and DBAs over NoSQL in enterprises, that does not need to be the case. Most enterprise systems will continue to run on relational databases. The DBA role of managing these systems will not go away anytime soon. If enterprises do see any value in NoSQL for certain use cases, they should define policies around which sorts of applications might use this new technology. NoSQL, as many have commented, may well stand for Not Only SQL, in which case it can coexist with DBAs within the confines of corporate policies and procedures.

Posted in NoSQL