Don't overlook URL fetching agents when fixing Heartbleed flaw on servers, researchers say

Lucian Constantin | April 14, 2014
TLS clients are also vulnerable to Heartbleed memory-leaking attacks, including server-side applications that fetch user-supplied URLs

Website operators should assess their entire Web infrastructure when patching the critical Heartbleed flaw in OpenSSL; otherwise they risk leaving important components open to remote attacks even after fixing the problem on their public-facing servers.

The development team at Meldium, a cloud account management and monitoring service, warned that URL-fetching agents, components that many websites depend on and that make outbound TLS (Transport Layer Security) connections, can also be attacked through the Heartbleed vulnerability to extract potentially sensitive data from their memory space. That's because the flaw affects not just TLS servers, but also TLS clients that use vulnerable versions of OpenSSL.

Most of the attention has gone to the primary Heartbleed attack scenario, in which a malicious client attacks a TLS-enabled server to extract passwords, certificate private keys, cookies and other sensitive information. However, the vulnerability also enables malicious servers to attack connecting clients and steal information from their memory. The Meldium team refers to this as a "reverse Heartbleed" attack.
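The underlying bug is simple enough to show in a few lines. Below is a minimal sketch in Python of the malformed heartbeat record used in the public Heartbleed proofs of concept; the function name and the claimed length of 16,384 bytes are illustrative choices, not Meldium's code. In a reverse Heartbleed attack, it is the malicious server that sends such a record to the connecting client.

    import struct

    def malicious_heartbeat(claimed_len=0x4000):
        # TLS record header: content type 0x18 (heartbeat), TLS 1.1,
        # record length 3, meaning only the heartbeat header follows.
        record = struct.pack('>BHH', 0x18, 0x0302, 3)
        # Heartbeat message: type 0x01 (request) plus a claimed payload
        # length that the message does not actually carry. A vulnerable
        # OpenSSL peer trusts the claimed length and echoes back that
        # many bytes copied from adjacent heap memory.
        record += struct.pack('>BH', 0x01, claimed_len)
        return record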

TLS clients can be obvious things like browsers or other desktop and mobile applications, but they can also be any server-side application or script that establishes connections to HTTPS URLs. If attackers can steer such agent-type applications into fetching URLs from servers they control, they can launch reverse Heartbleed attacks against them, as the sketch below illustrates.
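Consider a hypothetical server-side endpoint of the kind the researchers describe; the names fetch_preview and user_url are invented for illustration. The crucial detail is that the process fetching the URL acts as the TLS client in the handshake, and Python's standard library, like many runtimes, does its TLS through OpenSSL.

    import urllib.request

    def fetch_preview(user_url):
        # If user_url points at an attacker-controlled HTTPS server, the
        # request below makes this process the client side of the TLS
        # connection. Linked against a vulnerable OpenSSL, the attacker's
        # server can answer with malicious heartbeats and read chunks of
        # this process's memory: cookies, API keys, other users' data.
        with urllib.request.urlopen(user_url, timeout=5) as response:
            return response.read(4096)  # e.g. just enough HTML for a preview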

In a complex Web infrastructure, URL-fetching agents may run on internal servers that sit behind the usual security perimeter and are treated as a lower priority by administrators during patch deployment. The problem is that if those applications access URLs supplied by users, they can be attacked remotely, regardless of where they run inside the infrastructure.

"If you can direct some remote application to fetch a URL on your behalf, then you could theoretically attack that application," the Meldium team said in a blog post Thursday. "The web is full of applications that accept URLs and do something with them."

Some examples include agents that parse URLs to generate previews or image thumbnails; scripts that allow users to upload files from remote URLs; Web spiders like Googlebot that index pages for search; API (application programming interface) agents that facilitate interaction and interoperability between different services; code implementing identity federation protocols like OpenID and WebFinger; or webhooks and callback scripts that ping user-specified URLs when certain events happen.

"The surface of exposed clients is potentially very broad -- any code in an application that makes outbound HTTP requests must be checked against reverse Heartbleed attacks," the Meldium team said.

Depending on what functionality the URL-fetching agents are designed to support, their memory might contain sensitive information.
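A practical first step in that audit is identifying which OpenSSL each runtime's TLS client is actually linked against. In Python, for instance, the standard ssl module reports it directly; OpenSSL 1.0.1 through 1.0.1f are the vulnerable releases, and 1.0.1g carries the fix.

    import ssl

    # Prints the OpenSSL library this interpreter's TLS client uses,
    # e.g. "OpenSSL 1.0.1f 6 Jan 2014". Versions 1.0.1 through 1.0.1f
    # are vulnerable to Heartbleed; 1.0.1g and later are fixed.
    print(ssl.OPENSSL_VERSION)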

The Meldium team created a reverse Heartbleed exploit and used it to test various sites that had already patched the vulnerability on their perimeter servers.

 
