URLs and query parameters aren’t secure. They should never contain sensitive or important information (passwords, static shared secrets, private data, and so on). Putting it there is asking for trouble, as we at FullContact have discovered first-hand.
Recently, a security researcher came to us with 75 of our customers’ API keys, and noted that they could get many more with a vulnerability they had found.
Security is a very high priority for us. We’re a contact management company, and we’re responsible for people’s private contact information. So naturally, this incident put several of our engineers on high alert. If this researcher could get access to 75 API keys, did they have deep access to one of our systems?
The Good News: The researcher never had access to our systems; in fact, there was no direct vulnerability in our servers or code.
The Bad News: The vulnerability did exist, and was through a vector very few people think about.
The central cause is that the FullContact Person API was designed to be simple to get started with: no coding required, just paste a URL into your browser and start looking at our data. This is made possible by allowing an API key to be passed in as a query parameter as part of the URL. For example:
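An illustrative request of that shape (the email address and key below are placeholders, not real values):

```
https://api.fullcontact.com/v2/person.json?email=jane@example.com&apiKey=your-apikey-here
```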
Unfortunately, putting authentication and secrets in URLs and HTTP query parameters comes with a surprising and subtle security cost.
Some web analytics companies aggregate and record traffic across the web, and then sell analytics based on that traffic. One of their major sources of this data is browser extensions that have access to their users’ internet activity. Ostensibly, these extensions have access to browsing history and open tabs so that they can do something useful with them. But some of them also sell this data to web analytics companies.
Although these analytics companies claim to only display anonymized data, many have premium offerings that allow customers to see individual popular requests made to a domain. To be fair, this usually doesn’t include clearly identifying information such as the IP address, but it will include the full URL of the request, including any sensitive query parameters like an API key.
This security researcher acquired individual requests for our API site using a premium data service offered by a web analytics company. Of course, some of those requests included a customer’s API key as a query parameter.
It’s not really a surprise that this web analytics company got hold of that data. Some of our customers (or their developers) were most likely testing their API key in their browser before integrating our API into their stack. What they didn’t know was that one of their browser extensions (or some similar source) was spying on this activity and sending off every GET request they made to web analytics companies.
This doesn’t just apply to secrets like API keys in query parameters. We also found URLs for internal company systems and admin pages (of course, inaccessible to the open internet). If an employee at a company has a snooping plugin installed, they are sending off the URLs of internal company pages to these analytics companies. A hacker with access to premium data offerings effectively has network mapping on steroids: they can see a host of internal URLs that may be reachable via the open internet, or after gaining access to a server on the edge of a network.
This is not strictly the fault of our code, but it is the fault of our design. This vulnerability would be costly to exploit (it requires premium access to web analytics services), and its scope is limited to the subset of users with snooping browser extensions, but we want to be as careful as possible.
We are currently switching out the keys of affected customers and recommending that all authentication to our API happen through HTTP headers (which are not included in URLs and thus aren’t as vulnerable). Future generations of our APIs will not allow authentication using query parameters. This particular web analytics company has also stopped serving out data for our API domain.
What you SHOULDN’T do
The reality is that URLs and query parameters aren’t secure. They should never contain sensitive or important information (passwords, static shared secrets, private data, and so on). It is asking for trouble, especially when browser spyware gets involved.
Here’s why query parameters are unsafe:
- They get saved in browser history. This means malicious code could sweep through a user’s browsing history and extract passwords, tokens, etc. Other users of the same browser/computer could also view this information.
- They’re probably saved in your server’s logs and memory. Getting access to your customer database might be hard, but vulnerabilities in your web servers that might allow viewing logs/memory are much more widespread (for instance, the infamous Heartbleed bug). Servers will often log/save query parameters for requests for a long time, but headers are much less widely stored. It’s safer to have your servers touch and record sensitive information like this as little as possible.
- Users might post the link, not realizing what they’ve shared. We’ve had plenty of cases at FullContact where a customer asks us to switch out their API key because they’ve accidentally publicly shared a link to our Person API with their API key attached. If authentication never happens through query parameters — and thus never appears in the address bar of the browser — this mistake is impossible to make.
- This information will be exposed in the “Referer” header. Since we’re an API, this isn’t really an issue for us. But consider a webpage like “mywebapp.com/login?username=bart&password=abc123”. If the browser needs to make a request to another domain to render this page (for instance, to download an image), it will include the header “Referer: mywebapp.com/login?username=bart&password=abc123”. If the requested URL isn’t in your domain, who knows what that other website could be doing with that header?
- They’re available to browser extensions. This is the whole reason for this blog post — browser extensions can see query parameters from any site (if the user gives them permission) and use them however they like. Headers, cookies, POST bodies, etc. are only available to browser extensions on certain domains that the user explicitly allows.
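A small sketch of the underlying problem: a secret placed in the query string is part of the URL itself, so anything that stores the URL (history, logs, a “Referer” header, an analytics feed) stores the secret too, while a header travels separately. The endpoint, parameter, and header names here are illustrative, not real FullContact values.

```python
from urllib.parse import parse_qs, urlsplit

# Authenticating via a query parameter: the secret is baked into the URL.
url = "https://api.example.com/person.json?email=jane@example.com&apiKey=s3cret-key"

# Anything that stores this URL has stored the secret too --
# it can be parsed right back out.
params = parse_qs(urlsplit(url).query)
print(params["apiKey"])  # ['s3cret-key']

# Authenticating via a header instead: the URL itself stays clean.
url = "https://api.example.com/person.json?email=jane@example.com"
headers = {"X-Example-APIKey": "s3cret-key"}
assert "s3cret-key" not in url
```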
What you SHOULD do
Use an alternate method for authentication over HTTP. Using HTTP headers is probably the most standard way. For instance, the FullContact Person API has allowed supplying HTTP headers for a long time now:
“X-FullContact-APIKey: your-apikey-here”. Many other APIs allow the same with Basic Auth headers.
For maximum security for enterprise clients, mutual authentication is probably the right approach. Mutual authentication is an authentication scheme that guarantees that the client is talking to a server it knows, the server is talking to a client it knows, and that all their data will be completely encrypted. It adds an additional step to a TLS handshake in which a client also provides a certificate that the server verifies as trusted before allowing the connection. At FullContact, we fully support mutual authentication for customers that desire such security. Once configured, a client key cannot be used unless the originating server presents a valid certificate which is provided by FullContact.
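A minimal client-side sketch of that extra handshake step, using Python’s standard ssl module; the certificate and key paths are hypothetical stand-ins for the credentials a provider like FullContact would issue.

```python
import os
import ssl

# Ordinary one-way TLS: the client verifies the server's certificate.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)

# Mutual TLS adds the reverse check: the client also presents a certificate
# for the server to verify during the handshake. "client.crt"/"client.key"
# are hypothetical paths to a certificate and key issued to this client.
if os.path.exists("client.crt"):
    ctx.load_cert_chain(certfile="client.crt", keyfile="client.key")
```

On the server side, the mirror-image step is configuring the listener to require and verify client certificates (in Python, setting `verify_mode` to `ssl.CERT_REQUIRED`), so connections without a trusted client certificate are refused outright.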
The HTTP security model has a lot to say about how to protect many forms of sensitive data, but query parameters are not among the channels it protects. Since they’re typically included as part of the URL (and appear in browsers’ address bars), they’re liable to be recorded, cached, and exposed in ways most other web traffic is not. When you have to deliver sensitive information over HTTP, put it in headers, POST bodies, and the like, or you’ll eventually get burned.