
Security - GET and POST

In today's post we will talk about GET and POST from a security perspective. We will try to identify when we should use POST instead of GET, and when it makes no difference which one we use.

Why do I see this topic as so important?
At different security reviews and penetration tests I keep seeing recommendations where GET is discouraged.

GET and POST overview
The main difference between GET and POST is how parameters are sent. With GET, all parameters are sent in the query string and are directly visible. In contrast, with POST the parameters can be added to the body of the message, where they are not directly visible.
GET:
GET /playground?name=Tom&age=20 HTTP/1.1
Host: foo.com

POST:
POST /playground HTTP/1.1
Host: foo.com

name=Tom&age=20
As we can see in the example above, with POST the parameters can be found in the body of the request, not in the query string.
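To make this concrete, here is a minimal sketch in Python (standard library only) that builds the same two requests; foo.com/playground is just the illustrative endpoint from the example above:

from urllib import parse, request

params = {"name": "Tom", "age": 20}

# GET: the parameters become part of the URL itself,
# so they show up in logs, browser history and proxies.
get_req = request.Request("http://foo.com/playground?" + parse.urlencode(params))

# POST: the same parameters travel in the request body,
# while the URL stays /playground.
post_req = request.Request(
    "http://foo.com/playground",
    data=parse.urlencode(params).encode(),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

print(get_req.full_url)   # http://foo.com/playground?name=Tom&age=20
print(post_req.full_url)  # http://foo.com/playground
print(post_req.data)      # b'name=Tom&age=20'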
If we take a look at w3schools, we will notice that the main differences between GET and POST are:
Feature                      GET   POST
Can be cached                Yes   No
Remains in browser history   Yes   No
Can be bookmarked            Yes   No
Has length restrictions      Yes   No



It is pretty clear from the table above that GET works well when we need traceability, bookmarking and navigation features, especially when we want search engine optimization. POST is great for scenarios where we need to submit data, especially because a browser will not automatically resubmit the same POST request without asking the user to confirm it one more time.

GET and POST over HTTPS
Over HTTPS, both kinds of requests are secure as long as the tunnel between the two endpoints is not compromised. Even though with GET the arguments are sent in the query string, the request itself is encrypted. The only thing that can be seen from the outside is the request endpoint (the IP address, plus the host name via DNS/SNI). Everything else is encrypted.
Does this mean that GET is secure? No. It means that over a secure channel like HTTPS, it makes almost no difference whether we use GET or POST.

Proxy and 3rd party listeners 
All traffic over GET (and HTTP) can be logged and cached by any listener in between. Even if we don't expose sensitive information in query parameters, a system or a person could still learn how our system works and where its weak points are. A listener can run in discovery mode and identify which endpoints are exposed, what kind of features they offer and what kind of values each endpoint accepts.

Web Accelerators
In general, a web accelerator will follow and prefetch all GET links by default. If we expose an insert or delete command over GET, we can end up with behavior that we really don't want (see the sketch below).
For POST, a web accelerator needs to be prefetched with data, which does not happen automatically.
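As a sketch of the anti-pattern (the /delete endpoint and the in-memory user store below are hypothetical), here is a minimal Python server that exposes a state-changing action over GET, which any prefetcher or crawler could trigger just by following the link:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

users = {"Tom": 20}  # hypothetical in-memory "database"

class UnsafeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        # Anti-pattern: GET /delete?name=Tom mutates state.
        # A web accelerator that prefetches this link deletes the user
        # without the user ever clicking anything.
        if url.path == "/delete":
            name = parse_qs(url.query).get("name", [""])[0]
            users.pop(name, None)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(users).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), UnsafeHandler).serve_forever()

Moving the delete operation behind POST (or DELETE) means an accelerator cannot trigger it by simply fetching a URL.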

Cache
Besides browsers, there are other systems between clients and servers that can cache data. The best example is a reverse proxy, which can cache all GET responses and serve them from its own cache.
If we don't want data to be cached, we could argue that switching those requests to POST is the better solution, but it is simpler to mark the GET response as non-cacheable.
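As a minimal sketch of that simpler option (again Python's standard http.server; the endpoint and payload are illustrative), the server can mark a GET response as non-cacheable so that browsers and reverse proxies will not store it:

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"name": "Tom", "age": 20}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Tell browsers and intermediaries (e.g. a reverse proxy)
        # not to store or reuse this response.
        self.send_header("Cache-Control", "no-store, no-cache, must-revalidate")
        self.send_header("Pragma", "no-cache")  # for legacy HTTP/1.0 caches
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), NoCacheHandler).serve_forever()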

Browser history over HTTPS
Even if we are using HTTPS, all requests sent by a browser will appear in the browser history. It is important to know that these requests (URL and query parameters) are visible only in the browser. From outside the system, all content is encrypted and the query parameters cannot be sniffed.

Secure over HTTP
Over HTTP, both request types are the same from a security perspective. All the content is in clear text and can be sniffed by anybody. Over HTTPS, all the content above the TCP/IP level is encrypted. The good news is that more and more browsers warn users before submitting form data (POST) over plain HTTP.

Alter Requests
It is true that GET requests are easier to alter, because we only need to change the query parameters. But altering a POST request is not complicated either, if we take into account that nowadays almost all browsers have developer tools built in.
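To underline how low the bar is, here is a minimal Python sketch (standard library only, reusing the illustrative foo.com/playground endpoint) that crafts a POST request with a tampered parameter, no browser involved at all:

from urllib import parse, request

# Same fields as the example above, but with a tampered value for age.
data = parse.urlencode({"name": "Tom", "age": 200}).encode()
req = request.Request(
    "http://foo.com/playground",
    data=data,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

# The fact that the parameters sit in the body does not stop a client
# from putting arbitrary values there; sending it is a single call.
print(req.data)  # b'name=Tom&age=200' - the tampered body, ready to send
# response = request.urlopen(req)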

Malicious Links
Malicious links are usually sent as GET requests. It is pretty hard to send a link in an email that will trigger a POST to a specific site.

Conclusion
Yes, it is true that with GET more information is visible (including the endpoint), but the discussion is not always only about security. Let's imagine that we have a proxy between the browser and the server. Even if we do a POST and the parameters are in the body, nothing stops the proxy from reading the content of the body. If the content is encrypted over HTTPS, then the proxy will not be able to access this information, regardless of whether we use GET or POST.
When you don't know whether to use GET or POST, ask yourself if the information being sent can be shared with others and whether the request alters data. If it can be shared and does not alter data, then GET could be a solution for you. Otherwise, POST is your best friend.

POST is more secure if we think about security from the perspective of the computer where the request originates, but over the wire there is no difference.

Comments

  1. A useful list.

    Anyway, the first criterion when choosing between GET and POST is their original intended use:
    - GET - only data retrieval, no side effects
    - POST - add new resource, or (when PUT can't be used), modify an existing resource
    If these are followed, many issues can be avoided, starting from trivial CSRF attacks.

    1. Yes, this is true.
      But during a security audit these things are very important. Auditors will prefer POST over GET all the time, even if it is not in line with REST conventions.


