I went through almost the same analysis with a former coworker. He was adding a new API endpoint and wanted to pass a complex JSON object in the query string instead of separate parameters.
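
To make the difference concrete, here is what the two styles might look like for a hypothetical `/users` endpoint (the parameter names and JSON shape are made up for illustration):

```
# Conventional, separate query parameters
GET /users?name=bob&status=active

# One complex JSON object packed into a single query parameter
# (shown unencoded for readability; it would be URL-encoded on the wire)
GET /users?filters={"name":{"op":"like","value":"bob"},"status":{"op":"eq","value":"active"}}
```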

Like yourself, I had reservations.

Exactly what the pros and cons are will depend on your specific product needs, architecture, team structure, and other factors.

In our case, I found the tradeoffs were simply:

Pro:

  • Saved half a line of code in the API implementation

Cons:

  • Swagger (our API documentation tool) would have no intuitive way to document the structure of this API. API documentation, accessibility, and discoverability were important to us.
  • Inconsistent with the rest of the world. Adhering to common conventions and expectations simplifies things and helps avoid mistakes.

That said, I don't think this pattern is necessarily bad. In certain cases, like building up a dynamic set of search filters, it might make sense. Your use case might be a good candidate for this.

I don't totally agree with the pushback against GET that you have received in other answers. GET is the semantically correct HTTP verb because you are getting data. You're not mutating any state. It's idempotent and cacheable. POST is not.

GET requests can accept a body, just like POST requests, and POST requests can accept a query string. There is a common misconception that GET means query string and POST means body; in fact, the two are orthogonal.
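
Purely as an illustration of that orthogonality, both of these are syntactically valid HTTP messages (`api.example.com` is a placeholder host):

```
# GET with a request body
GET /search HTTP/1.1
Host: api.example.com
Content-Type: application/json
Content-Length: 23

{"name":{"like":"bob"}}

# POST with a query string and no body
POST /search?sort=asc HTTP/1.1
Host: api.example.com
Content-Length: 0
```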

Factors I'd consider in your situation:

  • Documentability - If this is important to your organization and/or product
  • Tooling - Most HTTP routing frameworks have middleware to automatically parse query string parameters and expose them as a map. For example, in a Node.js Express app, I can check req.query.filters to get the filters. You'd have to do your own parsing if you go the JSON route, but that's not hard; it's a simple JSON.parse call (see the parsing-and-validation sketch after this list).
  • Validation - There are modules that can automatically validate a request against a schema you supply. Moving your inputs into a single JSON object may create a black box, forcing you to validate the inputs yourself (and we all know that inadequate input validation is one of the leading causes of security vulnerabilities)
  • Complexity - Do you really need to support infinite filters or even a dynamic list of filters? Do the operators need to be configurable like that? This JSON object design is highly extensible. If there are only 4 things to filter on and they always work the same way, hard coding will save you a ton of time.
  • Testing overhead - Following from the complexity point above, every combination of criteria needs to be tested. If your API allows any operator across any field, you need to test each of those cases. If your clients only use the API in a single way, you're stuck supporting a bunch of unused use cases. For example, if the frontend always performs a wildcard search, testing the = case for name is wasted effort because it's never really used, but your API supports it, so it had better work.
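
To ground the tooling and validation points above, here is a minimal sketch of the JSON route in an Express app. The endpoint name, field whitelist, and operator set are all hypothetical; the point is that once the inputs hide inside one opaque string, the parsing and every validation check are on you:

```js
const express = require('express');
const app = express();

// Hypothetical whitelists -- adjust to your own API.
const ALLOWED_FIELDS = new Set(['name', 'status', 'createdAt']);
const ALLOWED_OPS = new Set(['eq', 'like', 'gt', 'lt']);

app.get('/items', (req, res) => {
  // Express exposes the raw query string value; it won't parse JSON for you.
  let filters;
  try {
    filters = JSON.parse(req.query.filters || '{}');
  } catch (err) {
    return res.status(400).json({ error: 'filters must be valid JSON' });
  }
  if (typeof filters !== 'object' || filters === null) {
    return res.status(400).json({ error: 'filters must be a JSON object' });
  }

  // Hand-rolled validation: schema-based middleware can't see inside
  // this opaque blob, so every check happens here.
  for (const [field, spec] of Object.entries(filters)) {
    if (!ALLOWED_FIELDS.has(field) || !spec || !ALLOWED_OPS.has(spec.op)) {
      return res.status(400).json({ error: `invalid filter: ${field}` });
    }
  }

  res.json({ applied: filters }); // ...run the real query here
});

app.listen(3000);
```

With separate parameters, most of this boilerplate disappears: the framework parses the query string for you, and an off-the-shelf validator can enforce the schema.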

Despite all my concerns with this approach, Elasticsearch does the JSON pattern and it works quite well for them. But Elasticsearch is a search engine. The set of filters being passed in, operators, etc. need to be dynamic because the JSON structure is actually meant to be a search query. Elasticsearch supports any sort of user-defined schema, so they expose a general query language as JSON.
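
For reference, an Elasticsearch search body looks something like this (an illustrative query against a made-up index, using the standard bool/filter query DSL):

```
GET /products/_search
{
  "query": {
    "bool": {
      "filter": [
        { "term":  { "status": "active" } },
        { "range": { "price": { "lte": 100 } } }
      ]
    }
  }
}
```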

If you have a few inputs on a web page and they map directly to known SQL predicates, the JSON pattern may be overkill.
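
For contrast, the hard-coded version of that is short and easy to validate. This sketch reuses the Express setup from the earlier example; the column names are made up, and db.query stands in for whatever database driver you actually use:

```js
// Each known input maps directly to one SQL predicate, parameterized
// to avoid injection. Unknown inputs are simply ignored.
app.get('/items', async (req, res) => {
  const clauses = [];
  const params = [];
  if (req.query.name) {
    clauses.push('name LIKE ?');
    params.push(`%${req.query.name}%`);
  }
  if (req.query.status) {
    clauses.push('status = ?');
    params.push(req.query.status);
  }
  const where = clauses.length ? ` WHERE ${clauses.join(' AND ')}` : '';
  const rows = await db.query(`SELECT * FROM items${where}`, params);
  res.json(rows);
});
```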

This is why it really matters what your product actually is.

Solve for the problem you have, not a problem that you may have some day.