
I am creating an application that allows users to construct complex SELECT statements. The generated SQL cannot be trusted and is totally arbitrary.

I need a way to execute the untrusted SQL in relative safety. My plan is to create a database user who only has SELECT privileges on the relevant schemas and tables. The untrusted SQL would be executed as that user.
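A minimal sketch of that locked-down role might look like the following (the role name, schema, and password are placeholders):

```sql
-- Hypothetical SELECT-only role; all names here are placeholders.
CREATE ROLE report_reader LOGIN PASSWORD 'change-me';

-- Start the role from nothing.
REVOKE ALL ON SCHEMA public FROM report_reader;

-- Allow it to see the schema and read the existing tables.
GRANT USAGE ON SCHEMA public TO report_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO report_reader;

-- Cover tables created later as well (run as the table owner).
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO report_reader;
```

Note that `GRANT SELECT ON ALL TABLES` only covers tables that exist at grant time; the `ALTER DEFAULT PRIVILEGES` line is what keeps future tables covered.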

What could possibly go wrong with that? :)

Assuming Postgres itself has no critical vulnerabilities, the user could do a bunch of cross joins and overload the database. That could be mitigated with a session timeout.
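For illustration, here is a sketch of the kind of runaway query in question, along with a per-session cap as one possible mitigation (the table name is hypothetical):

```sql
-- Self cross joins make row counts explode multiplicatively:
-- 10,000 rows cross-joined three ways is 10^12 intermediate rows.
SELECT count(*)
FROM big_table a
CROSS JOIN big_table b
CROSS JOIN big_table c;

-- One mitigation: cap each statement's runtime for this session.
SET statement_timeout = '5s';
```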

I feel like there is a lot more that could go wrong, but I'm having trouble coming up with a list.

EDIT:

Based on the comments/answers so far, I should note that the number of people using this tool at any given time will be very near 0.

  • Send the SQL and a username to a stored proc, and set up a rollback in the transaction? Commented Aug 30, 2013 at 16:02
  • Postgres doesn't seem to have the ability to limit the resources a particular query uses; if that's correct, it's a poor choice for this type of application: wiki.postgresql.org/wiki/Priorities ("PostgreSQL has no facilities to limit what resources a particular user, query, or database consumes.") Commented Aug 30, 2013 at 16:04
  • You can set up a read-only replica for untrusted queries. This way they won't affect the main server in any way. Commented Aug 30, 2013 at 17:43

4 Answers


SELECT queries cannot change anything in the database. The lack of DBA privileges guarantees that global settings cannot be changed. So, overload is truly the only concern.

Overload can be the result of overly complex queries or of too many simple queries.

  • Overly complex queries can be ruled out by setting statement_timeout in postgresql.conf.

  • Receiving floods of simple queries can be avoided too. First, you can set a per-user connection limit (ALTER USER with CONNECTION LIMIT). And if you have an interface program between the user and PostgreSQL, you can additionally (1) add some extra wait after each query completes, or (2) introduce a CAPTCHA to prevent automated DoS attacks.
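Both knobs can be set per role, so normal application users are unaffected (the role name and values below are illustrative):

```sql
-- Kill any statement running longer than 10 seconds, for this role only.
ALTER ROLE report_reader SET statement_timeout = '10s';

-- Cap the number of concurrent connections the role may hold.
ALTER ROLE report_reader CONNECTION LIMIT 3;
```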

ADDITION: PostgreSQL's public system functions give many possible attack vectors. They can be called like SELECT pg_advisory_lock(1), and every user has the privilege to call them. So you should restrict access to them. A good option is creating a whitelist of all "callable words", or, more precisely, of identifiers that may be followed by (. Then rule out any query containing a call-like construct, identifier (, whose identifier is not in the whitelist.
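A rough sketch of that identifier-before-parenthesis check, in Python (the whitelist contents are just examples, not a vetted safe set):

```python
import re

# Example whitelist; extend with whatever functions you deem safe.
ALLOWED_FUNCTIONS = {"count", "sum", "avg", "min", "max", "lower", "upper"}

# Any identifier immediately followed by an opening parenthesis
# (optionally separated by whitespace) looks like a function call.
CALL_PATTERN = re.compile(r'([A-Za-z_][A-Za-z_0-9]*)\s*\(')

def is_query_allowed(sql: str) -> bool:
    """Reject the query if it calls anything outside the whitelist."""
    return all(name.lower() in ALLOWED_FUNCTIONS
               for name in CALL_PATTERN.findall(sql))
```

Note this conservative check also flags SQL keywords followed by a parenthesis, such as WHERE (a = 1), so keywords would need whitelisting too; a real implementation should use a proper SQL parser rather than a regex.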


8 Comments

In Postgres a SELECT query can change anything: SELECT destroy_database_function();
I wasn't aware of statement_timeout; good idea, +1 for that. I find CAPTCHAs somewhat annoying, but maybe some variation of the idea, such as a simple drag-and-drop task to initiate the query, would work too.
For SELECT destroy_database_function() you need such a function and the EXECUTE privilege on it. And by the statement of the problem, you have only SELECT privileges on some tables :) I believe that SELECT privileges on plain tables are far from sufficient to break anything.
There are already functions like pg_advisory_lock() that can be used to DoS the server. And they are available to PUBLIC, so you must know ALL possible DoS cases to protect against them. The only solution I see is to set up a dedicated DB replica for such queries.
Yes, public system functions like pg_advisory_lock() are the problem, thanks Igor. But it is not true that you must know all possible attack vectors to block them: you can just create a function whitelist and rule out all queries containing calls to other functions. And a dedicated DB replica is not a solution: it will itself be subject to DoS attacks, making our 'querying workbench' unstable.

Things that come to mind, in addition to having the user SELECT-only and revoking privileges on functions:

  • Read-only transactions. When a transaction is started with BEGIN READ ONLY, or with SET TRANSACTION READ ONLY as its first statement, it cannot write anything, independently of the user's permissions.

  • On the client side, if you want to restrict input to a single SELECT, prefer a SQL submission function that does not accept several queries bundled into one. For instance, the Swiss-army-knife PQexec function of the libpq API does accept such bundles, and so does every driver function built on top of it, like PHP's pg_query.

  • http://sqlfiddle.com/ is a service dedicated to running arbitrary SQL statements, which may be seen as something of a proof of concept that this is doable without being hacked or DDoS'ed all day long.
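The read-only transaction idea from the first bullet looks like this in practice (the table name is hypothetical):

```sql
BEGIN READ ONLY;

SELECT * FROM some_table LIMIT 10;   -- reads work normally

-- Any write is rejected with an error along the lines of:
--   ERROR: cannot execute INSERT in a read-only transaction
INSERT INTO some_table VALUES (1);

ROLLBACK;
```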


The problem with this is that I'm not sure whether the SQL itself will continue to run in the background after a session timeout (I can't find much evidence either way via Google, and I haven't tried it myself). If you're limiting access to SELECT only, I think this is about the worst that could happen. The real issue is what happens if you get a hundred users all running complex cross joins: whether or not the session timeout drops the query, it will put a very heavy load on the database (easily enough to pull the database down entirely).


The only way (from my point of view) to protect the main server against DoS from crafted queries is to set up a read-only replica of the Postgres DB with a special limited user on that replica. This way the main Postgres server won't be affected by queries run against the replica.

You also get a hot standby / continuous-replication DB for the case when the main DB fails for some reason.
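On recent Postgres versions (12+), a minimal streaming-replication setup is roughly as follows; hostnames, user names, and paths are placeholders:

```
# On the primary (postgresql.conf): WAL must carry enough detail to replicate.
wal_level = replica
max_wal_senders = 3

# On the standby: clone the primary, then mark it as a standby, e.g.
#   pg_basebackup -h primary-host -U replicator -D /var/lib/postgresql/data -R
# The -R flag creates standby.signal and writes primary_conninfo, e.g.:
primary_conninfo = 'host=primary-host user=replicator password=secret'

# A hot standby accepts read-only queries while replaying WAL.
hot_standby = on
```

The untrusted SELECT-only user would then connect only to the standby, which inherently rejects writes.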
