Guidance on User Management and Row-Level Security with PostgREST #3696
-
Hi everyone, I’m considering using PostgREST as the backend API for a new product, especially since I plan to use PostgreSQL anyway. PostgREST seems like it could significantly reduce the time to market for new features. However, after reading through the documentation, there are still a few things I’m uncertain about, and I’d really appreciate some guidance from those with more experience.

To illustrate, let’s consider the following business model: imagine I have a table where each user can have one of three permissions: ‘dashboard’, ‘users’, or ‘managers’.
Q1. But does this approach also apply to ‘normal’ application users, as in this example? For instance, if I have 1,000 app users, should I also create 1,000 PostgreSQL users with roles applied? Or do you store the user’s role as a string in a regular table column instead? For reference, I’m accustomed to traditional backend applications, where you’d adjust the SQL query based on the user making the request, like this:
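A minimal sketch of what I mean, assuming a hypothetical `todos` table with a `user_id` column:

```sql
-- Hypothetical example: in a traditional backend, the server code
-- scopes the query to the requesting user with a bound parameter.
SELECT id, title, done
FROM todos
WHERE user_id = $1;  -- $1 = id of the authenticated user
```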
Of course, I could achieve the same result by altering the client-side URL request to PostgREST, but that would make the client application responsible for always building the URL correctly. How can I enforce these rules for future external applications, integrations, or even new front-end developers who might not fully understand the business model? I assume the solution involves PostgreSQL’s Row-Level Security policies, but I’m unclear on how user roles fit into this approach. What’s the most common and proper way to handle this?

Q2. How would I apply the same row-level security policy to this table, given that it doesn’t have a user_id field? If the solution is a row-level security policy with a subquery that joins the TODO and TODO_DETAILS tables, would that perform well? Is it considered a best practice?

Q3. … and then automatically perform all the necessary inserts in PostgreSQL?

Thank you in advance! These might be basic questions, but after reading through the documentation and running some tests in a Docker container, I’m still unsure of the “proper” way to address these issues.
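For Q2, something like the following is what I have in mind; the table and column names are only illustrative, and this assumes PostgREST ≥ 9, which exposes the verified JWT claims under the `request.jwt.claims` setting:

```sql
-- Illustrative sketch: RLS on the detail table via a subquery
-- against the parent table that carries user_id.
ALTER TABLE todo_details ENABLE ROW LEVEL SECURITY;

CREATE POLICY todo_details_owner ON todo_details
  USING (
    EXISTS (
      SELECT 1
      FROM todos t
      WHERE t.id = todo_details.todo_id
        -- 'sub' claim assumed to hold the application user's id
        AND t.user_id =
            (current_setting('request.jwt.claims', true)::json ->> 'sub')::int
    )
  );
```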
Replies: 3 comments 5 replies
-
You want #818, which we don't support yet.
-
I have done both:
Both work, and depending on the exact case one or the other might make more sense. Imho, both patterns are fine. Denormalized is faster, but at the cost of more storage: you'll need the additional columns, and possibly some more redundant unique indexes on those columns.
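A sketch of the two variants with illustrative names. In the denormalized one, `user_id` is copied onto the detail table and kept consistent by a composite foreign key (which needs the extra unique constraint on the parent), so the policy no longer needs a subquery:

```sql
-- Variant 1 (normalized): policy with a subquery to the parent table.
CREATE POLICY details_via_parent ON todo_details
  USING (EXISTS (
    SELECT 1 FROM todos t
    WHERE t.id = todo_details.todo_id
      AND t.user_id =
          (current_setting('request.jwt.claims', true)::json ->> 'sub')::int));

-- Variant 2 (denormalized): user_id is duplicated onto todo_details,
-- the composite FK prevents it from drifting from the parent row,
-- and the policy checks the local column directly.
ALTER TABLE todos
  ADD CONSTRAINT todos_id_user_key UNIQUE (id, user_id);

ALTER TABLE todo_details
  ADD COLUMN user_id int,
  ADD CONSTRAINT details_parent_fk
      FOREIGN KEY (todo_id, user_id) REFERENCES todos (id, user_id);

CREATE POLICY details_direct ON todo_details
  USING (user_id =
         (current_setting('request.jwt.claims', true)::json ->> 'sub')::int);
```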
-
I have a big production instance running with this. We currently have ~70k users. Because of a more complex role system, with scopes for groups etc., we have around ~140k pg roles with that. This works fine so far, except for one limitation: we had to make the …
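A rough sketch of the role-per-user pattern, with illustrative names: each application user gets a dedicated `NOLOGIN` role, granted to the role PostgREST connects as (commonly called `authenticator`) so the server can `SET ROLE` into it based on the JWT role claim; policies can then key off `current_user` directly:

```sql
-- One NOLOGIN role per application user; the authenticator role
-- must be a member so PostgREST can switch into it per request.
CREATE ROLE user_42 NOLOGIN;
GRANT user_42 TO authenticator;

-- With per-user roles, a policy can compare against current_user
-- instead of a JWT claim. 'owner_role' is a hypothetical column
-- storing the owning role's name.
CREATE POLICY todos_owner ON todos
  USING (owner_role = current_user);
```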
The same as for all other issues: If somebody finds the time to do it...
There has been a first try here: #3226
Because of schema isolation, I have the `todos` and `todo_details` tables normalized in a `data` schema. In the exposed / public / api schema (however you call it), I have a `todos` view, which does the JOIN to `todo_details` already, and exposes the details as an array. I then have INSTEAD OF triggers on that view to manage the insert. Those triggers take care of inserting this array into the `todo_details` table via MERGE. I posted an example of that in some other discussion a wh…
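Roughly this shape, with illustrative names and columns; note that MERGE requires PostgreSQL 15 or newer:

```sql
-- Normalized tables live in the private 'data' schema.
CREATE TABLE data.todos (
  id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  title text NOT NULL
);
CREATE TABLE data.todo_details (
  todo_id  bigint REFERENCES data.todos,
  position int,
  note     text,
  PRIMARY KEY (todo_id, position)
);

-- The exposed schema gets a view that folds the details into an array.
CREATE VIEW api.todos AS
SELECT t.id, t.title,
       coalesce(jsonb_agg(jsonb_build_object('position', d.position,
                                             'note', d.note))
                  FILTER (WHERE d.todo_id IS NOT NULL),
                '[]'::jsonb) AS details
FROM data.todos t
LEFT JOIN data.todo_details d ON d.todo_id = t.id
GROUP BY t.id, t.title;

-- INSTEAD OF trigger: insert the parent row, then unpack the
-- details array and upsert it into the detail table via MERGE.
CREATE FUNCTION api.todos_insert() RETURNS trigger AS $$
BEGIN
  INSERT INTO data.todos (title) VALUES (new.title)
    RETURNING id INTO new.id;

  MERGE INTO data.todo_details d
  USING (SELECT new.id               AS todo_id,
                (e ->> 'position')::int AS position,
                e ->> 'note'         AS note
         FROM jsonb_array_elements(new.details) e) s
  ON d.todo_id = s.todo_id AND d.position = s.position
  WHEN MATCHED THEN UPDATE SET note = s.note
  WHEN NOT MATCHED THEN INSERT (todo_id, position, note)
       VALUES (s.todo_id, s.position, s.note);

  RETURN new;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER todos_insert INSTEAD OF INSERT ON api.todos
  FOR EACH ROW EXECUTE FUNCTION api.todos_insert();
```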