Things you need to know about Snowflake RBAC

Snowflake RBAC defines who can access and perform operations on specific objects (tables, views, schemas, etc.) within an account. Roles are the entities to which privileges on securable database objects are granted and revoked; roles are then assigned to users to permit them to perform the actions required for business functions in their organization.

This blog post presents a strategy for developing such a security model using Snowflake RBAC (Role-Based Access Control). It recommends an approach that distinguishes object access roles from user functional roles, and then describes how to build a unified security model that combines both types of roles.

Roles in Snowflake RBAC

A Role in Snowflake is analogous to a role in other databases that control access and privileges on various database objects. It is a global object, meaning its scope is not confined to any single database.

From a security point of view, Snowflake is designed as a Role-Based Access Control (RBAC) system, in which access privileges are assigned to roles, which are in turn assigned to end users. As in other systems, roles in Snowflake can also be granted to other roles, thereby creating a role hierarchy.
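As a minimal sketch of this hierarchy (the role, user, and object names here are hypothetical, not from the original post), one role is granted to another and the top role is assigned to a user:

```sql
-- Create two custom roles (hypothetical names)
CREATE ROLE sales_read;
CREATE ROLE analyst;

-- Build a hierarchy: analyst inherits everything granted to sales_read
GRANT ROLE sales_read TO ROLE analyst;

-- Assign the higher-level role to an end user
GRANT ROLE analyst TO USER alice;
```

Privileges granted to sales_read flow up to analyst, so the user only ever needs the one role.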

By default, the Snowflake system has five built-in system roles: ACCOUNTADMIN, SECURITYADMIN, USERADMIN, SYSADMIN, and PUBLIC.

The ACCOUNTADMIN role is the top-level role in the system and should be granted only to a limited, controlled number of users in your account.

It is comparable to the SYSDBA role in Oracle and the sysadmin role in SQL Server.
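In practice, custom roles are usually created under SECURITYADMIN (or USERADMIN) and then granted to SYSADMIN, so that the system administrator retains visibility over all custom roles. A sketch, with a hypothetical role name:

```sql
USE ROLE securityadmin;               -- SECURITYADMIN can create roles and manage grants

CREATE ROLE reporting;                 -- hypothetical custom role

GRANT ROLE reporting TO ROLE sysadmin; -- root the custom role under SYSADMIN
```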


It may not always be sufficient to simply turn a defined warehouse on or off as required. In some cases, demand may exceed the maximum capacity that was originally defined. For this scenario, Snowflake offers auto-scaling.

Let’s imagine a use case where you have peak hours: specific times when you need more compute than usual. In this case, Snowflake’s multi-cluster warehouses let you define a minimum and maximum cluster count, within which the warehouse automatically scales horizontally. Essentially, the system automatically duplicates a predefined base cluster as many times as necessary to satisfy the elevated demand, up to the maximum size you have configured.

This enables more processing throughput, particularly in the number of concurrent queries that can be handled. As demand drops off outside of peak times, the system automatically scales back down to the preconfigured minimum.
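This behavior is configured per warehouse. A sketch, assuming a hypothetical warehouse named reporting_wh:

```sql
CREATE WAREHOUSE reporting_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1          -- baseline outside of peak hours
  MAX_CLUSTER_COUNT = 4          -- upper bound during peak demand
  SCALING_POLICY    = 'STANDARD' -- start clusters eagerly to avoid queuing
  AUTO_SUSPEND      = 300        -- suspend after 5 minutes of inactivity
  AUTO_RESUME       = TRUE;
```

With MIN_CLUSTER_COUNT below MAX_CLUSTER_COUNT, the warehouse runs in auto-scale mode and adds or removes clusters between those bounds as query concurrency rises and falls.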


A replicated database can’t be used directly for development because it is read-only, so it is necessary to create a clone of the replicated database in a Development account. The cloned database becomes a Development database, with RBAC updated as per development requirements. After cloning, data masking rules are applied to the development database, following enterprise security and privacy standards, on the selected tables, columns, or rows that need to be masked.
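A sketch of this clone-then-mask flow (the database, table, column, and role names are hypothetical):

```sql
-- Clone the read-only replicated database into a writable dev database
CREATE DATABASE dev_db CLONE replicated_db;

-- Define a masking policy per enterprise privacy standards
CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('DEV_ADMIN') THEN val  -- privileged roles see clear text
    ELSE '***MASKED***'                            -- everyone else sees a mask
  END;

-- Apply the policy to a sensitive column
ALTER TABLE dev_db.public.customers
  MODIFY COLUMN email SET MASKING POLICY email_mask;
```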

A Simplified RBAC Model

The most important aspect of this approach is to keep direct object access privileges mutually exclusive from privilege inheritance for a given role. We shall demonstrate this with an RBAC prototype shortly. Once you have come to terms with the idea, the next step is to divide roles into logical levels. This simplifies capturing RBAC requirements and also segregates object access from inheritance of privileges.
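A sketch of that separation, with hypothetical names: an object access role holds direct privileges on objects, while a user functional role holds no direct object privileges and only inherits access roles.

```sql
-- Object access role: holds direct privileges on database objects
CREATE ROLE sales_db_read_ar;
GRANT USAGE  ON DATABASE sales_db        TO ROLE sales_db_read_ar;
GRANT USAGE  ON SCHEMA   sales_db.public TO ROLE sales_db_read_ar;
GRANT SELECT ON ALL TABLES IN SCHEMA sales_db.public TO ROLE sales_db_read_ar;

-- Functional role: no direct object privileges, only inherits access roles
CREATE ROLE sales_analyst_fr;
GRANT ROLE sales_db_read_ar TO ROLE sales_analyst_fr;

-- Users are granted functional roles only
GRANT ROLE sales_analyst_fr TO USER alice;
```

Keeping the two levels distinct means object grants are managed in one place (the access roles), while business-function membership is managed in another (the functional roles).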
