
Exam Code: DEA-C01 | Updated: Oct 15, 2025
Exam Name: SnowPro Advanced: Data Engineer Certification Exam

Snowflake SnowPro Advanced: Data Engineer Certification Exam DEA-C01 Exam Dumps: Updated Questions & Answers (October 2025)

Question # 1

A Data Engineer enables use of the result cache at the session level with the following command:

ALTER SESSION SET USE_CACHED_RESULT = TRUE;

The Engineer then runs the following SELECT query twice in immediate succession:

The underlying table does not change between executions.

What are the results of both runs?

A. The first and second runs returned the same results because SAMPLE is deterministic.

B. The first and second runs returned the same results because a specific SEED value was provided.

C. The first and second runs returned different results because the query is evaluated each time it is run.

D. The first and second runs returned different results because the query uses * instead of an explicit column list.
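Note: the sampled query itself is not reproduced above, but the behavior being tested can be sketched against a hypothetical table. SAMPLE without a SEED is non-deterministic, so the result cache cannot be reused between runs; supplying a SEED makes the sample repeatable on unchanged data:

ALTER SESSION SET USE_CACHED_RESULT = TRUE;

-- Hypothetical table my_table: with no SEED, each run re-evaluates the
-- sample and may return different rows.
SELECT * FROM my_table SAMPLE (10);

-- With an explicit SEED, two back-to-back runs over unchanged data
-- return the same rows.
SELECT * FROM my_table SAMPLE (10) SEED (42);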

Question # 2

What is a characteristic of the use of binding variables in JavaScript stored procedures in Snowflake?

A. All types of JavaScript variables can be bound.

B. All Snowflake first-class objects can be bound.

C. Only JavaScript variables of type number, string, and SfDate can be bound.

D. Users are restricted from binding JavaScript variables because they create SQL injection attack vulnerabilities.
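Note: as a minimal sketch of the binding mechanism (the NOTES table and procedure are hypothetical), a JavaScript stored procedure passes bind values through the binds array of snowflake.createStatement():

create or replace procedure add_note(id float, body string)
returns string
language javascript
as
$$
  // Bind values map positionally to the ? placeholders. Only JavaScript
  // values of type number, string, and SfDate can be bound.
  var stmt = snowflake.createStatement({
    sqlText: "insert into notes (id, body) values (?, ?)",
    binds: [ID, BODY]  // unquoted argument names are uppercased in JS
  });
  stmt.execute();
  return "OK";
$$;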

Question # 3

A new CUSTOMER table is created by a data pipeline in a Snowflake schema where MANAGED ACCESS is enabled.

Which roles can grant access to the CUSTOMER table? (Select THREE.)

A. The role that owns the schema

B. The role that owns the database

C. The role that owns the CUSTOMER table

D. The SYSADMIN role

E. The SECURITYADMIN role

F. The USERADMIN role with the MANAGE GRANTS privilege
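Note: a short sketch of how managed access changes grant behavior (the schema, table, and role names are hypothetical):

-- In a managed access schema, object owners cannot grant privileges on
-- their objects; the schema owner, and roles with the MANAGE GRANTS
-- privilege, make those grants centrally.
create schema sales.managed_sch with managed access;

-- Issued by the schema owner or a role with MANAGE GRANTS (for example
-- SECURITYADMIN), not by the role that owns the table:
grant select on table sales.managed_sch.customer to role analyst;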

Question # 4

A Data Engineer has created table t1 with a column of data type VARIANT:

create or replace table t1 (c1 variant);

The Engineer has loaded the following JSON data set, which has information about 4 laptop models, into the table:

The Engineer now wants to query that data set so that the results are shown as normal structured data. The result should be 4 rows and 4 columns, without the double quotes surrounding the data elements in the JSON data.

The result should be similar to the use case where the data was selected from a normal relational table t2, where t2 has string data type columns model_id, model, manufacturer, and …, and is queried with the SQL clause select * from t2;

Which select command will produce the correct results?

A)

B)

C)

D)

A. Option A

B. Option B

C. Option C

D. Option D
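Note: the option screenshots are not reproduced here. As a sketch of the pattern such options follow (assuming c1 holds an array of objects whose keys match the column names listed above), LATERAL FLATTEN expands the array and the ::string casts remove the surrounding double quotes:

select f.value:model_id::string     as model_id,
       f.value:model::string        as model,
       f.value:manufacturer::string as manufacturer
from t1,
     lateral flatten(input => t1.c1) f;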

Question # 5

Database XYZ has the data_retention_time_in_days parameter set to 7 days, and table xyz.public.ABC has data_retention_time_in_days set to 10 days.

A Developer accidentally dropped the database containing this single table 8 days ago and just discovered the mistake.

How can the table be recovered?

A. undrop database xyz;

B. create table abc_restore as select * from xyz.public.abc at (offset => -60*60*24*8);

C. create table abc_restore clone xyz.public.abc at (offset => -3600*24*8);

D. Create a Snowflake Support case to restore the database and table from Fail-safe.
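Note: for context, the syntax of the recovery commands involved (this sketch shows the commands, not which one succeeds given the 7-day database retention):

-- UNDROP restores a dropped database only while it is still within its
-- data retention period.
undrop database xyz;

-- AT (OFFSET => ...) takes a negative number of seconds relative to the
-- current time; 8 days back is -60*60*24*8.
create table abc_restore clone xyz.public.abc at (offset => -60*60*24*8);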

Question # 6

A company has an extensive script in Scala that transforms data by leveraging DataFrames. A Data Engineer needs to move these transformations to Snowpark.

Which characteristics of data transformations in Snowpark should be considered to meet this requirement? (Select TWO.)

A. It is possible to join multiple tables using DataFrames.

B. Snowpark operations are executed lazily on the server.

C. User-Defined Functions (UDFs) are not pushed down to Snowflake.

D. Snowpark requires a separate cluster outside of Snowflake for computations.

E. Columns in different DataFrames with the same name should be referred to with square brackets.

Question # 7

The following is returned from SYSTEM$CLUSTERING_INFORMATION() for a table named ORDERS with a DATE column named O_ORDERDATE:

What does the total_constant_partition_count value indicate about this table?

A. The table is clustered very well on O_ORDERDATE, as there are 493 micro-partitions that could not be significantly improved by reclustering.

B. The table is not clustered well on O_ORDERDATE, as there are 493 micro-partitions where the range of values in that column overlaps with every other micro-partition in the table.

C. The data in O_ORDERDATE does not change very often, as there are 493 micro-partitions containing rows where that column has not been modified since the row was created.

D. The data in O_ORDERDATE has a very low cardinality, as there are 493 micro-partitions where there is only a single distinct value in that column for all rows in the micro-partition.
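Note: the function output is not reproduced above; the call itself looks like this, and total_constant_partition_count is one field of the JSON it returns, namely the number of micro-partitions whose value range for the clustering columns has reached a constant state:

-- Returns clustering statistics for ORDERS computed on O_ORDERDATE; the
-- JSON also includes total_partition_count, average_overlaps, and
-- average_depth.
select system$clustering_information('orders', '(o_orderdate)');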

Question # 8

A stream called TRANSACTIONS_STM is created on top of a TRANSACTIONS table in a continuous pipeline running in Snowflake. After a couple of months, the TRANSACTIONS table is renamed TRANSACTIONS_RAW to comply with new naming standards.

What will happen to the TRANSACTIONS_STM object?

A. TRANSACTIONS_STM will keep working as expected.

B. TRANSACTIONS_STM will be stale and will need to be re-created.

C. TRANSACTIONS_STM will be automatically renamed TRANSACTIONS_RAW_STM.

D. Reading from the TRANSACTIONS_STM stream will succeed for some time after the expected STALE_TIME.
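Note: SHOW STREAMS exposes stale and stale_after columns, which is how a rename-induced breakage would surface; a sketch assuming the renamed table:

-- Check whether the stream has gone (or will go) stale.
show streams like 'TRANSACTIONS_STM';

-- Re-creating the stream points it at the renamed table; note that
-- CREATE OR REPLACE resets the stream's offset.
create or replace stream transactions_stm on table transactions_raw;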

Question # 9

The JSON below is stored in a VARIANT column named v in a table named jCustRaw:

Which query will return one row per team member (stored in the teamMembers array) along with all of the attributes of each team member?

A)

B)

C)

D)

A. Option A

B. Option B

C. Option C

D. Option D
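Note: the JSON document and option screenshots are not reproduced above. A representative shape of such a query (the attribute names name and role are hypothetical placeholders) flattens the array once and projects each member's attributes:

select t.value:name::string as name,
       t.value:role::string as role
from jCustRaw,
     lateral flatten(input => v:teamMembers) t;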

Question # 10

A CSV file around 1 TB in size is generated daily on an on-premises server. A corresponding table, internal stage, and file format have already been created in Snowflake to facilitate the data loading process.

How can the process of bringing the CSV file into Snowflake be automated using the LEAST amount of operational overhead?

A. Create a task in Snowflake that executes once a day and runs a COPY INTO statement that references the internal stage. The internal stage will read the files directly from the on-premises server and copy the newest file into the Snowflake table.

B. On the on-premises server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a task that executes once a day in Snowflake and runs a COPY INTO statement that references the internal stage. Schedule the task to start after the file lands in the internal stage.

C. On the on-premises server, schedule a SQL file to run using SnowSQL that executes a PUT to push a specific file to the internal stage. Create a pipe that runs a COPY INTO statement that references the internal stage. Snowpipe auto-ingest will automatically load the file from the internal stage when the new file lands in the internal stage.

D. On the on-premises server, schedule a Python file that uses the Snowpark Python library. The Python script will read the CSV data into a DataFrame and generate an INSERT INTO statement that will directly load into the table. The script will bypass the need to move a file into an internal stage.
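Note: for reference, a sketch of the PUT-then-load flow the options describe, run with SnowSQL from the on-premises server (the stage, file, and format names are hypothetical):

-- Upload the daily extract to the internal stage.
put file:///data/export/daily_extract.csv @my_int_stage auto_compress=true;

-- Load the staged file into the target table.
copy into my_table
  from @my_int_stage
  file_format = (format_name = 'my_csv_format');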
