The Snowflake SnowPro Specialty: Gen AI certification validates your ability to design, build, and optimize generative AI solutions using the Snowflake Data Cloud. It is a specialty-level certification intended for professionals who already understand Snowflake fundamentals and want to demonstrate expertise in AI/ML and generative AI workflows inside Snowflake.
---------- Question 1
Which view should an administrator query to monitor the number of tokens consumed by specific Cortex LLM functions over the last 30 days for cost tracking purposes?
- CORTEX_FUNCTIONS_USAGE_HISTORY
- WAREHOUSE_METERING_HISTORY
- CORTEX_BILLING_METRICS
- DATA_TRANSFER_HISTORY
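For reference, 30-day token consumption can be aggregated from the ACCOUNT_USAGE share. A sketch along these lines, assuming the documented columns FUNCTION_NAME, MODEL_NAME, TOKENS, and TOKEN_CREDITS (verify against current Snowflake docs):

```sql
-- Aggregate Cortex LLM token usage over the last 30 days.
SELECT function_name,
       model_name,
       SUM(tokens)        AS total_tokens,
       SUM(token_credits) AS total_token_credits
FROM snowflake.account_usage.cortex_functions_usage_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY function_name, model_name
ORDER BY total_token_credits DESC;
```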
---------- Question 2
What is a key document requirement for uploading files to Document AI to ensure successful model training and extraction?
- Files must be in .xlsx or .csv format only.
- Individual files must not exceed the size limit, typically 100 MB, and should be in supported formats like PDF.
- Documents must be pre-encrypted using a custom Snowflake managed key.
- Files must be converted to raw HTML before they can be processed by the model.
---------- Question 3
When training a model in Document AI, what is the recommended best practice for optimizing the questions used to extract values from documents?
- Use highly technical jargon and internal database column names as questions.
- Use natural language questions that are clear, concise, and descriptive of the target value.
- Format all questions as complex SQL regex patterns to guide the model.
- Limit questions to single words like Date or Price to avoid model confusion.
---------- Question 4
In the context of Cortex LLM Functions, what is the specific purpose of enabling the CORTEX_ENABLED_CROSS_REGION parameter at the account level?
- It allows the Model Registry to sync versions across different Snowflake regions
- It enables Document AI to process files stored in stages located in different regions
- It allows Snowflake to route LLM requests to other regions if local capacity is unavailable
- It synchronizes RBAC roles and privileges across a global Snowflake organization
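For context, cross-region inference is controlled by an account-level parameter, and in current Snowflake documentation it takes string values rather than a boolean. A hedged sketch:

```sql
-- Run as ACCOUNTADMIN. 'ANY_REGION' lets Snowflake route Cortex LLM
-- requests to any region with available capacity; narrower values
-- such as 'AWS_US' restrict routing to a region group.
ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'ANY_REGION';
```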
---------- Question 5
A data engineer needs to generate a JSON object from a large block of unstructured text using the SNOWFLAKE.CORTEX.COMPLETE function. Which approach ensures the output strictly adheres to a valid JSON schema?
- Use the SENTIMENT function and cast the result to VARIANT
- Pass a JSON schema into the options argument of the COMPLETE function
- Wrap the COMPLETE function inside a Snowflake Scripting TRY-CATCH block
- Increase the temperature parameter to 1.0 to encourage formatting
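As an illustration of structured output, the COMPLETE signature that accepts an options object can take a response_format containing a JSON schema. The model name and the invoices table below are placeholders; verify the exact option keys against current Snowflake docs:

```sql
-- Force COMPLETE to return JSON matching a schema.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large2',
    [{'role': 'user',
      'content': 'Extract the invoice number and total: ' || raw_text}],
    {'response_format': {
        'type': 'json',
        'schema': {
            'type': 'object',
            'properties': {
                'invoice_number': {'type': 'string'},
                'total': {'type': 'number'}
            },
            'required': ['invoice_number', 'total']
        }
    }}
) AS extracted
FROM invoices;
```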
---------- Question 6
In a RAG application using Snowflake, a developer needs to convert a large collection of PDF text into a format suitable for semantic search. Which specific function should be used to generate the numerical representations required for a vector column?
- SNOWFLAKE.CORTEX.EMBED_TEXT_1024
- SNOWFLAKE.CORTEX.PARSE_DOCUMENT
- SNOWFLAKE.CORTEX.CLASSIFY_TEXT
- SNOWFLAKE.CORTEX.EXTRACT_ANSWER
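For reference, embedding text chunks into a VECTOR column for semantic search can look like the sketch below. Table, column, and model names are placeholders; the list of supported embedding models should be checked in current docs:

```sql
-- Add a 1024-dimension vector column and populate it with embeddings.
ALTER TABLE pdf_chunks ADD COLUMN chunk_vec VECTOR(FLOAT, 1024);

UPDATE pdf_chunks
SET chunk_vec = SNOWFLAKE.CORTEX.EMBED_TEXT_1024(
    'multilingual-e5-large',   -- illustrative model choice
    chunk_text
);
```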
---------- Question 7
To prevent the LLM from generating responses that contain hate speech or socially sensitive content, which feature should be utilized within the COMPLETE function arguments as a guardrail?
- Cortex Search
- Cortex Guard
- CORTEX_MODELS_ALLOWLIST
- Role-Based Access Control
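For context, the guardrail is switched on through the COMPLETE options object. The model name and source table here are placeholders:

```sql
-- 'guardrails': true filters harmful or unsafe content
-- from model responses.
SELECT SNOWFLAKE.CORTEX.COMPLETE(
    'mistral-large2',
    [{'role': 'user', 'content': prompt_text}],
    {'guardrails': true}
) AS safe_response
FROM prompts;
```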
---------- Question 8
A data engineer needs to generate a JSON-formatted response from a Snowflake Cortex LLM to ensure the output can be parsed by a downstream application. Which function or argument should be utilized?
- SNOWFLAKE.CORTEX.COMPLETE with Structured Outputs
- SNOWFLAKE.CORTEX.EXTRACT_ANSWER
- SNOWFLAKE.CORTEX.PARSE_DOCUMENT
- The SPLIT_TEXT_RECURSIVE_CHARACTER helper function
---------- Question 9
To monitor and optimize Snowflake Cortex spending, which system view should an administrator query to find the specific number of tokens processed for every individual function call made in the last 24 hours?
- METERING_DAILY_HISTORY
- CORTEX_FUNCTIONS_USAGE_HISTORY
- CORTEX_FUNCTIONS_QUERY_USAGE_HISTORY
- WAREHOUSE_METERING_HISTORY
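For reference, per-call token detail is what distinguishes the query-level view from the aggregated one. A sketch, assuming a join to QUERY_HISTORY for the time window (column names should be verified against current docs):

```sql
-- One row per query that invoked a Cortex function; join to
-- QUERY_HISTORY on QUERY_ID to restrict to the last 24 hours.
SELECT u.query_id,
       u.function_name,
       u.model_name,
       u.tokens
FROM snowflake.account_usage.cortex_functions_query_usage_history u
JOIN snowflake.account_usage.query_history q
  ON q.query_id = u.query_id
WHERE q.start_time >= DATEADD('hour', -24, CURRENT_TIMESTAMP());
```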
---------- Question 10
When configuring Cortex Analyst, what is the purpose of the Verified Query Repository (VQR)?
- To store vector embeddings of the unstructured documentation
- To provide a set of 'ground truth' SQL examples that the model uses to improve accuracy
- To maintain a list of all users allowed to access the semantic views
- To cache the results of frequently asked natural language questions
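For context, verified queries live inside the semantic model YAML that Cortex Analyst consumes. A heavily hedged sketch of the shape, with all names illustrative (field names should be checked against the current semantic model spec):

```yaml
verified_queries:
  - name: monthly_revenue
    question: "What was total revenue by month last year?"
    sql: >-
      SELECT DATE_TRUNC('month', order_date) AS month,
             SUM(amount) AS revenue
      FROM orders
      GROUP BY 1
    verified_by: analyst_team
```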