Practice Exam: Oracle Cloud Infrastructure Generative AI Professional

  • 1. In the simplified workflow for managing and querying vector data, what is the role of indexing?
  • 2. In which scenario is soft prompting appropriate compared to other training styles?
  • 3. Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?
  • 4. When does a chain typically interact with memory in a run within the LangChain framework?
  • 5. What do prompt templates use for templating in language model applications?
  • 6. What does a cosine distance of 0 indicate about the relationship between two embeddings?
  • 7. What does accuracy measure in the context of fine-tuning results for a generative model?
  • 8. What is the purpose of Retrievers in LangChain?
  • 9. Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?
  • 10. Which statement is true about string prompt templates and their capability regarding variables?
  • 11. Which LangChain component is responsible for generating the linguistic output in a chatbot system?
  • 12. How does the temperature setting in a decoding algorithm influence the probability distribution over the vocabulary?
  • 13. What does the Loss metric indicate about a model's predictions?
  • 14. How are documents usually evaluated in the simplest form of keyword-based search?
  • 15. How does a presence penalty function in language model generation?
  • 16. What is the purpose of Retrieval Augmented Generation (RAG) in text generation?
  • 17. Accuracy in vector databases contributes to the effectiveness of Large Language Models (LLMs) by preserving a specific type of relationship. What is the nature of these relationships, and why are they crucial for language models?
  • 18. In the context of generating text with a Large Language Model (LLM), what does the process of greedy decoding entail?
  • 19. How does the structure of vector databases differ from traditional relational databases?
  • 20. What does the RAG Sequence model do in the context of generating a response?
  • 21. How can the concept of "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?
  • 22. Why is it challenging to apply diffusion models to text generation?
  • 23. What is LangChain?
  • 24. Given the following code block: `history = StreamlitChatMessageHistory(key="chat_messages")`; `memory = ConversationBufferMemory(chat_memory=history)`
  • 25. When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

1. In the simplified workflow for managing and querying vector data, what is the role of indexing?
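
Indexing organizes stored embeddings into a search-optimized structure so that nearest-neighbor queries avoid a brute-force scan over every vector. A minimal sketch using the FAISS library (assumed installed, e.g. via the `faiss-cpu` package; the vectors here are random placeholders, not real embeddings):

```python
import numpy as np
import faiss  # assumed available via the faiss-cpu package

dim = 64
vectors = np.random.rand(1000, dim).astype("float32")  # placeholder embeddings

# Indexing step: organize vectors into a search structure.  A flat index
# compares exactly; production systems often use IVF or HNSW for speed.
index = faiss.IndexFlatL2(dim)
index.add(vectors)

# Query step: the index answers nearest-neighbor lookups directly.
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)
print(ids[0], distances[0])
```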

2. In which scenario is soft prompting appropriate compared to other training styles?
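
Soft prompting trains a small set of continuous "virtual token" embeddings prepended to the input while the base model's weights stay frozen, which suits cases with limited labeled data and no budget for updating the model itself. A conceptual PyTorch sketch; the dimensions and names are illustrative, not any vendor's API:

```python
import torch
import torch.nn as nn

hidden = 768        # model embedding width (illustrative)
num_virtual = 20    # number of trainable soft-prompt tokens

# Only these embeddings are trained; the LLM's own weights stay frozen.
soft_prompt = nn.Parameter(torch.randn(num_virtual, hidden) * 0.02)

def prepend_soft_prompt(input_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend the learned virtual tokens to a batch of input embeddings."""
    batch = input_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    return torch.cat([prompt, input_embeds], dim=1)

x = torch.randn(4, 10, hidden)           # batch of 4 sequences, 10 tokens each
print(prepend_soft_prompt(x).shape)      # torch.Size([4, 30, 768])
```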

3. Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?
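
The key contrast: full fine-tuning updates all model weights, while PEFT methods update only a small number of added parameters. A hedged sketch using the Hugging Face `peft` library with LoRA (assumes `transformers` and `peft` are installed; the gpt2 model and `c_attn` target module are illustrative choices):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative small model

# LoRA: train low-rank adapter matrices instead of the full weights.
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["c_attn"])
model = get_peft_model(base, config)

model.print_trainable_parameters()  # typically well under 1% of all parameters
```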

4. When does a chain typically interact with memory in a run within the LangChain framework?
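
In classic LangChain, a chain reads from memory after receiving user input (to build the prompt) and writes the new turn back to memory after generating output. A minimal sketch with the legacy `langchain` API; import paths vary across LangChain versions, and `FakeListLLM` stands in for a real model:

```python
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.llms.fake import FakeListLLM  # stand-in LLM; path varies by version

llm = FakeListLLM(responses=["Hello!", "You said 'hi'."])
memory = ConversationBufferMemory()
chain = ConversationChain(llm=llm, memory=memory)

# Before each LLM call the chain loads memory into the prompt;
# after the call it saves the new human/AI turn back into memory.
chain.predict(input="hi")
chain.predict(input="what did I just say?")
print(memory.buffer)
```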

5. What do prompt templates use for templating in language model applications?
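
LangChain's string prompt templates use Python f-string-style `{variable}` placeholders by default. A minimal sketch:

```python
from langchain.prompts import PromptTemplate

# Placeholders use Python f-string syntax ({variable}) by default.
template = PromptTemplate.from_template(
    "Translate the following text to {language}:\n\n{text}"
)
print(template.format(language="French", text="Good morning"))
```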

6. What does a cosine distance of 0 indicate about the relationship between two embeddings?
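
A cosine distance of 0 means the two embeddings point in the same direction (cosine similarity of 1), regardless of their magnitudes. A quick numpy check with toy vectors:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means the vectors point the same way."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = 2.5 * a                              # same direction, different magnitude
print(cosine_distance(a, b))             # ~0.0: orientation matches
print(cosine_distance(a, np.array([-3.0, 1.0, 1.0 / 3.0])))  # orthogonal: 1.0
```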

7. What does accuracy measure in the context of fine-tuning results for a generative model?
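
For generative fine-tuning, accuracy is typically measured at the token level: the fraction of predicted tokens that match the ground-truth tokens. A tiny illustrative computation with made-up token ids:

```python
import numpy as np

# Token-level accuracy: fraction of positions where the predicted
# token id equals the reference token id (ids are illustrative).
predicted = np.array([12, 7, 99, 3, 41])
reference = np.array([12, 7, 50, 3, 41])

accuracy = np.mean(predicted == reference)
print(accuracy)  # 0.8 -> 4 of 5 tokens correct
```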

8. What is the purpose of Retrievers in LangChain?
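
A retriever exposes a simple "query in, relevant documents out" interface, typically wrapping a vector store, so a chain can ground its answer in retrieved context. A hedged sketch with a FAISS-backed store; it assumes `langchain-community` and `faiss-cpu` are installed, and uses `FakeEmbeddings` as a stand-in for a real embedding model:

```python
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import FakeEmbeddings  # stand-in embedder

texts = ["OCI offers GPU shapes.",
         "LangChain chains LLM calls together.",
         "FAISS indexes vectors for fast search."]
store = FAISS.from_texts(texts, FakeEmbeddings(size=256))

# The retriever hides the vector-store details behind a query interface
# that chains can call to fetch grounding documents.
retriever = store.as_retriever(search_kwargs={"k": 2})
docs = retriever.get_relevant_documents("vector indexes")
print([d.page_content for d in docs])
```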

9. Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?
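
T-Few is a parameter-efficient recipe built on (IA)^3: it learns small multiplicative rescaling vectors applied to attention keys, values, and feed-forward activations while the base weights stay frozen. A conceptual PyTorch sketch of the rescaling idea, not OCI's or the paper's implementation:

```python
import torch
import torch.nn as nn

class IA3Scale(nn.Module):
    """(IA)^3-style learned rescaling: multiply activations by a trainable vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))  # init at 1 = identity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.scale  # only `scale` trains; frozen weights untouched

# e.g. rescale the value activations of a frozen attention layer
values = torch.randn(4, 10, 768)   # (batch, seq, hidden), illustrative
ia3 = IA3Scale(768)
print(ia3(values).shape)           # torch.Size([4, 10, 768])
```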
