Building LLM-Based Applications with Azure OpenAI with Jay Emery - #657
Manage episode 386382494 series 2355587
Today we’re joined by Jay Emery, director of technical sales & architecture at Microsoft Azure. In our conversation with Jay, we discuss the challenges organizations face when building LLM-based applications, and we explore some of the techniques they are using to overcome them. We dive into concerns around security, data privacy, cost management, and performance, as well as the effectiveness of prompting versus fine-tuning in achieving desired results and when each approach should be applied. We cover methods such as prompt tuning, prompt chaining, prompt variance, fine-tuning, and RAG to enhance LLM output, along with ways to speed up inference performance, such as choosing the right model, parallelization, and provisioned throughput units (PTUs). Finally, Jay shares several intriguing use cases describing how businesses use tools like Azure Machine Learning prompt flow and Azure ML AI Studio to tailor LLMs to their unique needs and processes.
The complete show notes for this episode can be found at twimlai.com/go/657.
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)