- Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.
- Reduce dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the primary capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to obtain a transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

In addition, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, check out the official AssemblyAI blog.

Image source: Shutterstock
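As a setup note, the examples in this article assume the SDK has already been added to the project. Assuming the NuGet package is named `AssemblyAI` (matching the namespaces used in the snippets), it can be installed with the .NET CLI:

```shell
# Add the AssemblyAI SDK to the current project from NuGet
dotnet add package AssemblyAI
```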