Build with AI models that can transcribe and understand audio
With a single API call, get access to AI models built on the latest AI breakthroughs to transcribe and understand audio and speech data securely at large scale.
Visit our AssemblyAI API Documentation to get an overview of our models!
pip install -U assemblyai
Before starting, you need to set the API key. If you don't have one yet, sign up for one!
import os
import assemblyai as aai
# set the API key (here read from an environment variable)
aai.settings.api_key = os.environ["ASSEMBLYAI_API_KEY"]
Transcribe a local audio file
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("./my-local-audio-file.wav")
print(transcript.text)
Transcribe a URL
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")
print(transcript.text)
Transcribe binary data
import assemblyai as aai
transcriber = aai.Transcriber()
# `data` holds the raw bytes of an audio file
# Binary data is supported directly:
transcript = transcriber.transcribe(data)
# Or upload the data separately first:
upload_url = transcriber.upload_file(data)
transcript = transcriber.transcribe(upload_url)
Export subtitles of an audio file
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")
# in SRT format
print(transcript.export_subtitles_srt())
# in VTT format
print(transcript.export_subtitles_vtt())
List all sentences and paragraphs
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")
sentences = transcript.get_sentences()
for sentence in sentences:
print(sentence.text)
paragraphs = transcript.get_paragraphs()
for paragraph in paragraphs:
print(paragraph.text)
Search for words in a transcript
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3")
matches = transcript.word_search(["price", "product"])
for match in matches:
print(f"Found '{match.text}' {match.count} times in the transcript")
Add custom spellings on a transcript
import assemblyai as aai
config = aai.TranscriptionConfig()
config.set_custom_spelling(
{
"Kubernetes": ["k8s"],
"SQL": ["Sequel"],
}
)
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)
print(transcript.text)
Upload a file
import assemblyai as aai
transcriber = aai.Transcriber()
upload_url = transcriber.upload_file(data)
Delete a transcript
import assemblyai as aai
transcript = aai.Transcriber().transcribe("https://example.org/audio.mp3")
aai.Transcript.delete_by_id(transcript.id)
List transcripts
This returns a page of transcripts you created.
import assemblyai as aai
transcriber = aai.Transcriber()
page = transcriber.list_transcripts()
print(page.page_details) # Page details
print(page.transcripts) # List of transcripts
You can apply filter parameters:
params = aai.ListTranscriptParameters(
limit=3,
status=aai.TranscriptStatus.completed,
)
page = transcriber.list_transcripts(params)
You can also paginate over all pages by using the helper property before_id_of_prev_url. The prev_url always points to a page with older transcripts. If you extract the before_id query parameter from the prev_url, you can paginate over all pages from newest to oldest.
transcriber = aai.Transcriber()
params = aai.ListTranscriptParameters()
page = transcriber.list_transcripts(params)
while page.page_details.before_id_of_prev_url is not None:
params.before_id = page.page_details.before_id_of_prev_url
page = transcriber.list_transcripts(params)
Use LeMUR to summarize an audio file
import assemblyai as aai
audio_file = "https://assembly.ai/sports_injuries.mp3"
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)
prompt = "Provide a brief summary of the transcript."
result = transcript.lemur.task(
prompt, final_model=aai.LemurModel.claude3_5_sonnet
)
print(result.response)
Or use the specialized Summarization endpoint that requires no prompt engineering and facilitates more deterministic and structured outputs:
import assemblyai as aai
audio_url = "https://assembly.ai/meeting.mp4"
transcript = aai.Transcriber().transcribe(audio_url)
result = transcript.lemur.summarize(
final_model=aai.LemurModel.claude3_5_sonnet,
context="A GitLab meeting to discuss logistics",
answer_format="TLDR"
)
print(result.response)
Use LeMUR to ask questions about your audio data
import assemblyai as aai
audio_file = "https://assembly.ai/sports_injuries.mp3"
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(audio_file)
prompt = "What is a runner's knee?"
result = transcript.lemur.task(
prompt, final_model=aai.LemurModel.claude3_5_sonnet
)
print(result.response)
Or use the specialized Q&A endpoint that requires no prompt engineering and facilitates more deterministic and structured outputs:
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/customer.mp3")
# ask some questions
questions = [
aai.LemurQuestion(question="What car was the customer interested in?"),
aai.LemurQuestion(question="What price range is the customer looking for?"),
]
result = transcript.lemur.question(
final_model=aai.LemurModel.claude3_5_sonnet,
questions=questions)
for q in result.response:
print(f"Question: {q.question}")
print(f"Answer: {q.answer}")
Use LeMUR with customized input text
import assemblyai as aai
transcriber = aai.Transcriber()
config = aai.TranscriptionConfig(
speaker_labels=True,
)
transcript = transcriber.transcribe("https://example.org/customer.mp3", config=config)
# Example converting speaker label utterances into LeMUR input text
text = ""
for utt in transcript.utterances:
text += f"Speaker {utt.speaker}:\n{utt.text}\n"
result = aai.Lemur().task(
"You are a helpful coach. Provide an analysis of the transcript "
"and offer areas to improve with exact quotes. Include no preamble. "
"Start with an overall summary then get into the examples with feedback.",
input_text=text,
final_model=aai.LemurModel.claude3_5_sonnet
)
print(result.response)
Apply LeMUR to multiple transcripts
import assemblyai as aai
transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
[
"https://example.org/customer1.mp3",
"https://example.org/customer2.mp3",
],
)
result = transcript_group.lemur.task(
context="These are calls of customers asking for cars. Summarize all calls and create a TLDR.",
final_model=aai.LemurModel.claude3_5_sonnet
)
print(result.response)
Delete data previously sent to LeMUR
import assemblyai as aai
# Create a transcript and a corresponding LeMUR request that may contain sensitive information.
transcriber = aai.Transcriber()
transcript_group = transcriber.transcribe_group(
[
"https://example.org/customer1.mp3",
],
)
result = transcript_group.lemur.summarize(
context="Customers providing sensitive, personally identifiable information",
answer_format="TLDR"
)
# Get the request ID from the LeMUR response
request_id = result.request_id
# Now we can delete the data about this request
deletion_result = aai.Lemur.purge_request_data(request_id)
print(deletion_result)
PII Redact a transcript
import assemblyai as aai
config = aai.TranscriptionConfig()
config.set_redact_pii(
# What should be redacted
policies=[
aai.PIIRedactionPolicy.credit_card_number,
aai.PIIRedactionPolicy.email_address,
aai.PIIRedactionPolicy.location,
aai.PIIRedactionPolicy.person_name,
aai.PIIRedactionPolicy.phone_number,
],
# How it should be redacted
substitution=aai.PIISubstitutionPolicy.hash,
)
transcriber = aai.Transcriber()
transcript = transcriber.transcribe("https://example.org/audio.mp3", config)
To request a copy of the original audio file with the redacted information "beeped" out, set redact_pii_audio=True in the config. Once the Transcript object is returned, you can access the URL of the redacted audio file with get_redacted_audio_url, or save the redacted audio directly to disk with save_redacted_audio.
import assemblyai as aai
transcript = aai.Transcriber().transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(
redact_pii=True,
redact_pii_policies=[aai.PIIRedactionPolicy.person_name],
redact_pii_audio=True
)
)
redacted_audio_url = transcript.get_redacted_audio_url()
transcript.save_redacted_audio("redacted_audio.mp3")
Summarize the content of a transcript over time
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(auto_chapters=True)
)
for chapter in transcript.chapters:
print(f"Summary: {chapter.summary}") # A one paragraph summary of the content spoken during this timeframe
print(f"Start: {chapter.start}, End: {chapter.end}") # Timestamps (in milliseconds) of the chapter
print(f"Healine: {chapter.headline}") # A single sentence summary of the content spoken during this timeframe
print(f"Gist: {chapter.gist}") # An ultra-short summary, just a few words, of the content spoken during this timeframe
Summarize the content of a transcript
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(summarization=True)
)
print(transcript.summary)
By default, the summarization model will be informative and the summarization type will be bullets. Read more about summarization models and types here.
To change the model and/or type, pass additional parameters to the TranscriptionConfig:
config=aai.TranscriptionConfig(
summarization=True,
summary_model=aai.SummarizationModel.catchy,
summary_type=aai.SummarizationType.headline
)
Detect sensitive content in a transcript
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(content_safety=True)
)
# Get the parts of the transcript which were flagged as sensitive
for result in transcript.content_safety.results:
print(result.text) # sensitive text snippet
print(result.timestamp.start)
print(result.timestamp.end)
for label in result.labels:
print(label.label) # content safety category
print(label.confidence) # model's confidence that the text is in this category
print(label.severity) # severity of the text in relation to the category
# Get the confidence of the most common labels in relation to the entire audio file
for label, confidence in transcript.content_safety.summary.items():
print(f"{confidence * 100}% confident that the audio contains {label}")
# Get the overall severity of the most common labels in relation to the entire audio file
for label, severity_confidence in transcript.content_safety.severity_score_summary.items():
print(f"{severity_confidence.low * 100}% confident that the audio contains low-severity {label}")
print(f"{severity_confidence.medium * 100}% confident that the audio contains mid-severity {label}")
print(f"{severity_confidence.high * 100}% confident that the audio contains high-severity {label}")
Read more about the content safety categories.
By default, the content safety model will only include labels with a confidence greater than 0.5 (50%). To change this, pass content_safety_confidence (as an integer percentage between 25 and 100, inclusive) to the TranscriptionConfig:
config=aai.TranscriptionConfig(
content_safety=True,
content_safety_confidence=80, # only include labels with a confidence greater than 80%
)
Analyze the sentiment of sentences in a transcript
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(sentiment_analysis=True)
)
for sentiment_result in transcript.sentiment_analysis:
print(sentiment_result.text)
print(sentiment_result.sentiment) # POSITIVE, NEUTRAL, or NEGATIVE
print(sentiment_result.confidence)
print(f"Timestamp: {sentiment_result.start} - {sentiment_result.end}")
If speaker_labels is also enabled, then each sentiment analysis result will also include a speaker field.
# ...
config = aai.TranscriptionConfig(sentiment_analysis=True, speaker_labels=True)
# ...
for sentiment_result in transcript.sentiment_analysis:
print(sentiment_result.speaker)
Identify entities in a transcript
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(entity_detection=True)
)
for entity in transcript.entities:
print(entity.text) # e.g. "Dan Gilbert"
print(entity.entity_type) # e.g. EntityType.person
print(f"Timestamp: {entity.start} - {entity.end}")
Detect topics in a transcript (IAB Classification)
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(iab_categories=True)
)
# Get the parts of the transcript that were tagged with topics
for result in transcript.iab_categories.results:
print(result.text)
print(f"Timestamp: {result.timestamp.start} - {result.timestamp.end}")
for label in result.labels:
print(label.label) # topic
print(label.relevance) # how relevant the label is for the portion of text
# Get a summary of all topics in the transcript
for label, relevance in transcript.iab_categories.summary.items():
print(f"Audio is {relevance * 100}% relevant to {label}")
Identify important words and phrases in a transcript
import assemblyai as aai
transcriber = aai.Transcriber()
transcript = transcriber.transcribe(
"https://example.org/audio.mp3",
config=aai.TranscriptionConfig(auto_highlights=True)
)
for result in transcript.auto_highlights.results:
print(result.text) # the important phrase
print(result.rank) # relevancy of the phrase
print(result.count) # number of instances of the phrase
for timestamp in result.timestamps:
print(f"Timestamp: {timestamp.start} - {timestamp.end}")
Read more about our Real-Time service.
Stream your microphone in real-time
import assemblyai as aai
def on_open(session_opened: aai.RealtimeSessionOpened):
"This function is called when the connection has been established."
print("Session ID:", session_opened.session_id)
def on_data(transcript: aai.RealtimeTranscript):
"This function is called when a new transcript has been received."
if not transcript.text:
return
if isinstance(transcript, aai.RealtimeFinalTranscript):
print(transcript.text, end="\r\n")
else:
print(transcript.text, end="\r")
def on_error(error: aai.RealtimeError):
"This function is called when an error occurs."
print("An error occured:", error)
def on_close():
"This function is called when the connection has been closed."
print("Closing Session")
# Create the Real-Time transcriber
transcriber = aai.RealtimeTranscriber(
on_data=on_data,
on_error=on_error,
sample_rate=44_100,
on_open=on_open, # optional
on_close=on_close, # optional
)
# Start the connection
transcriber.connect()
# Open a microphone stream
microphone_stream = aai.extras.MicrophoneStream()
# Press CTRL+C to abort
transcriber.stream(microphone_stream)
transcriber.close()
Transcribe a local audio file in real-time
import assemblyai as aai
def on_data(transcript: aai.RealtimeTranscript):
"This function is called when a new transcript has been received."
if not transcript.text:
return
if isinstance(transcript, aai.RealtimeFinalTranscript):
print(transcript.text, end="\r\n")
else:
print(transcript.text, end="\r")
def on_error(error: aai.RealtimeError):
"This function is called when the connection has been closed."
print("An error occured:", error)
# Create the Real-Time transcriber
transcriber = aai.RealtimeTranscriber(
on_data=on_data,
on_error=on_error,
sample_rate=44_100,
)
# Start the connection
transcriber.connect()
# Only single-channel WAV/PCM16 files are supported for now
file_stream = aai.extras.stream_file(
filepath="audio.wav",
sample_rate=44_100,
)
transcriber.stream(file_stream)
transcriber.close()
End-of-utterance controls
transcriber = aai.RealtimeTranscriber(...)
# Manually end an utterance and immediately produce a final transcript.
transcriber.force_end_utterance()
# Configure the threshold for automatic utterance detection.
transcriber = aai.RealtimeTranscriber(
...,
end_utterance_silence_threshold=500
)
# Can be changed any time during a session.
# The valid range is between 0 and 20000.
transcriber.configure_end_utterance_silence_threshold(300)
Disable partial transcripts
# Set disable_partial_transcripts to `True`
transcriber = aai.RealtimeTranscriber(
...,
disable_partial_transcripts=True
)
Enable extra session information
# Define a callback to handle the extra session information message
def on_extra_session_information(data: aai.RealtimeSessionInformation):
"This function is called when a session information message has been received."
print(data.audio_duration_seconds)
# Configure the RealtimeTranscriber
transcriber = aai.RealtimeTranscriber(
...,
on_extra_session_information=on_extra_session_information,
)
You'll find the Settings class with all default values in types.py.
Change the default timeout and polling interval
import assemblyai as aai
# The HTTP timeout in seconds for general requests, default is 30.0
aai.settings.http_timeout = 60.0
# The polling interval in seconds for long-running requests, default is 3.0
aai.settings.polling_interval = 10.0
Visit our Playground to try out all of our Speech AI models and LeMUR for free.
When no TranscriptionConfig is passed to the Transcriber or its methods, it will use a default instance of a TranscriptionConfig.
If you would like to reuse the same TranscriptionConfig for all your transcriptions, you can set it on the Transcriber directly:
config = aai.TranscriptionConfig(punctuate=False, format_text=False)
transcriber = aai.Transcriber(config=config)
# will use the same config for all `.transcribe*(...)` operations
transcriber.transcribe("https://example.org/audio.wav")
You can override the default configuration later via the .config property of the Transcriber:
transcriber = aai.Transcriber()
# override the `Transcriber`'s config with a new config
transcriber.config = aai.TranscriptionConfig(punctuate=False, format_text=False)
If you want to override the Transcriber's configuration for a specific operation, you can do so via the config parameter of any .transcribe*(...) method:
config = aai.TranscriptionConfig(punctuate=False, format_text=False)
# set a default configuration
transcriber = aai.Transcriber(config=config)
transcriber.transcribe(
"https://example.com/audio.mp3",
# overrides the above configuration on the `Transcriber` with the following
config=aai.TranscriptionConfig(dual_channel=True, disfluencies=True)
)
Currently, the SDK provides two ways to transcribe audio files.
The synchronous approach halts the application's flow until the transcription has been completed.
The asynchronous approach allows the application to continue running while the transcription is being processed. The caller receives a concurrent.futures.Future
object which can be used to check the status of the transcription at a later time.
You can identify these two approaches by the _async suffix in the Transcriber's method names (e.g. transcribe vs transcribe_async).
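For illustration, here is a minimal sketch of the asynchronous flow (the audio URL is a placeholder):
import assemblyai as aai
transcriber = aai.Transcriber()
# transcribe_async returns a concurrent.futures.Future immediately instead of blocking
future = transcriber.transcribe_async("https://example.org/audio.mp3")
# ... the application can continue doing other work here ...
# Calling .result() blocks until the transcription has completed
transcript = future.result()
print(transcript.text)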
There are two ways of accessing the HTTP status code:
- All custom AssemblyAI error classes have a status_code attribute.
- The latest HTTP response is stored in aai.Client.get_default().last_response after every API call. This also works if no exception is thrown.
transcriber = aai.Transcriber()
# Option 1: Catch the error
try:
transcript = transcriber.submit("./example.mp3")
except aai.AssemblyAIError as e:
print(e.status_code)
# Option 2: Access the latest response through the client
client = aai.Client.get_default()
try:
transcript = transcriber.submit("./example.mp3")
except aai.AssemblyAIError:
print(client.last_response)
print(client.last_response.status_code)
By default, we poll the Transcript's status every 3 seconds. To adjust that interval:
import assemblyai as aai
aai.settings.polling_interval = 1.0
If you previously created a transcript, you can use its ID to retrieve it later.
import assemblyai as aai
transcript = aai.Transcript.get_by_id("<TRANSCRIPT_ID>")
print(transcript.id)
print(transcript.text)
You can also retrieve multiple existing transcripts and combine them into a single TranscriptGroup object. This allows you to perform operations on the transcript group as a single unit, such as querying the combined transcripts with LeMUR.
import assemblyai as aai
transcript_group = aai.TranscriptGroup.get_by_ids(["<TRANSCRIPT_ID_1>", "<TRANSCRIPT_ID_2>"])
summary = transcript_group.lemur.summarize(context="Customers asking for cars", answer_format="TLDR")
print(summary.response)
Both Transcript.get_by_id and TranscriptGroup.get_by_ids have asynchronous counterparts, Transcript.get_by_id_async and TranscriptGroup.get_by_ids_async, respectively. These functions immediately return a Future object, rather than blocking until the transcript(s) are retrieved.
See the above section on Synchronous vs Asynchronous for more information.
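As a quick sketch (the transcript ID is a placeholder):
import assemblyai as aai
# get_by_id_async returns a concurrent.futures.Future immediately
future = aai.Transcript.get_by_id_async("<TRANSCRIPT_ID>")
# ... do other work while the transcript is being retrieved ...
transcript = future.result() # blocks until the Transcript object is available
print(transcript.text)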