Daily AI SDK: How do I add an intro message to the patient-intake starter app example?

suvid Member

Hi!

I'm trying to create a conversational agent using modified code from the patient-intake starter app: https://github.com/daily-co/daily-ai-sdk/blob/main/examples/starter-apps/patient-intake.py

I want an intro message: once the agent and the participant have both joined, a welcome message plays, and OpenAI is also contextually aware of it (reflected in the prompt/context).

Currently, the code is like this:
@transport.event_handler("on_first_other_participant_joined")
async def on_first_other_participant_joined(transport):
    await pipeline.queue_frames([OpenAILLMContextFrame(context)])

If I manually add an intro message, e.g. await transport.say("intro message", tts), the intro message plays, but the prompt is still injected via the OpenAILLMContextFrame, and then the TTS speaks a greeting anyway.
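
Roughly, the flow I'm describing looks like this (just a sketch of my attempt; the intro text is a placeholder):

@transport.event_handler("on_first_other_participant_joined")
async def on_first_other_participant_joined(transport):
    # Plays the scripted intro...
    await transport.say("intro message", tts)
    # ...but queuing the context still runs the LLM, so the bot
    # immediately speaks a second, LLM-generated greeting as well.
    await pipeline.queue_frames([OpenAILLMContextFrame(context)])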

How can I control this behavior? Should I first inject the greeting into the context as an assistant message rather than a system message, wait for the user to say something, and only then add the system prompt? If I add both, it leads to erroneous behavior, so I'm trying to understand whether I need to change the pipeline and the OpenAI context.

Any help is super super appreciated!

Thanks.

Answers

  • suvid Member

    @chad Saw your code in the AI SDK, not sure if you can help here :)

  • suvid Member

    Never mind, figured it out. I just removed queuing the current context into the pipeline in on_first_other_participant_joined and added the intro message to the context instead, which lets the bot wait for the user to answer while the next LLM frame still keeps the greeting in context. Code in case someone else can use it:

    def add_initial_message_to_context(self):
        self.context.add_message({
            "role": "assistant",
            "content": self.intro_message,
        })

    @self.transport.event_handler("on_first_other_participant_joined")
    async def on_first_other_participant_joined(transport):
        await transport.say(self.intro_message, self.tts)
        self.add_initial_message_to_context()
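
    For anyone reusing this, here's a rough sketch of how those pieces might hang together in a wrapper class (the class name and constructor are illustrative, not exactly my code):

    class IntakeBot:
        def __init__(self, transport, tts, context, intro_message):
            self.transport = transport
            self.tts = tts
            self.context = context
            self.intro_message = intro_message

            @self.transport.event_handler("on_first_other_participant_joined")
            async def on_first_other_participant_joined(transport):
                # Speak the scripted intro directly through TTS...
                await transport.say(self.intro_message, self.tts)
                # ...and record it in the LLM context, so the next user
                # turn is answered with the greeting already in context.
                self.add_initial_message_to_context()

        def add_initial_message_to_context(self):
            self.context.add_message({
                "role": "assistant",
                "content": self.intro_message,
            })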

  • chad Community Manager, Moderator, Dailynista admin

    Sorry I missed this! I was offline for some family travel. :) What you've done is essentially what I'd recommend, especially if you want to write the intro message yourself instead of letting the LLM write it for you. By manually shoving the intro message into the context object like you're doing, you don't have to run the LLM service, which emits a bunch of TextFrames that the TTS will pick up.
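
    For contrast, the "let the LLM write it" version would look roughly like this (a sketch, not tested; the instruction text is just an example):

    @transport.event_handler("on_first_other_participant_joined")
    async def on_first_other_participant_joined(transport):
        # Nudge the LLM to produce the greeting itself...
        context.add_message({
            "role": "system",
            "content": "Greet the patient briefly and ask how you can help.",
        })
        # ...then run the LLM service; it emits TextFrames that the TTS speaks.
        await pipeline.queue_frames([OpenAILLMContextFrame(context)])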

    If I can help with anything else, let me know!

  • suvid Member

    Awesome. Thanks @chad. One more question: any ideas on how to incorporate echo/noise canceling into the pipeline? I've integrated the app with Twilio: Twilio hits my server and then dials into the room with the SIP URI. Somewhere, a ton of echo is being created, since the bot picks up (as the user) the exact things the AI intro message said and basically starts having a conversation with itself lol.

    It only happens on speakerphone, not when the phone is held to the ear.
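
    For context, the Twilio side is just a webhook that returns TwiML dialing the room's SIP URI; a rough sketch (Flask here, and get_daily_sip_uri() is a made-up placeholder for however you look up the room's SIP address):

    from flask import Flask, Response
    from twilio.twiml.voice_response import Dial, VoiceResponse

    app = Flask(__name__)

    @app.route("/incoming-call", methods=["POST"])
    def incoming_call():
        sip_uri = get_daily_sip_uri()  # placeholder: fetch/create the room's SIP URI
        response = VoiceResponse()
        dial = Dial()
        dial.sip(sip_uri)
        response.append(dial)
        return Response(str(response), mimetype="application/xml")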

  • suvid Member

    Update: I also tried Vonage; somehow it's even worse. Haven't tried Telnyx yet.