[Weekly Update] Listen, Talk, and Contribute

🌊 Hello developers!
This week, I've got three things to highlight:
- Listen to a new podcast episode about AI & Daily
- Try talking to an LLM
- Contribute to the community forum
🎧 Listen
Last week, @kwindla appeared on the AI Chat podcast with Jaeden Schafer to talk about how Daily is helping developers build video applications powered by AI.
💬 Talk
Have you tried our Talk to an LLM demo yet?
- @anniesexton shows you how to use the demo in this video
- Click here to try the demo for yourself
- Then check out the blog post that explains the underlying technology
💡 Contribute
We recently launched a Top Community Contributors leaderboard in the forum.
Okay, that's it for now! I'll see you in the forum.
Comments
-
I just tried "Talk to an LLM". It was really cool! This is a very interesting development that could open up new ways of building voice applications going forward. After watching the demo, I read through Kwindla Kramer's blog post "How to talk to an LLM (with voice)". It was very insightful, and I recommend it to any developer who is curious about how all these pieces work together.
The only snag for me was that the demo and blog used Python as the language of choice (I can understand Python is winning the language war because of AI). Nonetheless, on second thought, I realized that nearly 95% of the work could be done in JavaScript (see "steps common to every speech-to-speech AI app" in the blog). Steps 1 to 4 could easily be done in JavaScript; the missing piece is integrating the output of TTS (step 4) into Daily WebRTC video (step 5).
Is there any plan to complete the cycle in the Daily JavaScript SDK (Daily React), similar to the integration you did with Deepgram that took care of the STT portion (step 2)?
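To illustrate, steps 1 to 4 from the blog post might be wired together in JavaScript roughly like this. This is a minimal sketch: stt, llm, and tts are hypothetical placeholders for whatever speech-to-text, LLM, and text-to-speech services you choose, not actual Daily SDK calls.

```javascript
// Hypothetical orchestration of steps 1-4. Each service is injected,
// so the pipeline itself stays framework-agnostic.
async function speechToSpeechTurn(audioChunk, { stt, llm, tts }) {
  const transcript = await stt(audioChunk); // step 2: speech-to-text
  const reply = await llm(transcript);      // step 3: LLM generates a response
  return await tts(reply);                  // step 4: text-to-speech audio
}
```

Step 1 (capturing the user's audio) produces the audioChunk passed in; step 5 would take the returned audio and feed it into the Daily call.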
Thanks,
Jonathan Ekwempu
1 -
Hey Jonathan! That's a solid idea, and one I've been advocating for internally in a slightly different form. :)
You could probably do what you're talking about today by sending that raw audio data to an AudioContext() and using setInputDevicesAsync() to set the audio context's output as your mic device.
The problem with that approach is that it still requires some browser APIs to work, meaning you're relying on some participant in the call to do all the AI work. Maybe it's my old-fashioned background, but I prefer being able to do a lot of this stuff from some kind of server(less) process. That's what's nice about the daily-python approach; it's totally separate from the other participants in the call. My hope is that we'll have a similar daily-node library in the near future that will allow for similar server-side AI, but using JS instead of Python.
0 -
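A rough browser-side sketch of the AudioContext approach Chad describes above, assuming daily-js and the Web Audio API are available. pipeTtsAudioIntoCall is a hypothetical helper name, not part of any SDK.

```javascript
// Browser-only sketch: decode TTS audio into an AudioContext, then
// hand the context's output track to Daily as the "microphone".
async function pipeTtsAudioIntoCall(callObject, ttsArrayBuffer) {
  const ctx = new AudioContext();
  // A destination node whose output is a MediaStream we can hand to Daily.
  const destination = ctx.createMediaStreamDestination();

  // Decode the TTS audio and play it into the destination node.
  const buffer = await ctx.decodeAudioData(ttsArrayBuffer);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(destination);
  source.start();

  // Use the audio context's output track as the call's mic input.
  await callObject.setInputDevicesAsync({
    audioSource: destination.stream.getAudioTracks()[0],
  });
}
```

As Chad notes, this only works in a browser tab joined to the call, which is why a server-side approach like daily-python is more appealing.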
Thanks, Chad, for your response and suggestion. I agree with you that the server(less) process approach using daily-python is the best one. I can't wait to see it implemented in JavaScript.
Hmm! It's like AI will force us to learn how to program in Python.
Thanks, again.
Jonathan
0 -
Is this banner with the countdown native to Daily? If so, how can I implement it? It would save so much time.
0 -
Hi @reeceatkinson, is it the countdown specifically you're referring to? That is the countdown shown in Daily meetings where a time limit has been set for the meeting to expire. The countdown shows users that the meeting is nearing its set exp time, and the meeting will shut down after that.
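For reference, a room's exp property is a Unix timestamp (in seconds) set when the room is created. A minimal sketch using Daily's REST API, assuming a DAILY_API_KEY environment variable; createExpiringRoom and expInSeconds are hypothetical helper names.

```javascript
// Compute a Unix timestamp (seconds) some minutes from now, which is
// the format Daily's `exp` room property expects.
function expInSeconds(minutesFromNow, nowMs = Date.now()) {
  return Math.floor(nowMs / 1000) + minutesFromNow * 60;
}

// Create a room that expires after the given number of minutes.
async function createExpiringRoom(minutes) {
  const res = await fetch("https://api.daily.co/v1/rooms", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.DAILY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ properties: { exp: expInSeconds(minutes) } }),
  });
  return res.json();
}
```

Once the room's exp is set, the countdown banner appears automatically in Daily Prebuilt as the expiry approaches.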
0 -
@Tamara got it, thanks!
0 -
You're welcome @reeceatkinson !
0