Real-time communication between user and GPT model through server

Hello, I'm currently developing a web app for AI-driven user interviews. Our backend is built with Node.js, and we've integrated WebRTC for communication between the user and a GPT model. We're focused on transmitting user audio to the server (server-to-model) and receiving responses as audio tracks or WAV files for user feedback. During my research, I discovered this site and saw some LLM demos. I'm interested in understanding how this could be applied or integrated into our scenario. Thank you.



  • chad (Community Manager, Moderator, Dailynista admin)

    Well, you can't quite do this with Node yet (although I'm pushing for it internally), but you can absolutely do it if you're willing to use a little bit of Python:

    The Storybot code was actually spun out of a project for AI-powered interviews. (The code got a lot more complex when we added image generation!)

  • I initially thought that, even though I'm already proficient in Python, I would find it challenging to use WebSockets and WebRTC in Python. Therefore, we decided to proceed with Node.js, since we are in the initial stages of development. However, we can switch to Python. I just need a library or some kind of service to help.
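
  • Whichever transport you end up with (Daily's Python SDK, plain WebSockets, or something else), the server-side audio handling tends to reduce to two small pieces: splitting raw PCM into short frames for real-time streaming, and wrapping the model's PCM reply in a WAV container for playback in the browser. Here is a minimal stdlib-only sketch of those two pieces; the 16 kHz mono 16-bit format and the 20 ms frame size are assumptions for illustration, not anything specified in this thread:

    ```python
    import io
    import wave

    # Assumed audio format: 16 kHz, mono, 16-bit signed PCM.
    SAMPLE_RATE = 16000
    CHANNELS = 1
    SAMPLE_WIDTH = 2  # bytes per sample

    def chunk_pcm(pcm: bytes, chunk_ms: int = 20) -> list:
        """Split raw PCM bytes into ~20 ms frames, a typical size for
        real-time streaming; the last frame may be shorter."""
        frame_bytes = SAMPLE_RATE * CHANNELS * SAMPLE_WIDTH * chunk_ms // 1000
        return [pcm[i:i + frame_bytes] for i in range(0, len(pcm), frame_bytes)]

    def pcm_to_wav(pcm: bytes) -> bytes:
        """Wrap raw PCM from the model in a WAV container so the
        front end can play it back directly."""
        buf = io.BytesIO()
        with wave.open(buf, "wb") as wf:
            wf.setnchannels(CHANNELS)
            wf.setsampwidth(SAMPLE_WIDTH)
            wf.setframerate(SAMPLE_RATE)
            wf.writeframes(pcm)
        return buf.getvalue()
    ```

    You would send the frames from `chunk_pcm` over your transport as they arrive from the client, and hand the bytes from `pcm_to_wav` back to the browser as the model's spoken reply.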