Spatialization Demo

AndrewS
AndrewS Member

Hi all! Has anyone looked at the spatialization demo?

I'm trying to figure out where in the code it asks the browser for camera access. Ideally I want to request camera access much sooner, at the index.html stage, so the user can select their camera, but I'm struggling a bit with it.

Any advice would be greatly appreciated.

Alternatively, did anyone ever try moving this demo to React?

Answers

  • Lazer
    Lazer Dailynista

    Hey @AndrewS! I am assuming you are talking about this spatialization demo I made, so I'll answer in relation to that.

    In this demo, daily-js handles device permissions by calling getUserMedia() internally when it wants access to the user's cam/mic. So there is nothing in the demo code itself handling device access except for some very rudimentary error handling here.

    It sounds like you want a pre-join lobby of sorts where the user can select their own camera before joining the demo. For that, I suggest checking out the startCamera() call object instance method. It can be called after a call object is instantiated, but before you join the call. This lets you start the user's devices and have Daily trigger the permissions dialog in advance.

    Then, if you want additional control over which available device is used in your app UI, you can use the enumerateDevices() instance method to get information about all accessible camera and microphone devices. Once you retrieve them, you can use the setInputDevicesAsync() instance method to switch which input device is being used for a participant.
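    Putting those pieces together, a minimal pre-join sketch might look something like this (assuming daily-js is loaded as `DailyIframe`; `groupDevices()` and the dropdown wiring are illustrative, not the demo's actual code):

    ```javascript
    // Plain helper: split enumerated devices into cameras and microphones
    function groupDevices(devices) {
      return {
        cams: devices.filter((d) => d.kind === "videoinput"),
        mics: devices.filter((d) => d.kind === "audioinput"),
      };
    }

    async function preJoin(roomURL) {
      const callObject = DailyIframe.createCallObject();

      // Triggers the browser permission prompt before the call is joined
      await callObject.startCamera();

      const { devices } = await callObject.enumerateDevices();
      const { cams, mics } = groupDevices(devices);
      // ...populate your <select> dropdowns from cams and mics here...

      // When the user picks a different camera in the dropdown:
      // await callObject.setInputDevicesAsync({ videoDeviceId: selectedId });

      await callObject.join({ url: roomURL });
    }
    ```

    The key ordering is: create the call object, start devices, let the user pick, and only then join.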

    I hope that helps! Let me know if you have any other questions.

  • Ahhh, you made it! It's beyond awesome, thanks. I've really, really struggled with this problem; like, for weeks. Mainly because everything comes from the .ts files, but I'm trying to add a "hair check" to the index.html and don't know how. I just pointed to a JS script with:

      async function requestMediaAccess() {
        try {
          await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
          populateDevices();
        } catch (error) {
          console.error("Error accessing media devices:", error);
        }
      }

      function populateDevices() {
        navigator.mediaDevices.enumerateDevices().then((devices) => {
          const videoSelect = document.getElementById("cameraSelect");
          const audioSelect = document.getElementById("microphoneSelect");
          const nameInput = document.getElementById("userName");
          const previewName = document.getElementById("previewName");
          // ...populate the dropdowns from `devices` here...
        });
      }

    That type of thing, and it worked, but on entering the world it just defaults to a secondary camera, hence the original question. There's an awesome hair checker in the Virtual Class demo, but that seems to go off into another .js file entirely (plus it's in React, which this is not).

    I just can't figure out how to do this from an index.html. Wish I'd paid more attention at school!

    Thanks again for the pointers - I'll keep trying.

  • Lazer
    Lazer Dailynista
    edited April 2023

    Glad you like the demo!

    You were largely on the right track with your code; I've made a quick new branch with a draft PR in the spatialization demo repo to show how this might be completed: https://github.com/daily-demos/spatialization/pull/21/files

    There are some dependency upgrades there which you can ignore; the main files to look at are:

    • index.html, which just adds cam and mic dropdown elements.
    • room.ts, which removes some arguments (URL, ID) from the constructor and adds them to the join() method instead (since we'll now be creating the room before these arguments are available, as you'll see below)
    • index.ts - the main one, where all of the selection logic is.

    I've left comments inline in the code as well. With these changes, I construct a Daily room instance right away, before submitting the join form. Then I call startCamera() to enumerate over available devices and populate the cam and mic dropdowns. When a user selects a new device option from one of the dropdowns, this code tells Daily to use the new device by calling the setInputDevicesAsync() call object instance method.

    Does that help? (Note I have not tested this very extensively and this is just meant to be a starting point to give you an idea of a potential approach)

  • Ahhhh, you're a star. Worked a dream. Some tweaking and it's flying. For the community reading this: a slight change to room.ts lets you include the output sound too, using:

      async function switchSpeaker(deviceId: string) {
        console.log(`Switching speaker to ${deviceId}`);
        globalRoom.callObject.setOutputDevice({ outputDeviceId: deviceId });
      }

    And:

      function getSpeakerDropdown(): HTMLSelectElement {
        return <HTMLSelectElement>document.getElementById("speaker-select");
      }

    Plus a few other bits.

    Three more questions if you have a minute:

    1) When inside a room, the user moves around with a square camera view. Is there a way to make it a circle? It sounds stupid, but I've managed to tweak most Pixi things and have failed time and time again to change this one.

    2) In the demo images - it showed a red perimeter to indicate where the spatial zone is for the user. Is there a way of adding that?

    3) (Last one, promise.) Is there a way of adding moderation? Like, could a user with the owner token boot someone or take control of the broadcast?


    Don't worry if you don't have time. I've been toying with this demo for a few weeks now and it's phenomenal; I just got stuck on the last few hurdles.


    Thanks again for your time on this.

  • I've just spent the past few hours trying to get the camera to be a circle. Driving me mad. I changed user.ts to:

      // Private methods below
      private setVideoTexture(forceReset = false) {
        // If the user already has a video texture set and we didn't
        // specify a force reset, early out.
        if (this.textureType === TextureType.Video && !forceReset) {
          return;
        }
    
        // If the user has no video track, early out.
        const videoTrack = this.media.getVideoTrack();
        if (!videoTrack) return;
    
        // If we're already waiting for a video texture to be set,
        // early out.
        if (this.videoTextureAttemptPending) {
          return;
        }
    
        // If the video tag we'll be using to create this texture is
        // not yet playing, create a pending attempt.
        if (!this.media.videoPlaying) {
          console.log(
            "video not playing; will set texture when play starts",
            this.userName
          );
          this.videoTextureAttemptPending = Date.now();
          this.setDefaultTexture();
          this.media.setDelayedVideoPlayHandler(() => {
            console.log("video started playing - applying texture", this.userName);
            this.media.setDelayedVideoPlayHandler(null);
            this.videoTextureAttemptPending = null;
            this.setVideoTexture();
          });
          return;
        }
    
        this.textureType = TextureType.Video;
    
        // Create a base texture using our video tag as the
        // backing resource.
        const resource = new PIXI.VideoResource(this.media.videoTag, {
          updateFPS: 15,
        });
        const texture = new PIXI.BaseTexture(resource, {
          mipmap: MIPMAP_MODES.OFF,
        });
        texture.onError = (e) => textureError(e);
    
        // Remove the existing textureMask
        // let textureMask: PIXI.Rectangle = null;
    
        // Set our texture mask to ensure correct dimensions
        // and aspect ratio based on the size of the backing
        // video track resource.
        let x = 0;
        let y = 0;
        let size = baseSize;
        const aspect = resource.width / resource.height;
        if (aspect > 1) {
          x = resource.width / 2 - resource.height / 2;
          size = resource.height;
        } else if (aspect < 1) {
          y = resource.height / 2 - resource.width / 2;
          size = resource.width;
        } else {
          texture.setSize(baseSize, baseSize);
        }
        // textureMask = new PIXI.Rectangle(x, y, size, size);
    
        // Create a new sprite with the video texture
        const videoSprite = new PIXI.Sprite(new PIXI.Texture(texture));
    
        // Create a new circular mask using PIXI.Graphics
        const circularMask = new PIXI.Graphics();
        circularMask.beginFill(0xffffff);
        circularMask.drawCircle(size / 2, size / 2, size / 2);
        circularMask.endFill();
    
        // Apply the circular mask to the videoSprite
        videoSprite.mask = circularMask;
    
        // Add the circularMask and videoSprite to the container
        this.addChild(videoSprite);
        this.addChild(circularMask);
        // Ensure our name label is of the right size and position
        // for the new texture.
        this.tryUpdateNameGraphics();
      }
    


    Which sort of worked, sometimes, before spitting out a hundred errors on entering and leaving a room:


      Uncaught (in promise) TypeError: Cannot read properties of null (reading 'dispose')
        at mu.dispose (BaseRenderTexture.mjs:41:22)
        at mu.destroy (BaseTexture.mjs:180:10)
        at mu.destroy (BaseRenderTexture.mjs:45:11)
        at Ux.setDefaultTexture (user.ts:497:34)
        at Ux.updateZone (user.ts:202:12)
        at Zx.<anonymous> (deskZone.ts:168:14)
        at Generator.next (<anonymous>)
        at deskZone.ts:147:28
        at new Promise (<anonymous>)
        at i (deskZone.ts:147:28)

    Don't know if anyone else has ever tried this?

  • Lazer
    Lazer Dailynista

    Hey @AndrewS, thanks for getting a solid start on those circular tiles! I suspect the issue may be related to adding new children and not properly destroying them on every video texture set. I took your base code and made some changes here: https://github.com/daily-demos/spatialization/pull/21/files#diff-bd1b653621d94fd64026d675fc3160e8a7a7c8fa250292ff871730f9c9bcb76cR467

    This introduces a method to update the texture mask. If the sprite has no mask yet, it sets one once on the current sprite (without adding a new child sprite); if the mask already exists, it only scales the existing mask instead of creating a new one.

    Note that I found textures in general to be a little fiddly across browsers! I tested quickly and this worked on Mac in Chrome, Firefox, and Safari, but Firefox especially can be sensitive, so this might need more testing and refinement; you get the idea, though! Also note that the collision logic here is unchanged and doesn't account for circular users, so you'll see that when entering a zone, the borders are not precise to the circle (the collision area is still a rectangle, so a corner will trigger collision detection even if you can't see that part of the sprite).
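    The mask-reuse idea can be sketched roughly like this (the PIXI calls mirror the ones already in user.ts, but `updateVideoMask()` and the `baseSize` bookkeeping are illustrative names, not the PR's exact code):

    ```javascript
    // Pure helper: circle parameters for a square crop of the given size
    function circleMaskParams(size) {
      return { cx: size / 2, cy: size / 2, radius: size / 2 };
    }

    // Illustrative sketch: reuse one mask per sprite instead of adding a new
    // Graphics child on every texture update (PIXI assumed to be in scope).
    function updateVideoMask(sprite, size) {
      if (!sprite.circleMask) {
        const { cx, cy, radius } = circleMaskParams(size);
        const mask = new PIXI.Graphics();
        mask.beginFill(0xffffff);
        mask.drawCircle(cx, cy, radius);
        mask.endFill();
        mask.baseSize = size; // remember the size we drew the circle at
        sprite.addChild(mask);
        sprite.mask = mask;
        sprite.circleMask = mask;
        return;
      }
      // Mask already exists: just scale it to match the new size
      const factor = size / sprite.circleMask.baseSize;
      sprite.circleMask.scale.set(factor, factor);
    }
    ```

    Because the same Graphics object is reused, nothing stale is left behind to destroy when the texture resets, which is what appeared to trigger the `dispose` errors.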

    For the red perimeter in the demo images you mentioned: I actually just added that on top of a screenshot to illustrate the area for the post. You could probably add another sprite type to draw it in the world (maybe similar to how the border highlight around the desk zones is currently implemented in deskZone.ts's `createZoneMarker()`).

    For moderation, yup! You probably want to check out our meeting token guide here for that as a starting point: https://docs.daily.co/guides/privacy-and-security/meeting-tokens
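    As a rough server-side sketch of what minting an owner token could look like (the `/meeting-tokens` endpoint and `is_owner` property are from Daily's REST API; error handling is omitted, and the function names are made up for the example):

    ```javascript
    // Builds the request body for an owner meeting token
    function ownerTokenBody(roomName) {
      return { properties: { room_name: roomName, is_owner: true } };
    }

    // Server-side sketch: mint an owner token via Daily's REST API.
    // apiKey is your secret Daily API key; never expose it in the browser.
    async function createOwnerToken(roomName, apiKey) {
      const res = await fetch("https://api.daily.co/v1/meeting-tokens", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify(ownerTokenBody(roomName)),
      });
      const { token } = await res.json();
      return token; // pass this as `token` when calling callObject.join()
    }
    ```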

    Hope that helps!

  • AndrewS
    AndrewS Member

    Ah, you're brilliant. Everything is working. I've been working on this non-stop for the past few days and made some great additions, including:

    • Graphic generator
    • In-world camera-changing modal
    • Room and desk generation CMS

    One last thing I'm struggling with: could I add Daily chat to each desk zone? Like a chat just for the users within a desk zone? I've tried a million variations and failed miserably.

  • Lazer
    Lazer Dailynista

    So for desk-specific chat with _just_ Daily's APIs, you'd likely want to utilize `"app-message"` events (there's already an example of these in the demo, just not used for chat). We've got a chat implementation for Daily Prebuilt, but this demo uses a custom call object instance, so you'd need to implement the chat parts yourself. You'd basically send a participant's chat message inside an `"app-message"` to all other participants in their "desk zone" (targeting them by their session ID), and display the content in your new chat UI DOM element.
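    A very rough sketch of that idea (`sendAppMessage()` and the `"app-message"` event are real daily-js APIs; the zone-membership bookkeeping and `appendToChatUI()` are hypothetical and would be yours to implement):

    ```javascript
    // Everyone in the sender's zone except the sender themselves
    function zoneRecipients(zoneMemberIds, senderSessionId) {
      return zoneMemberIds.filter((id) => id !== senderSessionId);
    }

    // Sending side: one targeted app-message per zone member
    function sendZoneChat(callObject, zoneMemberIds, localSessionId, text) {
      for (const id of zoneRecipients(zoneMemberIds, localSessionId)) {
        callObject.sendAppMessage({ kind: "zone-chat", text }, id);
      }
    }

    // Receiving side (wire up once after creating the call object):
    // callObject.on("app-message", (ev) => {
    //   if (ev.data && ev.data.kind === "zone-chat") {
    //     appendToChatUI(ev.fromId, ev.data.text); // your chat DOM update
    //   }
    // });
    ```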

    I actually wrote a post that sort of illustrates this recently, though in the context of server-side app messages. But the accompanying demo shows the client-side usage of it as well: https://www.daily.co/blog/sending-data-to-call-participants-from-a-server-with-dailys-new-rest-api-endpoint/

    You can try out the accompanying demo here: https://app-message.netlify.app/

    Join a room with two people, then click "Broadcast from client". You can choose to broadcast to "all" or to just the other participant via a dropdown menu there (note that if choosing "all", the local participant doing the broadcasting will not see their own message).

    So you can do it in a similar way, but of course taking some text input instead of just having a text template to broadcast. Does that make sense?

  • AndrewS
    AndrewS Member

    Thanks, Lazer! Bits of it make sense, lol; other bits, not so much. Thanks for the links. I'll see if I can get my head around it!

  • Lazer
    Lazer Dailynista

    Sounds good! Just reach out if there's anything I can clarify further.