React Full-Hooks Online Chat Room Tutorial: Adding a Live Music Broadcast

Keywords: front end, React

Previously on:

React Full-Hooks Online Chat Room Tutorial (I): Basic Features
React Full-Hooks Online Chat Room Tutorial (II): Quoting and Topic Features


What should a chat have? Background music, tea and wine, and snacks. I can't provide the last two, but we can add a background music feature to our online chat room.

The initial idea was to hard-code a list of external music links on the front end and let them play automatically, but that wouldn't feel like shared background music, because everyone would be hearing a different song.

So can we do a "live broadcast" instead, so that everyone hears the same song at the same point?

After some research, I found two methods. The first is to push a stream with the RTMP protocol plus ffmpeg and receive it on the front end. The second is to have the back end cut the music file into segments, add header information to each segment, and send them to the front end over WebSocket; the front end can then play them directly. The method of adding header information comes from another developer's blog post, Front end playback of blob voice stream.

Both methods share a drawback: they consume a lot of bandwidth and put a heavy load on the server. The second method also has a problem I haven't figured out: how often should a segment of data be sent? If you know, please tell me in the comments.

So after some thought, I found a shortcut: keep a playlist on the server with each song's duration and URL, run a timer on the server to track the current song's progress, and broadcast the new song's information and progress over WebSocket whenever the song changes. When a client connects, it first receives the current song's information and progress, puts the URL into an <audio> element, sets currentTime, and starts playing automatically.
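For reference, here is a sketch of what the playlist config the server reads might look like. The name, url, and duration (seconds) fields match what the server and front-end code below read; the songs and URLs themselves are placeholders:

// upload/music/config.json (hypothetical contents)
[
    { "name": "Song A", "url": "http://localhost:8080/music/song-a.mp3", "duration": 213 },
    { "name": "Song B", "url": "http://localhost:8080/music/song-b.mp3", "duration": 187 }
]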

Server:

// ws.js
// Config file listing the songs to be played
/* ...... */
var radio_config = require("../upload/music/config.json");
var process_s = 0; // Progress (seconds)
var song_index = 0; // Song index
/* ...... */
// Calculate playback progress
function calcRadioProcess() {
    setInterval(() => {
        if (process_s >= radio_config[song_index].duration) {
            song_index = song_index + 1 >= radio_config.length ? 0 : song_index + 1
            process_s = 0
            console.log("Switching song", radio_config[song_index])
            bc(clientList, JSON.stringify({
                type: 'song',
                song: radio_config[song_index],
                current: 0
            }))
        } else {
            process_s += 1
        }
    }, 1000)
}
calcRadioProcess()
router.get("/getRadioProcess", (req, res) => {
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.end(JSON.stringify({
        "success": true,
        "data": {
            type: 'song',
            song: radio_config[song_index],
            current: process_s
        }
    }))
})
/* ...... */
// WebSocket connection handling
router.ws("/", function (ws, req) {
    ws.clientId = req.query.id
    clientList.push(ws)
    console.log("New client: " + req.query.id);
    console.log("Clients currently online: " + clientList.length);
    // When a client connects, tell it which song is playing and its current progress
    ws.send(JSON.stringify({
        type: 'song',
        song: radio_config[song_index],
        current: process_s
    }))
/* ...... */
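The bc (broadcast) helper and the clientList bookkeeping are elided above. A minimal sketch of what bc might look like, assuming clientList holds the raw WebSocket connections:

// Send a message to every connected client (assumed implementation of bc)
function bc(clients, msg) {
    clients.forEach((client) => {
        // readyState 1 means OPEN; skip sockets that have already closed
        if (client.readyState === 1) {
            client.send(msg)
        }
    })
}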

Front end:
Write a MyRadio component:

import { useReducer, useRef, useEffect, useImperativeHandle, useState } from "react";
import { forwardRef } from "react";

function musicListReducer(state, action) {
    switch (action.type) {
        case 'add':
            return [...state, action.data]
        case 'shift':
            return state.slice(1)
        case 'init':
            return [action.data]
        default:
            throw new Error("Unknown action.type");
    }
}
var MyRadio = forwardRef((props, ref) => {
    // Music List
    const [musicList, setMusicList] = useReducer(musicListReducer, [])
    function setMusic(data) {
        if (firstClick === false) {
            setMusicList({ type: 'init', data: data })
        } else {
            setMusicList({ type: 'add', data: data })
        }
    }
    // Automatically advance to the next song when the current one ends
    useEffect(() => {
        audioRef.current.onended = () => {
            setMusicList({ type: 'shift' })
        }
    }, [])

    // Chrome's autoplay policy prevents audio from playing automatically,
    // so the player starts muted;
    // playback begins only after the user clicks
    const [playing, setPlaying] = useState(false);
    const [firstClick, setFirstClick] = useState(false);
    const audioRef = useRef(null);
    function setCurrentTime(sec) {
        audioRef.current.currentTime = sec;
    }
    function switchPlaying() {
        if (firstClick === false) {
            setFirstClick(true)
            // On the first click, fetch the current song's progress before starting playback
            fetch("http://localhost:8080/ws/getRadioProcess").then((response) => {
                return response.json()
            }).then(json => {
                const data = json.data
                setCurrentTime(data.current)
                audioRef.current.play();
            })
        }
        setPlaying(!playing);
    }

    useImperativeHandle(ref, () => ({
        setMusic,
        setCurrentTime
    }))
    // Use emoji for the icons; no need to bother with SVGs
    return (
        <div onClick={switchPlaying}>
            <audio style={{ display: 'none' }} src={musicList[0]?.url} ref={audioRef} autoPlay muted={!playing}></audio>
            {playing ? <span>🔈</span> : <span>🔕</span>}
            <span>Now playing: {musicList[0]?.name}</span>
        </div>
    )
})

export default MyRadio
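For context, a parent component could wire the server's WebSocket messages into MyRadio through the ref like this. This is only a sketch: the ChatRoom component and the exact WebSocket URL are assumptions, but the message shape matches what the server broadcasts above:

// ChatRoom.js (hypothetical parent component)
import { useEffect, useRef } from "react";
import MyRadio from "./MyRadio";

function ChatRoom() {
    const radioRef = useRef(null);
    useEffect(() => {
        // Connect to the server route shown above; id matches req.query.id
        const ws = new WebSocket("ws://localhost:8080/ws/?id=" + Date.now());
        ws.onmessage = (e) => {
            const msg = JSON.parse(e.data);
            if (msg.type === 'song') {
                // Queue the song; MyRadio re-syncs the progress itself on the first click
                radioRef.current.setMusic(msg.song);
            }
        };
        return () => ws.close();
    }, []);
    return <MyRadio ref={radioRef} />;
}

export default ChatRoom;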

Side note

If your browser is Chrome, setting the audio element to autoplay, or calling audio.play() without user interaction, throws DOMException: play() failed because the user didn't interact with the document first. That is why the player is muted at first, and music is only heard after the user clicks. And because there is a time gap between the WebSocket connection being established and the user's first click, the progress has to be synchronized again on that first click before playback can start. Note that "pausing" afterwards only toggles muted: the audio keeps playing silently, so the progress stays in sync with the server.
