Under JavaScript's single-threaded model, requests can be processed asynchronously, but in the end every response still has to be handled on the main thread.
To decouple rendering from requests and computation, the axios-based API request layer is reworked so that all data requests are delegated to Web Workers, separating rendering from requesting.
Problems
- How many workers should exist at the same time?
Theoretically there is no upper limit on the number of workers; open as many as the actual situation calls for. Matching the CPU core count (navigator.hardwareConcurrency) is one option, though not a hard requirement.
The current request/render separation keeps four threads alive as daemons; every browser apart from IE6/7 supports at least 4 concurrent requests per domain.
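As a minimal sizing sketch (not the author's code), the pool size could be derived from the core count, falling back to the fixed 4 chosen here:

```js
// Sketch: size the pool by CPU core count when the browser reports it,
// otherwise fall back to the 4 daemon threads used in this project.
const POOL_SIZE = navigator.hardwareConcurrency || 4
```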
- How do multiple workers work together?
Multiple workers need centralized management and load balancing:
- Opened workers can be stored in an array
- Load balancing can be handled with round-robin, least-busy, or similar algorithms
- How should messages be handled?
Unlike axios, workers handle requests across thread boundaries, making the requests truly asynchronous. How do we guarantee the correct callback is triggered? (A caller-side sketch follows this list.)
- Internally each call is built on a Promise: the Promise is returned to the caller, a unique ID is assigned to the task, and the task ID is recorded together with its resolve and reject functions
- The worker packages the response data (or exception) together with the received task ID and posts it back
- The task is popped out of the task pool by its ID and the corresponding callback is executed
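A minimal caller-side sketch of this mechanism, assuming the `balance` group and `interceptors` shown later in the Implementation section (the `workerFetch` name and module layout are hypothetical):

```js
import { balance } from './balance'           // assumed module layout
import { interceptors } from './interceptors' // assumed module layout

// Wrap the request in a Promise and hand it to the worker group;
// balance stores resolve/reject under the unique task ID it assigns.
export function workerFetch(config) {
  return new Promise((resolve, reject) => {
    const option = interceptors.request(interceptors.transfer(config))
    balance.postMessage(option, resolve, reject)
  })
}
```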
- Which request technology to use inside the worker: stick with axios, or switch to fetch?
This one was a bit of a dilemma:
- axios is easy to use
- But dropping axios reduces the compressed bundle size
- For an existing project, the axios config format has to be adapted to the fetch API format
In the end we chose fetch, adding interceptors in the internal implementation to process the data.
Another consideration is that fetch is independent of XHR, so in theory performance is better, but there are drawbacks (a wrapper sketch addressing the first two follows this list):
- fetch only rejects on network errors; HTTP 400/500 responses are treated as successful requests, so error handling has to be wrapped around it
- fetch does not send cookies by default; a configuration option (credentials) needs to be added
- fetch does not support abort or timeout control natively; a timeout built from setTimeout and Promise.reject does not stop the underlying request, which keeps running in the background and wastes bandwidth
- fetch has no native way to monitor request progress, whereas XHR does
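A minimal wrapper sketch covering the first two points (the `safeFetch` name is hypothetical):

```js
// Reject on HTTP error statuses (fetch itself resolves on 400/500)
// and opt in to sending cookies, which fetch omits by default.
function safeFetch(url, options = {}) {
  return fetch(url, { credentials: 'include', ...options })
    .then(response => response.ok
      ? response
      : Promise.reject(new Error(`HTTP ${response.status}`)))
}
```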
- How is global configuration handled, such as request headers?
Through preprocessing in the interceptor (see the interceptor implementation below).
- What about cross-origin issues?
JSONP is not recommended; add a CORS header on the server, or relay through a Node layer (a proxy sketch follows).
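As one sketch of the relay idea, vue-cli's dev-server proxy can stand in during development; the `/api` prefix and target URL are assumptions:

```js
// vue.config.js — proxy /api requests through the dev server to avoid
// CORS locally; production would need an equivalent Node relay.
module.exports = {
  devServer: {
    proxy: {
      '/api': {
        target: 'http://backend.example.com',
        changeOrigin: true
      }
    }
  }
}
```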
- How does the worker code go through bundling and minification?
Via worker-loader. The configuration for the web worker is as follows:
```js
// Integrate worker-loader, based on vue-cli3's chainWebpack API
config.module
  .rule('worker')
  .test(/\.worker\.js$/)
  .use('worker-loader')
  .loader('worker-loader')
  .end()
config.module.rule('js').exclude.add(/\.worker\.js$/)
```
Configuring webpack directly, outside the chain API, should also work but has not been tested here.
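For reference, the equivalent rule in a plain webpack config would presumably be (untested, as noted):

```js
// webpack.config.js — worker-loader applied without vue-cli's chain API
module.exports = {
  module: {
    rules: [
      { test: /\.worker\.js$/, use: { loader: 'worker-loader' } }
    ]
  }
}
```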
Implementation
The overall design is divided into two parts:
- The work group on the main thread holds all active worker instances and provides message forwarding, the balancing algorithm, and the callback mechanism. It contains:
- interceptor: modeled on axios' request/response interception mechanism; provides default request and response processing and is exposed externally
```js
export const interceptors = {
  // Pre-adaptation: convert the axios-style config into a fetch-ready shape
  transfer(config) {
    return {
      url: `${window.location.origin}${process.env.VUE_APP_BASE_API}${config.url}${config.params ? '?' + param(config.params) : ''}`,
      options: {
        body: config.data ? JSON.stringify(config.data) : undefined,
        cache: config.cache,
        headers: config.headers || {},
        method: config.method || 'GET'
      }
    }
  },
  // Request filtering: attach the auth token when present
  request(config) {
    if (store.getters.token) {
      config.options.headers['Auth'] = getToken()
    }
    return config
  },
  // Response filtering
  async response(res) {
    return res
  }
}
```
- balance: the load-balancing implementation and the task pool
```js
const balance = (function() {
  let index = 0
  /** Balancing algorithms */
  const BALANCE_ALGORITHM = {
    /** Round-robin over worker indexes */
    ROUND_ROBIN() {
      const next = workers[index % workers.length]
      index++
      return next
    }
  }
  const tasks = {}
  const addTask = (option, resolve, reject) => {
    const id = uuid()
    option.id = id
    tasks[id] = { resolve, reject }
  }
  return {
    next() {
      return BALANCE_ALGORITHM.ROUND_ROBIN()
    },
    postMessage(option, resolve, reject) {
      addTask(option, resolve, reject)
      this.next().postMessage(option)
    },
    popTask(id) {
      const task = tasks[id]
      // remove the entry from the pool before handing it back
      delete tasks[id]
      return task
    }
  }
})()
```
- workers: the group of worker instances
```js
const workers = new Array(4)
for (let i = 0; i < workers.length; i++) {
  const worker = new Worker()
  worker.onmessage = receive
  workers[i] = worker
}
```
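The `receive` handler wired up above is not shown in the original; a sketch consistent with the task-pool design could be (declared before the loop above):

```js
// Sketch: route a worker reply back to its caller by popping the task
// from the pool and settling the stored Promise.
const receive = (event) => {
  const { id, success, response, message } = event.data
  const task = balance.popTask(id)
  if (!task) return
  if (success) {
    task.resolve(response)
  } else {
    task.reject(new Error(message))
  }
}
```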
- The working group's internal implementation is a queue: received tasks are consumed and answered one by one. The queue smooths out spikes of highly concurrent requests, which matters because browsers cap concurrent requests per domain (6 in Chrome, for example). Internally it is really a small state machine, consisting of the pieces below (its message entry point is sketched after them):
- queue: the pending request task queue
```js
const queue = []
```
- state: the current running state, idle by default
```js
/** Running */
const RUNNING = 'RUNNING'
/** Idle */
const IDLE = 'IDLE'
/** Current state */
let state = IDLE
```
- request: the request function
```js
/**
 * Execute a request.
 * Sends the request via fetch and posts the result back to the main
 * thread, then checks the queue when finished.
 * @param event message event from the main thread
 */
const request = (event) => {
  state = RUNNING
  const { data } = event
  fetch(data.url, data.options)
    .then(response => response.json())
    .then(json => {
      postMessage({ success: true, response: json, id: data.id })
    })
    .catch(reason => {
      postMessage({ error: true, message: reason.message, id: data.id })
    })
    .finally(() => {
      // FIFO: take the oldest queued task
      const next = queue.shift()
      if (next) {
        request(next)
      } else {
        state = IDLE
      }
    })
}
```
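The worker's message entry point is not shown in the original; under the state machine above it would presumably look like:

```js
// Sketch: run the request immediately when idle, otherwise queue it;
// request() drains the queue in its finally block.
onmessage = (event) => {
  if (state === RUNNING) {
    queue.push(event)
  } else {
    request(event)
  }
}
```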
Shortcomings
- How should timeouts and similar problems be handled?
- How should file uploads be handled?
- Could load-balancing policies beyond round-robin be added?