Note On <High Performance JavaScript> - 01

Keywords: JavaScript, Firefox







Chapter 4: Document Object Model (DOM) operations


Generally speaking, the DOM implementation and the JavaScript implementation in a browser are two separate modules. JavaScript is an independent scripting language whose own specification does not include the DOM, so the JavaScript engine is a standalone component; the DOM is implemented by what is usually called the rendering engine. Because the two are independent of each other, every DOM-related operation causes communication between the two modules, which reduces efficiency.


First, the DOM-related operations that affect efficiency are:

  • accessing and modifying DOM elements;
  • modifying the style of DOM elements, causing the interface to be re-rendered;
  • handling user interaction through DOM events.

====================================================================


Access and Modification of DOM Elements


The first question is: is it better to insert new HTML elements through the (historically non-standard) innerHTML property, or through DOM methods such as document.createElement() and document.createTextNode()?


The conclusion: generally speaking, innerHTML is more efficient, but that result was measured only on older browsers. Newer browsers keep optimizing the DOM methods, so on recent versions of Chrome and Firefox the DOM methods are faster. Note that the book was published in 2010, and this note was written in 2017. In short, the author concluded that the difference is not very big, so it does not deserve too much attention. My feeling is that the DOM methods should be the first choice now.
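As a rough sketch of the two approaches being compared (browser-only; the table id 'dataTable' and the helper names are made up for illustration, not taken from the book):

```javascript
// innerHTML approach: build one big markup string, assign it once.
function buildRowsMarkup(n) {
	var html = '';
	for (var i = 0; i < n; i++) {
		html += '<tr><td>Row ' + i + '</td></tr>';
	}
	return html;
}

// Usage in a browser:
// document.getElementById('dataTable').innerHTML = buildRowsMarkup(200);

// DOM-method approach: create and append each node explicitly.
function appendRows(table, n) {
	for (var i = 0; i < n; i++) {
		var tr = document.createElement('tr');
		var td = document.createElement('td');
		td.appendChild(document.createTextNode('Row ' + i));
		tr.appendChild(td);
		table.appendChild(tr);
	}
}
```

Either way, note that both versions touch the DOM only through the parent element; the string-building half of the innerHTML approach is pure JavaScript and never crosses into the rendering engine.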



The second question is: is copying existing HTML nodes faster than creating all new nodes directly?


Conclusion: the comparison was set up as follows: first, create 200 new td nodes with createElement(); second, create one td node with createElement() and then copy the remaining 199 with element.cloneNode(). The result is that element.cloneNode() is a little faster, but the gap is not significant.
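A minimal sketch of that comparison might look like this (browser-only; the parent element and function names are hypothetical):

```javascript
// Variant 1: create all 200 td nodes from scratch.
function createAllNew(parent, n) {
	for (var i = 0; i < n; i++) {
		parent.appendChild(document.createElement('td'));
	}
}

// Variant 2: create one td, then clone it for the remaining n - 1.
function createByCloning(parent, n) {
	var template = document.createElement('td');
	parent.appendChild(template);
	for (var i = 1; i < n; i++) {
		// false = shallow clone: the node itself, without descendants
		parent.appendChild(template.cloneNode(false));
	}
}
```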



The third question concerns HTML collections. A set of HTML elements can be obtained by the following methods:

  • document.getElementsByName()
  • document.getElementsByClassName()
  • document.getElementsByTagName()


There are also the following properties:

  • document.images
  • document.links
  • document.forms
  • document.forms[0].elements


The following code causes an infinite loop:

// an accidentally infinite loop
var alldivs = document.getElementsByTagName('div');
for (var i = 0; i < alldivs.length; i++) {
	document.body.appendChild(document.createElement('div'));
}


The reason is that an HTML collection is always updated dynamically; in fact, every access to alldivs.length results in a fresh query against the DOM. Inserting a new div on each pass therefore causes alldivs to be recalculated, its length keeps growing, and the loop runs forever. Even ordinary JavaScript arrays have this problem if you append to them while iterating over their length. This is also mentioned in <Effective JavaScript>, Item 49.
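The array case can be seen in plain JavaScript, with no DOM at all: appending to an array while looping against its length produces the same runaway loop (the extra counter below is only a safety guard so the demo terminates):

```javascript
var items = [0, 1, 2];
var iterations = 0;
for (var i = 0; i < items.length && iterations < 10; i++) {
	items.push(i);   // length grows on every pass...
	iterations++;    // ...so only the guard stops the loop
}
// iterations reaches the guard limit of 10: the loop never
// terminated on its own, because i can never catch up to length.
```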


The conclusion is that saving the query result into an array first, and then traversing that array, is much faster. The demo code is as follows:

function toArray(coll) {
	for (var i = 0, a = [], len = coll.length; i < len; i++) {
		a[i] = coll[i];
	}
	return a;
}

var coll = document.getElementsByTagName('div');
var arr = toArray(coll);

//slower
function loopCollection() {
	for (var count = 0; count < coll.length; count++) {
		/* do nothing */
	}
}
// faster
function loopCopiedArray() {
	for (var count = 0; count < arr.length; count++) {
		/* do nothing */
	}
}

The author's general suggestion is to cache the length:

function loopCacheLengthCollection() {
	var coll = document.getElementsByTagName('div'),
	len = coll.length;
	for (var count = 0; count < len; count++) {
		/* do nothing */
	}
}


This holds unless you have a large collection to access; in that case you should also take into account the extra overhead of copying the collection's elements into an array. If DOM elements are accessed inside the loop (which is also common), the author compares the following three approaches, each faster than the last.

// slow
function collectionGlobal() {
	var coll = document.getElementsByTagName('div'),
	len = coll.length,
	name = '';
	for (var count = 0; count < len; count++) {
		name = document.getElementsByTagName('div')[count].nodeName;
		name = document.getElementsByTagName('div')[count].nodeType;
		name = document.getElementsByTagName('div')[count].tagName;
	}
	return name;
};

// faster
function collectionLocal() {
	var coll = document.getElementsByTagName('div'),
	len = coll.length,
	name = '';
	for (var count = 0; count < len; count++) {
		name = coll[count].nodeName;
		name = coll[count].nodeType;
		name = coll[count].tagName;
	}
	return name;
};

// fastest
function collectionNodesLocal() {
	var coll = document.getElementsByTagName('div'),
	len = coll.length,
	name = '',
	el = null;
	for (var count = 0; count < len; count++) {
		el = coll[count];
		name = el.nodeName;
		name = el.nodeType;
		name = el.tagName;
	}
	return name;
};


To sum up: caching helps, especially for the element collections obtained through the methods and properties listed above, such as coll in the code. Every access to such a collection actually triggers a re-query of the DOM, so caching it is very helpful.
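The re-query cost can be simulated in plain JavaScript, without a DOM, by an object whose length getter counts how often it is evaluated. This is only a rough stand-in for a live HTMLCollection, not a real one, but it shows why caching length changes the access pattern:

```javascript
var queryCount = 0;
var liveColl = {
	items: ['a', 'b', 'c', 'd'],
	get length() {
		queryCount++;            // stands in for the DOM re-query
		return this.items.length;
	}
};

// Uncached: length is re-evaluated on every loop check.
for (var i = 0; i < liveColl.length; i++) { /* do nothing */ }
var uncachedQueries = queryCount;   // 4 passes + 1 failing check = 5

// Cached: length is read once, then the loop runs off a plain number.
queryCount = 0;
for (var j = 0, len = liveColl.length; j < len; j++) { /* do nothing */ }
var cachedQueries = queryCount;     // 1
```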


====================================================================


Search for DOM elements





Posted by MichaelMackey on Mon, 17 Dec 2018 01:30:04 -0800