making sure my JavaScript function isn't too slow

[Sat, 16 Jan 2016 16:40:17 +0000]
As I mentioned the other day [], I'm working on some JavaScript that will, in part, remove all hyperlinks (while retaining the linked text) from ContentDM item and compound-item description metadata that's visible to the user. When I first started on the script, I was using jQuery to remove the links and retain the text, but I decided I didn't want to use jQuery because I'm always concerned about its overhead ... maybe it's just stubbornness on my part. While I think some people's worries about performance are misdirected in certain instances (it just doesn't matter whether your bi-weekly, overnight script takes 1 second or 1.5 to run), live alteration of the DOM is definitely a time to worry about performance.

So I looked for an alternative, pure-JavaScript solution and found this [] page. For a while, I used a variant of the JS code on that site and things worked fine, but then I got worried because that function seemed to be rewriting the DOM multiple times, once for each "a" tag present within my selected parent element. So I wrote my own function to collect all the "a"-tag text as well as the text of the text nodes within the parent, concatenate all that text, and then do a one-time rewrite of the parent element's inner HTML. My function should, however, leave absolute links in the metadata intact.

It seems to be working fine, but I wanted to make sure it was at least generally faster than the jQuery version; otherwise there was no point in using my own function. I just did three successive executions of a modified version of my function (with some ContentDM-specific conditions removed), the function from the page I referenced earlier, and the jQuery. I named the functions, respectively, "myJS", "theirJS", and "ourJQ", and added a function, "doFn", that executes a given function and logs the time in milliseconds it took to run. After running each function three times in succession, I recorded the run times.
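The "leave absolute links intact" condition only appears in prose above (it's commented out in the test code), so here is a DOM-free sketch of that check on its own. This is my illustration, not code from the post: "stripRelativeLinks" is a hypothetical name, the regex is a rough stand-in for real HTML parsing, and the leading-slash test mirrors the heuristic the post describes (an href starting with "/" is treated as relative and unwrapped; anything else is kept).

```javascript
// Hypothetical sketch: unwrap anchors whose href starts with "/", keep the rest.
// A regex is not a real HTML parser; this only handles simple <a href='...'>text</a> markup.
function stripRelativeLinks(html) {
    return html.replace(/<a\b[^>]*href=['"]([^'"]*)['"][^>]*>([\s\S]*?)<\/a>/gi,
        function (match, href, text) {
            // Keep only the link text for relative ("/..."-style) hrefs;
            // leave the full anchor in place otherwise.
            return href.charAt(0) === '/' ? text : match;
        });
}
```

In the real script this decision would happen per child node during the DOM walk, not on an HTML string, but the branch is the same.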
Both "myJS" and "theirJS" took 1, 0, and 1 milliseconds to run. The jQuery version took 7, 4, and 4 milliseconds. Now, I'm not going to waste time worrying about how to make the tests perfectly fair in terms of checking for conditions, etc. For me, that falls under the unnecessary optimizing I was talking about earlier. Bottom line: I'll keep my script and definitely won't be using the jQuery version. Below is the test HTML I ran.

<!DOCTYPE html>
<html>
<head>
<script src=""></script>
</head>
<body>
<p id='perftest'>
<span><a href='foo'>foo</a></span>
<span><a href='foo'>foo</a></span>
<span><a href='foo'>foo</a></span>
<span>bar</span>
</p>
</body>
<script>
var myJS = function() {
    var parent = document.getElementById('perftest');
    var children = parent.childNodes;
    var replacement = [];
    for (var c = 0; c < children.length; c++) {
        var child = children[c];
        //var tag = child.nodeName;
        //if (tag === 'A' && child.hasAttribute('href') && child.attributes.href.value.substring(0,1) !== '/') { // only for relative links.
        //    replacement.push(child.outerHTML);
        //}
        //else {
            replacement.push(child.textContent);
        //}
    }
    parent.innerHTML = replacement.join('');
    return;
};
var theirJS = function() {
    //var emptyAnchors = document.querySelectorAll('.mainmenu a:not([href])');
    var emptyAnchors = document.querySelectorAll('#perftest a');
    var content = "";
    for (var a in emptyAnchors) {
        if (emptyAnchors[a].nodeType == 1) {
            content = document.createTextNode(emptyAnchors[a].innerHTML);
            emptyAnchors[a].parentNode.insertBefore(content, emptyAnchors[a]);
            emptyAnchors[a].parentNode.removeChild(emptyAnchors[a]);
        }
    }
};
var ourJQ = function() {
    //$('.mainmenu a:not([href])').contents().unwrap();
    $('#perftest a').contents().unwrap();
};
var doFn = function(fn) {
    // Date.now() rather than getMilliseconds(), which wraps at 1000
    // and can report negative elapsed times across a second boundary.
    var begin = Date.now();
    fn();
    var end = Date.now();
    var elapsed = end - begin;
    console.log(elapsed.toString());
};
</script>
</html>
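Since the single-run timings above bump against the timer's millisecond resolution (0 ms and 1 ms results), a steadier approach is to run the function many times and take the mean. This is a sketch of that idea, not part of the original test: "timeFn" and the iteration count are my own names and choices.

```javascript
// Hypothetical harness: average the cost of fn over many runs so that
// 0-1 ms timer resolution doesn't swamp the measurement.
function timeFn(fn, iterations) {
    var begin = Date.now();
    for (var i = 0; i < iterations; i++) {
        fn();
    }
    // Mean milliseconds per call.
    return (Date.now() - begin) / iterations;
}

// Example usage with a throwaway workload:
var meanMs = timeFn(function () {
    var s = 0;
    for (var i = 0; i < 1000; i++) { s += i; }
}, 100);
console.log(meanMs.toString());
```

For the DOM-rewriting functions above this would need a fresh copy of the test markup per iteration (the anchors are gone after the first run), so a single timed run per page load, repeated across reloads as the post does, is a reasonable compromise.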