I put together this Native Messaging performance test to determine, based on evidence, which programming language, JavaScript engine or runtime, and WebAssembly-compiled code is fastest at round-tripping 1 MB, which is the maximum amount of data a host can send to a client in one message.
I think I kept all logging out of the functionality. I don't think I'm missing anything in the timing evaluation, either. If I am, kindly let me know.
The code runs each listed client and host one at a time, and repeats the run based on the number passed to the function. The test is run in DevTools in a Web extension page, where each host lists the extension in allowed_origins in its host manifest. That's it.
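For reference, each host manifest follows Chrome's standard Native Messaging format. A minimal sketch for one of the hosts below; the name matches a key in the Map, while the description, path, and extension ID are hypothetical placeholders:

{
  "name": "nm_nodejs",
  "description": "Native Messaging performance test host",
  "path": "/path/to/nm_nodejs_host",
  "type": "stdio",
  "allowed_origins": ["chrome-extension://<extension-id>/"]
}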
async function nativeMessagingPerformanceTest(i = 10) {
  const runtimes = new Map([
    ["nm_assemblyscript", 0],
    ["nm_bun", 0],
    ["nm_c", 0],
    ["nm_cpp", 0],
    ["nm_d8", 0], // Uses subprocess to read STDIN
    ["nm_deno", 0],
    ["nm_llrt", 0], // Uses subprocess to read STDIN
    ["nm_nodejs", 0],
    ["nm_python", 0],
    ["nm_qjs", 0],
    ["nm_rust", 0],
    ["nm_shermes", 0],
    ["nm_spidermonkey", 0], // Special treatment, requires additional "\r\n\r\n" from client
    ["nm_tjs", 0],
    ["nm_typescript", 0],
    ["nm_wasm", 0],
  ]);
  for (let j = 0; j < i; j++) {
    for (const [runtime] of runtimes) {
      console.log(`${runtime} run no. ${j + 1} of ${i}`);
      try {
        const { resolve, reject, promise } = Promise.withResolvers();
        const now = performance.now();
        const port = chrome.runtime.connectNative(runtime);
        port.onMessage.addListener((message) => {
          console.assert(message.length === 209715, {
            message,
            runtime,
          });
          const n = runtimes.get(runtime);
          runtimes.set(runtime, n + (performance.now() - now) / 1000);
          port.disconnect();
          resolve();
        });
        port.onDisconnect.addListener(() => reject(chrome.runtime.lastError));
        port.postMessage(new Array(209715));
        // Handle SpiderMonkey, send "\r\n\r\n" to process full message with js
        if (runtime === "nm_spidermonkey") {
          port.postMessage("\r\n\r\n");
        }
        await promise;
      } catch (e) {
        console.log(e, runtime);
        continue;
      }
    }
    // Yield between repetitions; delay is an option passed to postTask,
    // not a statement inside the callback body
    await scheduler.postTask(() => {}, { delay: 10 });
  }
  const sorted = [...runtimes]
    .map(([k, n]) => [k, n / i])
    .sort(([, a], [, b]) => a - b);
  console.table(sorted);
}
await nativeMessagingPerformanceTest(10);
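The hosts themselves are outside the scope of this question, but they all speak the same wire protocol, which Chrome fixes: each message is a 32-bit message length in native byte order, followed by that many bytes of UTF-8 JSON. A minimal echo host sketch in Node.js (hypothetical, not one of the hosts under test; assumes a little-endian machine):

#!/usr/bin/env node
// Minimal Native Messaging echo host sketch. The wire format is a 32-bit
// length prefix in native byte order (little-endian assumed here), then
// that many bytes of UTF-8 JSON.
let buffer = Buffer.alloc(0);
process.stdin.on("data", (data) => {
  buffer = Buffer.concat([buffer, data]);
  // Drain every complete length-prefixed message currently in the buffer
  while (buffer.length >= 4) {
    const length = buffer.readUInt32LE(0);
    if (buffer.length < 4 + length) break;
    const body = buffer.subarray(4, 4 + length);
    buffer = buffer.subarray(4 + length);
    // Echo the parsed message back with its own length prefix
    const json = Buffer.from(JSON.stringify(JSON.parse(body.toString("utf8"))));
    const header = Buffer.alloc(4);
    header.writeUInt32LE(json.length, 0);
    process.stdout.write(Buffer.concat([header, json]));
  }
});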
Comments on the question:

"performance.now() - huge allocations, assertions, etc. Yes, it should be a static overhead, roughly the same for all runs, but it might not be. It's already difficult to benchmark something in the browser reliably; don't add more variables. What is the time scale of the operation you're trying to measure: is that seconds? 100s of milliseconds? Milliseconds or less? The faster the operation itself, the more sensitive your benchmark is to random fluctuations (other apps, OS scheduling, power mode changes, CPU core differences, ...)."

"... performance.now(), I believe? 2. You don't want to measure random slowdowns of your machine, another app deciding to fetch its background notifications while your bench is running, etc., so the less unrelated work happens in the benchmark body (between the entry now() and the exit now()), the more representative it usually is."
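Taking that advice, one possible tightening of the timed region (hypothetical, not from the question; it slots into the per-runtime loop above): allocate the payload once outside the benchmark body, and capture the exit timestamp before the assertion and bookkeeping run.

// Hypothetical restructuring per the comments: keep allocations and
// assertions out of the region between the entry now() and the exit now()
const payload = new Array(209715); // allocated once, outside the timed body
// ... inside the per-runtime loop of nativeMessagingPerformanceTest():
const now = performance.now();
const port = chrome.runtime.connectNative(runtime);
port.onMessage.addListener((message) => {
  const elapsed = performance.now() - now; // exit timestamp captured first
  console.assert(message.length === 209715, { runtime }); // after timing
  runtimes.set(runtime, runtimes.get(runtime) + elapsed / 1000);
  port.disconnect();
  resolve();
});
port.onDisconnect.addListener(() => reject(chrome.runtime.lastError));
port.postMessage(payload); // reuse the pre-built payload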