
I put together this Native Messaging performance test to determine, based on evidence, which programming language, JavaScript engine or runtime, or WebAssembly-compiled code is fastest at round-tripping 1 MB, which is the maximum amount of data a host can send to a client in a single message.

I think I kept all logging out of the measured functionality. I don't think I'm missing anything in the timing evaluation, either. If I am, kindly let me know.

The code runs one listed client and host pair at a time, and repeats the whole run the number of times passed to the function. The test is run from DevTools in a Web extension page, where each host has the extension's origin listed in allowed_origins in its host manifest. That's it.
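For reference, a minimal host manifest of the shape this setup assumes might look like the following (the path, description, and extension ID are placeholders, not taken from the actual test setup):

```json
{
  "name": "nm_nodejs",
  "description": "Native Messaging performance test host",
  "path": "/path/to/nm_nodejs",
  "type": "stdio",
  "allowed_origins": ["chrome-extension://<extension-id>/"]
}
```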

async function nativeMessagingPerformanceTest(i = 10) {
  const runtimes = new Map([
    ["nm_assemblyscript", 0],
    ["nm_bun", 0],
    ["nm_c", 0],
    ["nm_cpp", 0],
    ["nm_d8", 0], // Uses subprocess to read STDIN
    ["nm_deno", 0],
    ["nm_llrt", 0], // Uses subprocess to read STDIN
    ["nm_nodejs", 0],
    ["nm_python", 0],
    ["nm_qjs", 0],
    ["nm_rust", 0],
    ["nm_shermes", 0],
    ["nm_spidermonkey", 0], // Special treatment, requires additional "\r\n\r\n" from client
    ["nm_tjs", 0],
    ["nm_typescript", 0],
    ["nm_wasm", 0],
  ]);
  for (let j = 0; j < i; j++) {
    for (const [runtime] of runtimes) {
      console.log(`${runtime} run no. ${j + 1} of ${i}`);
      try {
        const { resolve, reject, promise } = Promise.withResolvers();
        const now = performance.now();
        const port = chrome.runtime.connectNative(runtime);
        port.onMessage.addListener((message) => {
          console.assert(message.length === 209715, {
            message,
            runtime,
          });
          const n = runtimes.get(runtime);
          runtimes.set(runtime, n + ((performance.now() - now) / 1000));
          port.disconnect();
          resolve();
        });
        port.onDisconnect.addListener(() => reject(chrome.runtime.lastError));
        port.postMessage(new Array(209715));
        // Handle SpiderMonkey, send "\r\n\r\n" to process full message with js
        if (runtime === "nm_spidermonkey") {
          port.postMessage("\r\n\r\n");
        }
        await promise;
      } catch (e) {
        console.log(e, runtime);
        continue;
      }
    }
    // Yield between iterations; postTask options belong in the second
    // argument, not inside the callback body.
    await scheduler.postTask(() => {}, { delay: 10 });
  }
  const sorted = [...runtimes]
    .map(([k, n]) => [k, n / i])
    .sort(([, a], [, b]) => a - b);
  console.table(sorted);
}
await nativeMessagingPerformanceTest(10);
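For anyone reproducing one of the hosts, the wire format all of the listed runtimes have to implement is length-prefixed JSON: a 32-bit message length in native byte order (little-endian on x86/ARM), followed by the UTF-8 JSON body. A minimal encode/decode pair in Node.js, as a sketch (a real host must additionally loop on stdin until the full body has arrived):

```javascript
// Native Messaging frame: 4-byte length header + UTF-8 JSON body.
function encodeMessage(value) {
  const body = Buffer.from(JSON.stringify(value), "utf8");
  const header = Buffer.alloc(4);
  header.writeUInt32LE(body.length, 0); // native byte order assumed LE here
  return Buffer.concat([header, body]);
}

function decodeMessage(frame) {
  const length = frame.readUInt32LE(0);
  // A real host must keep reading stdin until `length` bytes are buffered.
  return JSON.parse(frame.subarray(4, 4 + length).toString("utf8"));
}
```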
  • IMO your benchmark body is doing too much, esp. before final performance.now() - huge allocations, assertions, etc. Yes, it should be a static overhead, roughly the same for all runs, but it might be not. It's already difficult to benchmark something in the browser reliably, don't add more variables. What is the time scale of the operation you're trying to measure - is that seconds? 100s of milliseconds? Milliseconds or less? The faster the operation itself, the more sensitive to random fluctuations (other apps, OS scheduling, power mode change, CPU core differences, ...) is your benchmark. Commented Nov 3 at 19:59
  • @STerliakov 1. Make sure the 1 MB sent is the 1 MB received. 2. Fastest; all things being the same re other operations happening in parallel on the machine, the fastest will be known by evidence, in time. That's what the code does now. Commented Nov 4 at 5:05
  • 1. Yes, but you can do that after recording performance.now(), I believe? 2. You don't want to measure random slowdowns of your machine, another app deciding to fetch its background notifications while your bench is running, etc., so the less unrelated work happens in the benchmark body (between entry now() and exit now()), the more representative it usually is. Commented Nov 5 at 23:58
  • @STerliakov Yeah, I don't see what you're talking about. If I'm measuring random slowdowns, whatever that is, each programming language gets influenced by the same things, so it's all fair. Commented Nov 6 at 14:34
